Cybercrime has become a significant problem in Asia, and artificial intelligence (AI) threatens to make it worse. The region has become a global epicenter of cyber scams, where high-tech fraud meets human trafficking. In countries like Cambodia and Myanmar, criminal syndicates run industrial-scale “pig butchering” operations—scam centers staffed by trafficked workers forced to con victims in wealthier markets. The scale is staggering, with global losses from these schemes estimated at $37 billion.
The rise of cybercrime in the region is already shaping politics and policy. Thailand has reported a drop in Chinese visitors this year after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam compound; Bangkok is now struggling to convince tourists it’s safe to visit. And Singapore just passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.
The region also has dynamics that make such scams easier to pull off. It is a “mobile-first market”: popular mobile messaging platforms give scammers a direct line to victims. AI is also helping scammers overcome Asia’s linguistic diversity. Machine translation, while a “phenomenal use case for AI,” also makes it “easier for people to be baited into clicking the wrong links or approving something.”
Nation-states are also getting involved. There are allegations that North Korea is using fake employees at major tech companies to gather intelligence and get much needed cash into the isolated country.
One of the most alarming trends is the rise of fake IT worker scams, particularly those originating from North Korea. These scams involve fraudsters using stolen or fake identities to apply for jobs, often in engineering and software development roles. The scammers use AI to create deepfake videos and other sophisticated tools to deceive employers. This has cost American businesses at least $88 million over six years.
The scammers often use their insider access to steal proprietary source code and other sensitive data, then extort their employers by threatening to leak corporate data unless a ransom is paid. As US-based companies grow more aware of the fake IT worker problem, the fraudulent job seekers are increasingly targeting European employers as well. Nearly all executives who have spoken about this issue have seen a flood of these applicants applying for open positions, most of them in engineering and software development, and all of them remote.
The use of AI in these scams is particularly concerning because it makes the fraud more convincing and harder to detect. For example, a company that provides identity verification services has seen a significant increase in fake candidates applying for open jobs. The company noted a number of oddities in the applications, including shallow LinkedIn profiles paired with beefy resumes, new-ish email addresses, phone numbers that didn’t match claimed geographic locations, and educational backgrounds that didn’t check out.
The fraudsters are also using AI to create more convincing deepfake videos. In one instance, a security company that uses AI to find vulnerabilities in code was nearly fooled by a deepfake video. The company’s co-founder noted that if the scammers almost fooled a cybersecurity expert, they have certainly fooled others.
The use of AI in cybercrime is not limited to fake IT worker scams. Cyber-scams originating in Southeast Asia have caused significant financial losses in India, including investment, trading, digital arrest, and dating scams. The use of AI in these scams allows fraudsters to create more convincing and harder-to-detect scams, making it difficult for businesses to protect themselves.
The rise of AI in cybercrime is a growing concern for businesses and governments alike. As its use continues to grow, businesses need to be aware of the risks and take steps to protect themselves. This includes training employees to recognize the signs of a scam, using AI to flag behavioral patterns indicative of scams, and adopting a unified, AI-driven approach to data security.
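Flagging “behavioral patterns indicative of scams” could take many forms; the article names no specific system. As one minimal sketch, a defender might scan incoming messages for combinations of known scam signals—urgency, investment pitches, and pressure to move off-platform. The pattern list and `flag_message` function here are invented for illustration; a production system would rely on trained models rather than keyword rules:

```python
import re

# Hypothetical pattern rules for common scam signals (illustrative only).
SCAM_PATTERNS = {
    "urgency": re.compile(r"\b(act now|limited time|last chance|urgent)\b", re.I),
    "investment pitch": re.compile(r"\b(guaranteed returns?|double your|crypto investment)\b", re.I),
    "move off-platform": re.compile(r"\b(whatsapp|telegram|add me on)\b", re.I),
    "payment pressure": re.compile(r"\b(gift cards?|wire transfer|send funds)\b", re.I),
}

def flag_message(text: str) -> list[str]:
    """Return the names of scam patterns matched in a message."""
    return [name for name, pattern in SCAM_PATTERNS.items() if pattern.search(text)]

msg = "Urgent: guaranteed returns on this crypto investment, add me on Telegram!"
print(flag_message(msg))  # → ['urgency', 'investment pitch', 'move off-platform']
```

Even a crude screen like this illustrates the defensive idea: no single phrase proves a scam, but messages matching several patterns at once are worth escalating for review.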