DANANG, Vietnam — Artificial intelligence (AI) is the most powerful and accessible technology to emerge since the personal computer. That same power and accessibility are now being turned to cybercrime through tools built by programmers hidden deep in the dark web for malicious, unethical, or unauthorized purposes.
The rise of this “technology of cyber-evil” is fueling more sophisticated attacks across the globe and marks a significant shift in the cybersecurity landscape. These systems operate outside standard safety, compliance, and governance controls, enabling abuses such as fraud, manipulation, cyberattacks, and data misuse.
Dark AI refers to “AI platforms designed from the onset for cybercrime,” Sergey Lozhkin, head of the Global Research and Analysis Team (GReAT) at Kaspersky, told The Manila Times in a one-on-one interview. “There is no good or bad AI. AI itself is not inherently discriminatory; it is simply a tool that follows orders.”
Speaking at Kaspersky’s Cybersecurity Weekend in Da Nang, Vietnam, Lozhkin said that using mainstream large language models (LLMs) such as ChatGPT or Gemini for cybercrime does not constitute Dark AI. At most, these platforms can speed up tasks like drafting emails or generating photos and misleading videos; their built-in safeguards prevent them from producing code for cybercrime.
The core of Dark AI lies instead in the unregulated deployment of purpose-built LLMs and other machine learning models. Peddled on the dark web, these models are intentionally designed to operate outside standard safety controls.
The most common form of Dark AI is the so-called black hat GPT. These are specialized AI models, such as WormGPT, DarkBard, and FraudGPT, that cybercriminals use to automate tasks that were once slow and manual: generating complex “polymorphic” malware that constantly mutates its code to evade detection, crafting convincing and personalized phishing emails at scale, and creating hyper-realistic deepfakes to manipulate people and bypass security protocols.
Beyond these criminal applications, Lozhkin said Kaspersky experts are seeing an even more worrying trend: nation-state actors leveraging LLMs in their campaigns. According to an OpenAI report, malicious groups have been using LLMs to create fake digital identities and generate multilingual content in real time, allowing them to deceive victims, bypass traditional security filters, and run influence and espionage campaigns at a scale that would be impossible to manage manually.
The OpenAI report identified several advanced persistent threat (APT) groups, while Google observed more than 57 distinct threat actors with ties to Russia, China, North Korea, and Iran using AI to bolster their cyber operations. Russia’s APT28 (Fancy Bear) group, for instance, used LLMs for reconnaissance on complex topics such as satellite communication protocols and radar imaging technologies.
North Korean hackers have used generative AI to create fake résumés and cover letters and to run real-time deepfake videos in remote job interviews, allowing them to infiltrate companies as insider threats; they have also used it to fund illicit programs through cryptocurrency theft. Iran’s Charming Kitten group, meanwhile, leveraged LLMs to polish its phishing emails and craft more effective spear-phishing campaigns.
A more “commercial” application of the technology played out in a boardroom Zoom meeting at a European corporation, where Dark AI-generated replicas of employees and executives were made to “sit” in on the call. They asked questions, made comments, and responded to queries in real time. The meeting was so convincing that it led to hiring approvals.
As Dark AI becomes more accessible and capable, the cybersecurity community is locked in a technological arms race, Lozhkin said. The challenge is no longer just detecting known threats, but also anticipating and neutralizing threats created by adaptive, self-improving systems.
“The future of digital security will depend on our ability to create defenses that are as intelligent and adaptable as the threats we face,” Lozhkin said, adding that AI will be the best defense against Dark AI.
Lozhkin closed with a warning: “It is important that organizations and individuals globally strengthen their cybersecurity solutions, invest in AI-powered threat detection, and continually update their knowledge of how these technologies can be exploited.”