Artificial intelligence that generates malicious code lets even novice hackers launch attacks with ease

When a user types "create ransomware code" into an interactive artificial intelligence (AI) chat window that looks much like ChatGPT, hundreds of lines of malware code appear in an instant. The AI's output even honors the user's detailed request to "use a strong encryption method and leave a warning message for computer users infected by the ransomware." This is Xanthorox AI in action, an AI model specialized for cyberattacks that was released on Telegram and the dark web earlier this year.
Xanthorox AI supports a range of AI capabilities that automate cyberattacks, from producing malware (malicious software) to generating code, analyzing data, and analyzing images such as screenshots. One developer who claimed to have built ransomware with Xanthorox AI emphasized, "AI wrote all the code and I didn't modify a single line."
Generative AI technology is thus being turned to cyberattacks, with AI effectively assisting hackers. As generative AI's ability to write prose and code has improved remarkably, even beginners without professional hacking knowledge can easily produce malicious code with it.
AhnLab warned in a report late last year, "Attacks using AI include malicious code production, phishing attacks, deepfakes, and vulnerability analysis and hacking, and in sophistication and scale they have reached a far more dangerous level than before."
Take, for example, the phishing texts and emails many people encounter daily. Hackers writing these by hand often left telltale mistakes, but phishing emails written by generative AI have become difficult to distinguish from legitimate business mail.
There are also cases of attackers using image-generating AI to create social media (SNS) accounts for fictitious people in order to approach a target, or deploying deepfake videos trained on a victim's voice.
Early last year, attackers targeting the Hong Kong office of a global financial company staged a fake video conference that reproduced the face and voice of its chief financial officer, tricking staff into transferring about 34.2 billion won.
Echoing "vibe coding," in which developers write code with generative AI, cyber attackers now speak of "vibe hacking" for their own use of generative AI. Mainstream generative AI services such as ChatGPT are hard to abuse this way because they have built-in safeguards, but models specialized for hacking, such as Xanthorox AI, are distributed through the dark web.
"These tools have a user interface (UI) that mimics the ChatGPT interface, so even users with little technical capability can easily generate malicious content," said the Threat Intelligence Center of S2W, a South Korean security company. "Various attack components, such as phishing emails, ransom notes (the messages ransomware leaves after encrypting files), and malicious scripts, are generated automatically from prompts."
The security industry is also actively using AI to counter AI-equipped attackers, analyzing breaches and monitoring phishing emails in real time. S2W, for example, has developed 'DarkBERT,' a dark web-specific AI model, to detect crimes tied to data leaks circulating on the dark web and to respond quickly by scoring the level of threat.