UK Intelligence Fears AI Will Fuel Ransomware, Exacerbate Cybercrime

AI is set to fuel cybercrime in the next two years, according to a UK intelligence agency.

Security researchers, along with Hollywood, have long warned about the threat of AI programs becoming smart enough to orchestrate cyberattacks. But on Wednesday, the UK’s National Cyber Security Centre (NCSC) went further and published a report that projects AI “will almost certainly increase the volume and heighten the impact of cyberattacks.”

“All types of cyber threat actor—state and non-state, skilled and less skilled—are already using AI, to varying degrees,” according to the report, which partially cites “classified intelligence” and industry data. (The NCSC operates under the UK’s GCHQ intelligence agency.)

The report doesn’t mention the risk of artificial intelligence going rogue to take over the world. Instead, the NCSC says AI is becoming a powerful tool to help hackers perfect and streamline their attacks, making it easier to produce hard-to-detect phishing messages and malware.  

“AI will primarily offer threat actors capability uplift in social engineering,” the NCSC said. “Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases.”  

The other worry deals with hackers using today’s AI models to quickly sift through the gigabytes or even terabytes of data they loot from a target. A human could take weeks to analyze the information, but an AI model could be programmed to pluck out important details within minutes, helping hackers launch new attacks or schemes against victims.

“AI will almost certainly make cyberattacks against the UK more impactful because threat actors will be able to analyze exfiltrated data faster and more effectively, and use it to train AI models,” the NCSC adds. 

Although the NCSC didn’t elaborate, the report goes on to say that hacking groups, including ransomware actors, are “already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing, and coding.”

Despite the potential risks, the NCSC’s report did find one positive: “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design.” So it’s possible the cybersecurity industry could develop AI smart enough to counter next-generation attacks. But time will tell.

Meanwhile, other cybersecurity firms, including Kaspersky, say they’ve also spotted cybercriminals “exploring” the use of AI programs. “Topics frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices, and beyond,” Kaspersky says in its own report.

Last year, we also wrote about a developer creating a malicious version of ChatGPT called WormGPT that could be used to craft hacking schemes. The project was later abandoned and, ironically, turned into a scam targeting other hackers, but other malicious chatbots named “xxxGPT, WolfGPT, FraudGPT, and DarkBERT” have since emerged, Kaspersky says.
