
NCSC says AI will increase ransomware, cyberthreats


While ransomware activity is already surging, a new National Cyber Security Centre report assessed that the threat will only increase globally over the next year as AI improves phishing and other threat actor techniques.

On Wednesday, the U.K.’s NCSC published the report, titled “The Near-Term Impact of AI on the Cyber Threat,” which detailed potentially alarming trends for ransomware and overall cyberthreats through 2025 and beyond. The report is based on an NCSC assessment combining classified intelligence, industry knowledge, academic material and open source data from the U.K. government as well as international partners.

The report, which informs U.K. government policies, determined that AI tools could help attackers develop malware and exploits more efficiently and carry out more effective phishing campaigns. Improvements generated through AI could increase risks across the threat landscape, though the report highlighted ransomware, which is already a persistent problem.

The number of ransomware attacks skyrocketed last year. For example, a threat report by NCC Group tracked an 84% increase between 2022 and 2023.

“Phishing, typically aimed either at delivering malware or stealing password information, plays an important role in providing the initial network accesses that cyber criminals need to carry out ransomware attacks or other cyber crime. It is therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat in the near term,” the NCSC wrote in the report.

The NCSC predicted that by 2025, generative AI (GenAI) and large language models would make it more difficult for cybersecurity professionals of all levels to identify phishing emails and social engineering attempts that, for example, call for password resets. While some vendors, such as Splunk, have found that those tools don't improve the efficacy of spear phishing emails, the NCSC assessed that GenAI would make it easier for threat actors to craft emails with fewer grammar and spelling mistakes.

The report predicted that spear phishing and other social engineering threats would not only persist but increase as AI models evolve, providing a “significant uplift” to the capabilities of novice and less skilled threat actors.

“AI will almost certainly make cyber attacks against the UK more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models,” the report said.

Predictions and key judgments in the NCSC’s assessment are based on its Professional Head of Intelligence Assessment “probability yardstick,” which includes a likelihood range from “remote” to “almost certain.”

Ransomware risks increase

While AI might contribute to more advanced phishing attacks and therefore an increase in ransomware, the NCSC said it could also widen the pool of capable threat actors that conduct ransomware attacks. “Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding,” the report said, warning that “enhanced access will likely contribute to the global ransomware threat over the next two years.”

One factor that contributed to an increase in ransomware activity over the years was the as-a-service business model. Ransomware as a service expanded the threat because affiliates do not need coding experience; they can instead purchase ransomware programs from different gangs, which in turn take a percentage of whatever ransom payments the affiliates receive.

The report assessed that as-a-service business models will continue to benefit amateur and less skilled threat actors in areas beyond ransomware. That could include GenAI as a service, which the NCSC said could already be in development.

“Commoditisation of cyber crime capability, for example ‘as-a-service’ business models, makes it almost certain that capable groups will monetise AI-enabled cyber tools, making improved capability available to anyone willing to pay,” the report said.

Another prominent risk addressed in the report was how quickly threat actors are exploiting software vulnerabilities. The time between patch releases and exploitation has already decreased, the report warned, and AI will only exacerbate the problem. “AI is highly likely to accelerate this challenge as reconnaissance to identify vulnerable devices becomes quicker and more precise,” the NCSC said.

However, the NCSC assessed that GenAI tools would provide only a “minimal uplift” in malware and exploit development for capable state actors and organized cybercrime groups. On a more positive note, the agency said AI can also improve threat detection capabilities and help defenders identify phishing campaigns.

Nitin Natarajan, deputy director at CISA, told TechTarget Editorial that while AI could make ransomware actors' work easier, he also sees benefits that organizations, including CISA, can add to their defensive repertoire. Still, he agreed with the NCSC that the technology could improve phishing emails and malicious coding capabilities, creating new risks for organizations, especially those that already struggle to identify malicious messages.

“It used to be very easy to tell a phishing email. The graphics weren’t there, or you could tell it was written by a non-native English speaker,” he said. “I think generally in the ransomware space there are going to be benefits to bad actors, and we just need to continue to stay one step ahead.”

Senior news writer Alexander Culafi contributed to this article.

Arielle Waldman is a Boston-based reporter covering enterprise security news.


