AI is expected to increase the global ransomware threat over the next two years, UK cyber chiefs have warned in a new report.
The assessment, "The near-term impact of AI on the cyber threat", published by the National Cyber Security Centre (NCSC), part of GCHQ, concludes that AI is already being used in malicious cyber activity and will almost certainly increase the volume and impact of cyber-attacks – including ransomware – in the near term.
At the start of the AI boom brought on largely by the launch of ChatGPT, early reports stoked the worries of cyber-security experts as analysts postulated AI's potential negative effects on the cyber landscape.
Since then, other reports have questioned whether AI will truly bring about a rise in cyber-crime, or whether those fears have been overblown.
The prospect of AI allowing novice criminals to develop advanced phishing campaigns, and enabling cyber-criminals to draft and deploy AI-generated malware, stoked international fears.
But the NCSC report shows that many of these fears are already shaping the cyber-threat landscape: all types of threat actors – state and non-state, skilled and unskilled – are already using AI to varying degrees.
Among other conclusions, the report suggests that by lowering the barrier to entry for novice cyber-criminals, hackers-for-hire and hacktivists, AI enables relatively unskilled threat actors to carry out more effective access and information-gathering operations.
This enhanced access, combined with the improved targeting of victims afforded by AI, will contribute to the global ransomware threat in the next two years.
Ransomware continues to be the most acute cyber threat facing UK organisations and businesses, with cyber criminals adapting their business models to gain efficiencies and maximise profits.
To tackle this enhanced threat, the Government has invested £2.6 billion under its Cyber Security Strategy to improve the UK's resilience, with the NCSC and private industry already using AI to strengthen cybersecurity through improved threat detection and security-by-design.
In the UK, the AI sector already employs 50,000 people and contributes £3.7bn to the economy, but these advances and investments in AI appear to come at a cost to cybersecurity.
“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” NCSC CEO Lindy Cameron said.
“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.
“As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organisations and individuals to follow our ransomware and cybersecurity hygiene advice to strengthen their defences and boost their resilience to cyber attacks.”
Analysis from the NCA (National Crime Agency), suggests that cyber criminals have already started to develop criminal Generative AI (GenAI) and to offer ‘GenAI-as-a-service’, making improved capability available to anyone willing to pay. Yet, as the NCSC’s new report makes clear, the effectiveness of GenAI models will be constrained by both the quantity and quality of data on which they are trained.
The shift to ransomware-as-a-service, however, only increases the likelihood that criminals using this attack method will exploit the accessibility of AI to boost their profits.
According to the NCA, it is unlikely that another method of cyber-crime will replace ransomware due to the financial rewards and its established business model.
“Ransomware continues to be a national security threat. As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals,” James Babbage, director general for threats at the National Crime Agency, said.
“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods. Fraud and child sexual abuse are also particularly likely to be affected.
“The NCA will continue to protect the public and reduce the serious crime threat to the UK, including by targeting criminal use of GenAI and ensuring we adopt the technology ourselves where safe and effective.”
The report also revealed limits to AI's rise in cyber-crime: while AI lowers the barrier for novice cyber-criminals, its more advanced criminal uses will remain confined to actors with quality training data, expertise, and resources.
However, the NCSC expects that cyber-attacks against the UK will be made more impactful by AI as threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
Further, social engineering attacks will become more advanced and more difficult to detect.
Looking to 2025 and beyond, the commoditisation of AI-enabled capability in criminal and commercial markets will almost certainly make improved capability available to cyber crime and state actors.