Worry about the basics of ransomware, not the AI threat


The recent wave of high-profile ransomware attacks targeting brands like M&S has reignited fears that AI is fuelling a surge in cybercrime. While AI is undeniably reshaping the threat landscape – enabling more convincing phishing emails and automated attack workflows – its role in ransomware remains largely overstated.  

The reality is that AI is evolving existing threats, not reinventing them. Most ransomware operators rely on simple, well-established techniques that offer speed, scale, and profitability. As long as phishing, insider threats, and ransomware continue to deliver results, there is little incentive for bad actors to adopt complex AI tools. 

Understanding how AI is actually being used by ransomware groups is key to building better defences. Breach and attack simulation (BAS) tools are proving vital for detecting and closing gaps before attackers can take advantage of them.

What’s driving the ransomware surge?

The 2025 Cyber Security Breaches Survey paints a concerning picture. According to the study, ransomware attacks doubled between 2024 and 2025 – a surge that has less to do with AI innovation and more to do with deep-rooted economic, operational and structural changes within the cybercrime ecosystem.

At the heart of this growth is the rising popularity of the ransomware-as-a-service (RaaS) business model. Groups like DragonForce and RansomHub sell ready-made ransomware toolkits to affiliates in exchange for a cut of the profits, enabling even low-skilled attackers to run disruptive campaigns.

The most vulnerable point remains the people behind the systems. Take the recent M&S breach: the incident was caused not by advanced techniques but by social engineering targeting a third-party supplier. It is a stark reminder that cybercriminals still exploit the weakest link in the chain. So while AI is a growing area of concern, ransomware groups today are sticking with what works.

Defending against ransomware

Building an effective ransomware defence means recognising where traditional approaches fall short. Penetration testing and red teaming are critical for detecting complex threats such as advanced persistent threats (APTs) or insider compromise. However, ransomware operators don’t typically rely on stealth or novel tactics; they capitalise on scale, predictability and speed.

Breaches often stem from common, preventable issues such as weak credential hygiene or misconfigured systems – areas that often sit outside scheduled assessments. When assessments happen only once or twice a year, new gaps can go unnoticed for months, giving attackers ample opportunity. To keep up, organisations need faster, more continuous ways of validating their defences.
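
To make that concrete, the sketch below shows one way "continuous validation" can start: a small script, re-run on a schedule, that checks whether services which should be blocked really are unreachable. The hostnames and ports are hypothetical placeholders, and a real BAS platform does far more, but the principle of re-testing the same controls on every run is the same.

```python
import socket

# Hypothetical inventory: services that policy says must be unreachable
# from this network segment. Replace with your own hosts and ports.
SHOULD_BE_CLOSED = [
    ("fileserver.internal.example", 445),   # SMB
    ("jump.internal.example", 3389),        # RDP
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Run this from cron/CI on every change window, not twice a year.
    for host, port in SHOULD_BE_CLOSED:
        if is_reachable(host, port):
            print(f"GAP: {host}:{port} is reachable but should be blocked")
        else:
            print(f"OK:  {host}:{port} is blocked as expected")
```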

BAS in ransomware defence 

BAS addresses this blind spot by enabling frequent simulations that mimic real-world tactics in controlled, repeatable ways. It builds resilience, not just assurance. BAS isn’t designed to replace human-led exercises like red teaming; it complements them by running regularly, offering timely insights between manual assessments and helping teams maintain a high state of readiness.
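
As a rough illustration of that "controlled, repeatable" idea – a toy harness in the spirit of BAS, not any vendor's product – each simulation pairs a benign action that mimics an attacker technique with a check of whether monitoring flagged it. The stub actions and detection checks here are illustrative assumptions; a real harness would drive them from your EDR or SIEM.

```python
from dataclasses import dataclass
from typing import Callable
import datetime

@dataclass
class Simulation:
    technique: str                 # e.g. a MITRE ATT&CK technique ID
    action: Callable[[], None]     # benign action mimicking the technique
    detected: Callable[[], bool]   # did monitoring flag the action?

def run_suite(suite: list[Simulation]) -> None:
    """Run every simulation and report whether defences responded."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    for sim in suite:
        sim.action()
        status = "DETECTED" if sim.detected() else "MISSED"
        print(f"{stamp} {sim.technique}: {status}")

# Both callables below are stubs: a real action might rename canary files
# in a sandbox, and a real check would query your EDR or SIEM alert queue.
suite = [
    Simulation(
        technique="T1486 (data encrypted for impact, benign stand-in)",
        action=lambda: None,
        detected=lambda: False,
    ),
]

if __name__ == "__main__":
    run_suite(suite)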

To get the most out of BAS, tuning and prioritisation are essential. Well-configured platforms help teams focus on what matters most, reducing noise and enabling faster remediation of impactful findings. 
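
A toy example of that prioritisation step, with entirely illustrative field names and scores: rank findings so that severe, internet-exposed gaps rise to the top of the remediation queue.

```python
findings = [
    {"id": "weak-smb-signing", "severity": 6, "internet_exposed": False},
    {"id": "open-rdp",         "severity": 8, "internet_exposed": True},
    {"id": "stale-admin-cred", "severity": 9, "internet_exposed": False},
]

def priority(finding: dict) -> int:
    # Illustrative heuristic: double-weight anything internet-exposed.
    return finding["severity"] * (2 if finding["internet_exposed"] else 1)

# Highest priority first: open-rdp (16), stale-admin-cred (9), then the rest.
for finding in sorted(findings, key=priority, reverse=True):
    print(f"{priority(finding):>2}  {finding['id']}")
```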

Most ransomware actors follow well-worn playbooks, making them frequent visitors to company networks but not necessarily sophisticated ones. That’s why effective ransomware prevention is not about deploying cutting-edge technologies at every turn – it’s about making sure the basics are consistently in place. That means robust backup and recovery processes, alongside staff training, good visibility, and monitoring for common ransomware methods. True resilience comes from anticipating attacks, not just reacting to them.
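
One of those basics can be sketched in a few lines. The snippet below – a hedged illustration rather than tuned guidance – polls a file share for a burst of recently modified files carrying extensions commonly appended by ransomware. The path, extension list and thresholds are assumptions to adapt to your environment.

```python
import os
import time

WATCH_DIR = "/srv/shared"        # hypothetical file share mount
SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".crypt"}  # illustrative
WINDOW_SECONDS = 60
THRESHOLD = 25                   # suspect files per window before alerting

def count_recent_suspects(root: str, window: float) -> int:
    """Count files with a suspect extension modified inside the window."""
    cutoff = time.time() - window
    hits = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in SUSPECT_EXTENSIONS:
                try:
                    if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                        hits += 1
                except OSError:
                    continue  # file vanished mid-scan; skip it
    return hits

if __name__ == "__main__":
    while True:
        hits = count_recent_suspects(WATCH_DIR, WINDOW_SECONDS)
        if hits >= THRESHOLD:
            print(f"ALERT: {hits} suspect files changed in {WINDOW_SECONDS}s")
        time.sleep(WINDOW_SECONDS)
```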

The AI gold rush 

While the focus has largely been on how cybercriminals might weaponise AI, many organisations are missing the more immediate risk: the vulnerabilities introduced by their own ungoverned use of AI tools. 

Shadow AI, the unauthorised use of tools like ChatGPT, bypasses security protocols and risks leaking sensitive company data. Nearly 40% of IT workers admit to secretly using unauthorised generative AI tools. 

Unregulated AI adoption, poor data governance and misconfigured AI services all expand the attack surface and increase the likelihood of exposure. The resulting lack of visibility into how internal AI tools process data also complicates incident response.

To manage this risk effectively, organisations must apply the same security scrutiny to internal AI as they would to any new technology. That includes governance frameworks, clear visibility into data flows, and regular testing of how AI tools are used across the business. Staff also need clear cybersecurity training on the dangers of shadow AI use. 
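
As a modest first step towards that visibility, security teams can mine what they already collect. The sketch below scans an egress proxy log for requests to well-known generative AI services and tallies them per source; the log path, log format and domain list are assumptions that would need adapting to whatever your proxy actually emits.

```python
from collections import Counter

PROXY_LOG = "/var/log/proxy/access.log"   # hypothetical path and format
AI_DOMAINS = ("chat.openai.com", "chatgpt.com",
              "gemini.google.com", "claude.ai")  # extend as needed

def shadow_ai_hits(log_path: str) -> Counter:
    """Tally requests to known generative-AI domains per source field,
    assuming the source is the first whitespace-separated token."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(domain in line for domain in AI_DOMAINS):
                fields = line.split()
                hits[fields[0] if fields else "unknown"] += 1
    return hits

if __name__ == "__main__":
    for source, count in shadow_ai_hits(PROXY_LOG).most_common(10):
        print(f"{source}: {count} generative-AI requests")
```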

Focus on the fundamentals

The narrative around AI-powered ransomware can distract from the genuine risks: internal missteps and old, reliable attack methods. For security leaders, the lesson is simple – the biggest threat is whatever is already working for attackers.

Security leaders must resist the temptation to chase hypothetical threats. That means strengthening the basics, simulating attacks continuously, and keeping defences grounded in real-world tactics. In the fight against ransomware, AI isn’t the biggest danger. Complacency and misplaced priorities, coupled with shadow AI use, are. 

Ben Lister is the head of threat research at NetSPI 



