Matias Madou, CTO & Co-Founder of Secure Code Warrior, discusses how modern threat modelling must evolve beyond slow, traditional practices.
New risks, smarter modelling
Threat modelling is not a new concept for companies running a modern, defence-centred security program.
In fact, it’s one of the core tenets of preventative cybersecurity best practices.
The most effective way to navigate a rapidly expanding threat landscape is to identify vulnerabilities within software or a network, map them out and remediate them – before an attacker can successfully orchestrate a breach.
By eliminating possible avenues of attack before a threat can even launch, you stand the best chance of mitigating risk and keeping your codebase and network secure.
It is no secret that AI coding has created new threat vectors and significantly increased the enterprise attack surface in multiple ways.
However, when leveraged by a security-proficient expert, AI technology can also be a very powerful asset for enhancing and accelerating threat modelling.
Developers have long struggled to truly claim a seat at the table in traditional threat modelling programs, but with the right skills, they have the opportunity to wield AI responsibly to seriously cut risk and rework in their codebase.
Why traditional threat modelling falls short in today’s environment
Threat modelling has traditionally existed in the realm of security professionals.
It’s always been their job to predict the many ways threat actors can enter a network or compromise software.
To accomplish this, they held meetings and brainstormed, improved their knowledge through training and conducted threat hunting to address nagging “what if” questions about potential vulnerabilities.
More recently, suites of automated scanners and tools designed to spot hundreds or even thousands of potential vulnerabilities have entered the picture, supplementing security teams’ personal knowledge.
Once vulnerabilities were found, security teams would typically rely on developers to fix programs and applications, especially if a vulnerability was deemed critical or dangerous.
This relationship tended to cultivate an unhelpful “us versus them” mentality between developers and AppSec professionals, but the results remained impressive for a long time.
It may not have been terribly efficient, but the ends often justified the means.
While this approach might have been successful in the past, the evolving threat landscape is making traditional threat modelling practices increasingly unworkable in a modern software development ecosystem.
Developers have been brought on the threat modelling journey in some enterprise environments, sometimes working side-by-side with their AppSec counterparts.
This is a productive partnership, as developers know their code best, and if they are security-aware, they are well-positioned to identify potential security weaknesses that could be exploited.
Yet even today, this setup is relatively rare, and many companies do not engage the development cohort for these activities.
The primary reasons tend to vary, but generally, it comes down to a combination of the following:
- Inadequate security awareness: Unfortunately, many developers do not have the knowledge, tools or skills required to assist in threat modelling. This is largely a result of insufficient secure coding best practices within organisations and infrequent on-the-job training
- Slow and manual processes: Even if a senior, security-skilled developer is the right person for the job, traditional threat modelling processes are tedious, manual and rarely integrate well into a development workflow. This can drive good developers away from participating in these tasks, often seeing them as low-value and at odds with the KPIs they are typically measured against
- Outdated tools: It’s a harsh reality that, by the time a threat model is completed, it is likely already outdated. Static threat models tend to have limited value in enterprise environments for this reason
The evolution of defence
We’ve reached a point where the threat landscape today is far more dangerous than it has ever been.
Bots are everywhere and they can probe millions of networks for vulnerabilities in the blink of an eye.
This is coupled with the fact that billions of Internet of Things (IoT) devices are being deployed around the world with limited or no built-in security.
Human attackers, too, are becoming increasingly sophisticated and well-trained, and with AI support, cyber-attacks are becoming more potent and automated.
Even with automation tools at their side, security professionals can’t predict every possible avenue of attack.
It’s like trying to hold back the ocean’s tide with a bucket.
In that scenario, the size of the bucket is inconsequential.
Modern threat modelling takes a more holistic, developer-focused approach and it is made far more seamless with the right AI tooling.
We know that AppSec teams can’t keep up with threats from ground zero anymore.
Instead, security experts increasingly recommend shifting threat modelling away from the beachheads of our production environments and back into the development process.
This gets at the core of what threat modelling was meant to do in the first place: prevent threats from ever launching by denying attackers any leverage to work with.
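One lightweight way to bring threat modelling into the development process is to keep the threat model as data that lives alongside the code, so it can be reviewed and updated in the same pull requests as features. The sketch below is a minimal, hypothetical illustration using the well-known STRIDE categories; the component names, threats and mitigations are invented for the example and do not describe any real system.

```python
# Minimal sketch of a code-resident threat model keyed on STRIDE categories.
# Component names, threats and mitigations are illustrative assumptions.

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# Each component maps STRIDE categories to (threat, mitigation) entries.
threat_model = {
    "login-api": {
        "Spoofing": [("credential stuffing", "rate limiting + MFA")],
        "Information disclosure": [("verbose error messages", "generic auth errors")],
    },
    "payment-service": {
        "Tampering": [("unsigned webhook payloads", "HMAC signature verification")],
    },
}

def unmitigated(model):
    """Return (component, category, threat) tuples that lack a mitigation."""
    gaps = []
    for component, categories in model.items():
        for category, entries in categories.items():
            for threat, mitigation in entries:
                if not mitigation:
                    gaps.append((component, category, threat))
    return gaps

def coverage(model):
    """Fraction of STRIDE categories with at least one entry, per component."""
    return {c: len(cats) / len(STRIDE) for c, cats in model.items()}
```

Because the model lives next to the code, a reviewer can ask in the same pull request whether a new feature adds or updates an entry, which is exactly the kind of workflow integration that static, document-based threat models lack.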
Leveraging AI
A new threat modelling collaboration effort might not happen overnight and it likely will require small steps at first.
It might start with group meetings, ideally involving security awareness personnel.
They should also include developers who have shown an aptitude for security by completing foundational education pillars that allow them to navigate common security bugs and misconfigurations.
The overarching goal should be to create a plan for everyone to work under the same set of tools for easier communication and information sharing and a quicker response once vulnerabilities are discovered.
Once that is accomplished and developers and AppSec professionals see and respect each other as equal and supportive colleagues, they can move into more advanced threat modelling tactics, assisted by approved AI tools.
HackerOne’s 2025 Hacker-Powered Security Report reveals that 67% of security researchers already leverage LLMs in their threat modelling, yet ISC2’s 2025 AI Pulse Survey indicates that only 7% of companies use them frequently for this purpose, despite their significant potential.
Leveraging LLMs for threat modelling is a brilliant starting point, but they are too prone to hallucination and too lacking in nuance and contextual understanding to be the absolute final word on risk.
As a result, developers must be grounded in security best practices and continuously upskilled, and their AI tooling must be traceable.
These tools are best wielded by security-skilled devs and can generally perform well in:
- Providing efficient, actionable intelligence as features are being built
- Reducing context-switching, since many popular IDEs feature integrated AI tools
- Delivering guidance in developer-centric language that relates to the work they are doing
- Building a ‘breaker’ mindset; something that, to date, has been elusive for software ‘builders’
- Assisting in creating guardrails for AI code generation and boilerplates
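As a concrete illustration of the last point, a guardrail for AI code generation can start as simply as an automated check that flags known-dangerous patterns before a generated snippet is accepted. The sketch below is a deliberately minimal, assumption-laden example using regular expressions; a real guardrail would rely on proper static analysis rather than pattern matching, and the deny-list here is invented for the example.

```python
import re

# Illustrative deny-list of patterns that commonly signal risky Python code.
# This list is an assumption made for the example; production guardrails
# would use a static analyser, not regexes.
RISKY_PATTERNS = {
    "use of eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "shell injection risk": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "hardcoded secret": re.compile(
        r"(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
}

def review_generated_code(snippet: str) -> list[str]:
    """Return a list of findings; an empty list means the snippet passes."""
    findings = []
    for label, pattern in RISKY_PATTERNS.items():
        if pattern.search(snippet):
            findings.append(label)
    return findings
```

A check like this can run in the IDE or a pre-commit hook, giving a security-skilled developer a fast first pass over AI-generated code while leaving the final judgement, as always, to a human.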
It’s crucial to understand that AI cannot, and should not, replace human intuition.
But it’s equally important to recognise that its integration into the threat modelling process serves as a powerful catalyst for modern development.
When well-trained developers leverage AI to handle the heavy lifting of pattern recognition and rapid analysis, they bridge the gap between abstract risk and actionable defence.
Embedding these tools early on in the software development lifecycle (SDLC) does more than just patch holes; it cultivates a culture of proactive resilience.
By augmenting human empathy with machine precision, teams can secure their codebases at scale, ensuring that efficiency and security are no longer at odds, but are instead the foundational pillars of a streamlined workflow.
