AI vs. AI: The Future of Cybersecurity Is a Machine-Only Battlefield


SAN FRANCISCO—At the 2026 RSAC Conference, Mitch Ashley, vice president and practice lead at Futurum Group, and Alan Shimel, founder and CEO of Techstrong Group, argued that it is no longer possible for humans to secure sensitive systems on their own. They suggest that, despite AI’s rapid growth and a crowded field of products with inadequate privacy and security safeguards, the average security professional is moving toward an oversight role rather than hunting for vulnerabilities directly.


AI Innovation Is Outrunning Our Ability to Secure It

The AI market is messy. Ambitious startups launch daily, and new AI tech crops up before anyone can make sense of what’s already out there. By the time a security solution is proposed for a popular AI model, that model has been replaced or made obsolete by a shiny new development. A theme at this year’s RSAC has been AI guardrails. As AI agents gain access to sensitive data and systems, experts at multiple panels have said that protections are needed to limit agents’ access in terms of time, authority, and scope.
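What such guardrails might look like in practice is easier to see with a small sketch. Everything below is hypothetical: the `AgentGrant` class, the scope strings, and the authority ranks are illustrations of the time/authority/scope limits the panelists described, not any real product’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical guardrail: an agent's access is bounded on the three axes
# named at RSAC -- time (expiry), authority (read vs. write vs. admin),
# and scope (which resources the agent may touch at all).
@dataclass
class AgentGrant:
    agent_id: str
    scopes: frozenset
    max_authority: str
    expires_at: datetime

    RANKS = {"read": 0, "write": 1, "admin": 2}

    def allows(self, action: str, authority: str) -> bool:
        if datetime.now(timezone.utc) >= self.expires_at:
            return False  # time limit exceeded
        if action not in self.scopes:
            return False  # outside granted scope
        return self.RANKS[authority] <= self.RANKS[self.max_authority]

# A short-lived, read-only grant: this agent may query CRM data for
# 15 minutes and do nothing else.
grant = AgentGrant(
    agent_id="support-bot-7",
    scopes=frozenset({"crm:read"}),
    max_authority="read",
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)

assert grant.allows("crm:read", "read")
assert not grant.allows("crm:write", "write")  # denied on scope and authority
```

The point is less the code than the shape of the control: every grant expires, every action is checked against an explicit scope, and escalation beyond the granted authority fails closed.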

From simple oversights to novel prompt injection attacks, there’s no telling what the next zero-day vulnerability will be, especially in the new agentic world that Ashley and Shimel say has already arrived. “The amount of code, the amount of validation requests coming through, will overwhelm the human in the loop. And I’m telling you that we’re moving to something we call human at the helm,” Ashley noted.

This notion places humans in an advisory role, providing structured guardrails for agentic AI systems. Put simply, human security professionals will be responsible for sifting through an AI’s findings. Those findings, however, will be overwhelming. Shimel stressed this, stating that, “As humans, we move at hundreds of repetitions per minute. AI moves in the degree of thousands.” Essentially, the human security professional will need to leverage other AI agents to secure active AI agents. The idea is convoluted enough to make your head spin, but Ashley and Shimel argued that it is an inevitable future, one that current systems are ill-prepared to handle.
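One way to picture “human at the helm” is as a triage layer: a reviewing agent scores the flood of findings, and only high-severity, high-confidence items reach the person at the helm, while the rest are recycled to other agents for re-verification. The sketch below is a hypothetical illustration of that workflow, with made-up field names and thresholds.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source_agent: str
    description: str
    severity: int      # 1 (informational) through 5 (critical)
    confidence: float  # 0.0-1.0, as scored by a reviewing agent

def triage(findings, severity_floor=4, confidence_floor=0.8):
    """Split agent findings into a human queue and a machine queue.

    The human at the helm sees only what clears both floors; everything
    else goes back to other AI agents for another round of checking.
    """
    human_queue, machine_queue = [], []
    for f in findings:
        if f.severity >= severity_floor and f.confidence >= confidence_floor:
            human_queue.append(f)
        else:
            machine_queue.append(f)
    return human_queue, machine_queue
```

The thresholds are the human’s steering wheel: tightening them keeps the review queue manageable at the cost of delegating more judgment to the machines.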


The AI Security Loop: Machines Auditing Machines

How can the veracity of an AI auditor be verified without building yet another AI to audit the auditor, and so on? Any such chain leaves room for issues to persist, and the reliability of these AI audit models remains to be determined.



“We don’t have enough humans to do the whole job,” Shimel said, taking a pragmatic approach to the problem of recursion. He and Ashley propose a multi-model approach: agentic security will have to be layered, since one tool verifying another will inevitably miss things. Much as a human security team relies on overlapping checks and balances, agentic systems will need multiple independent layers to be self-regulating.
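Here is a minimal sketch of that layering, using stand-in rule functions where real deployments would plug in independent AI auditors (all names below are hypothetical):

```python
# Layered verification: several independent checkers each judge the same
# agent output, and a finding is trusted only when a quorum agrees, so no
# single model is the lone gatekeeper.
def layered_verify(output, verifiers, quorum):
    votes = sum(1 for check in verifiers if check(output))
    return votes >= quorum

# Stand-ins for real verifier models; each would be a separate AI auditor
# trained or prompted differently so their blind spots don't overlap.
def no_destructive_sql(out): return "DROP TABLE" not in out.upper()
def within_size_limit(out): return len(out) < 10_000
def no_shell_escalation(out): return not out.strip().startswith("sudo")

agent_output = "SELECT name FROM customers LIMIT 10;"
safe = layered_verify(
    agent_output,
    [no_destructive_sql, within_size_limit, no_shell_escalation],
    quorum=2,
)
print(safe)  # True: all three checks pass, comfortably above quorum
```

Requiring a quorum rather than unanimity is the checks-and-balances part: any single verifier can be wrong, but the system only fails when several fail the same way.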




I spoke with Shimel after the presentation and asked whether he believed these self-regulating systems were imminent or more of a general idea. While he said that widespread adoption is still a way off, Shimel believes this technology is out there and being trialed. He acknowledged that recursion is a complex issue, one that will need to be addressed rather than avoided, given its inevitability. “Some oil is going to be spilled on the road there,” he continued, noting that adoption will come with its fair share of incidents.


Why AI Systems Are So Hard to Secure

Traditional systems can be inspected. Take a VPN audit, for example: a third-party firm can review internal processes and evaluate privacy policies to ensure compliance. Traditional security audits don’t work well for AI-based systems because the functions at the very core of the service sit inside what’s called a “black box.” You can see the data that goes in (a prompt) and the output (a generation), but not the internal logic and processes that deliver the result.
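That opacity shapes what auditing can realistically do: if the internals can’t be inspected, the observable surfaces, the prompt going in and the generation coming out, can at least be recorded for later review. Below is a hypothetical sketch of such an input/output audit log; `audited_call` and the log format are illustrative, not any standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def audited_call(model_fn, prompt, log_path="ai_audit.jsonl"):
    """Wrap a black-box model call with a tamper-evident I/O log.

    We can't trace the model's internal reasoning, so we record what we
    can observe: each prompt/response pair, timestamped and hashed so an
    external reviewer can detect after-the-fact edits to the log.
    """
    response = model_fn(prompt)  # the opaque step no audit can see into
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "digest": hashlib.sha256((prompt + response).encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return response

# model_fn stands in for any black-box model client.
reply = audited_call(lambda p: f"echo: {p}", "Summarize last week's alerts")
```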




It’s like asking a human why they made a certain decision. They can point toward the stimulus and the resulting action, but we can’t look into our own minds when a decision is being made and see the exact path a thought took. Monitoring AI agents poses a similar problem, since we cannot trace every step back through the decision-making process to find a “why.” 


Can AI Still Be Secured—or Is It Too Late?

Ashley and Shimel proposed that guardrails at a fundamental level are necessary to safeguard these systems, with Shimel stating that, “Security must be built-in, not tacked on…isn’t that the dream of any security professional?” 

The path forward to securing AI may be uncertain, but you can take substantial steps to secure your online presence now. A VPN can prevent third parties, such as your ISP, from collecting your data. Antivirus software can stop unauthorized apps and malware from compromising your system. You can also pair those two tools with a password manager to further secure your online accounts against data breaches.
