How the AI Coding Boom Is Rewriting Application Security


Agentic AI, Application Security, Artificial Intelligence & Machine Learning
Costanoa Ventures’ John Cowgill on Moving From Static Analysis to Runtime Defense


John Cowgill, partner, Costanoa Ventures

Artificial intelligence-generated code is arriving faster than security teams can review it, and the risks are moving from the line level to the system level, said John Cowgill, partner at Costanoa Ventures.


AI coding models are producing more secure code at the line level, but that improvement is masking a deeper problem: Code that is individually correct can still be brittle and insecure at the level of the system.

“We’re going to need to have dynamic analysis running at all times in application security,” Cowgill said. He described this as the transition from AI Security 1.0 – guarding AI at the edges through prompt filtering and LLM input controls – to AI Security 2.0, in which security must monitor what agents are actually doing at runtime, across distributed systems, in real time.
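The contrast Cowgill draws can be illustrated with a minimal sketch. The function names, blocklists and event fields below are hypothetical, not from any vendor or from the interview: a 1.0-style control inspects the prompt before it reaches the model, while a 2.0-style control inspects the actions an agent actually takes at runtime.

```python
# Illustrative sketch only; all names and rules are invented for clarity.

# AI Security 1.0: static input control at the edge.
BLOCKED_TERMS = {"ignore previous instructions"}

def filter_prompt(prompt: str) -> bool:
    """Return True if the prompt passes a simple static input filter."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

# AI Security 2.0: runtime monitoring of agent behavior.
RISKY_ACTIONS = {"delete_database", "exfiltrate_file"}

def monitor_agent_event(event: dict) -> str:
    """Decide on an agent's runtime action: block, alert or allow."""
    if event.get("action") in RISKY_ACTIONS:
        return "block"                      # dangerous action, stop it
    if event.get("target", "").startswith("prod/"):
        return "alert"                      # touching production, flag it
    return "allow"
```

The point of the contrast: the static filter sees only the input text, so an agent that was prompted innocuously can still take a harmful action later; the runtime monitor sees the action itself, regardless of how the agent was prompted.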

In this video interview with Information Security Media Group at RSAC Conference 2026, Cowgill also discussed:

  • Why 2026 is shaping up as the year of the “vulnpocalypse”;
  • How AI agents can help triage, prioritize and eventually remediate vulnerabilities;
  • What it will take for a new class of AI detection and response vendors to win the emerging runtime security market.

Cowgill leads Costanoa Ventures’ cybersecurity practice and invests in applied AI and national security technology. Before joining Costanoa, he was a consultant at McKinsey & Company, advising consumer and technology companies on strategy and operations projects across the retail, healthcare and technology sectors.




