Red Teaming AI: Tackling New Cybersecurity Challenges | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #ransomware


Artificial Intelligence & Machine Learning
,
Events
,
Next-Generation Technologies & Secure Development

DistributedApps.ai’s Ken Huang on Agentic AI Risks and Threat Modeling



Ken Huang, Chief AI Officer, DistributedApps.ai


As AI agents gain autonomy and access dynamic tools, organizations must adopt new threat modeling approaches like mixture threat modeling, a new method that accounts for AI’s unpredictability, said Ken Huang, chief AI officer at DistributedApps.ai.

See Also: How Generative AI Enables Solo Cybercriminals

He stressed the need for continuous red teaming of AI systems and raised concerns about “viper coding,” the fast-paced, AI-driven development method that can lead to insecure code. He emphasized the need for modernized security practices tailored to autonomous AI.

“Traditional trust boundaries are no longer applicable as AI agents operate across multiple platforms. … We need a more flexible approach to security,” Huang said.

In this video interview with Information Security Media Group at RSAC Conference 2025, Huang also discussed:

  • How agentic AI expands the attack surface;
  • Why identity-based controls are no longer enough;
  • The risks of viper coding.

Huang is an accomplished AI expert who leads initiatives focused on generative AI security at DistributedApps.ai. He is the author of eight books on AI and Web3 and co-chairs the AI Organizational Responsibility Working Group and AI Control Framework at the Cloud Security Alliance.



——————————————————-


Click Here For The Original Source.