TrojAI has extended its platform for securing artificial intelligence (AI) applications, tools and platforms to now include a red teaming capability that is performed by AI agents that have been specifically trained to perform that task.
Additionally, the company has extended its firewall for AI to now include an instance of AI coding assistants, while also providing a private preview of Agent Runtime Intelligence, a platform that captures and analyzes AI agent execution traces to discover which tools are being used, what memory has been accessed, data retrieval patterns and any potential system prompt exposure.
Company CEO Lee Weiner said collectively these capabilities provide cybersecurity teams with a set of best-of-breed capabilities for securing AI environments in a way that both simulates attacks and provides visibility into how AI agents are behaving.
AI agents create something of a cybersecurity paradox in that it is relatively trivial to create a prompt injection attack that can be used to create a malicious set of instructions. At the same time, the AI agents themselves in the absence of any specific guardrails are designed to aggressively access as much data as possible. In effect, if an AI agent is compromised the potential downstream impact of that breach could easily prove catastrophic.
The Agent-Led AI Red Teaming capability that has been added to TrojAI Defend makes use of multiple AI agents to orchestrate multi-turn and dynamic attack chains using the library of datasets and manipulations the company has previously collected to enable cybersecurity teams to better appreciate the scope of that threat, said Weiner.
The overall goal is to enable cybersecurity teams to run a complex series of tests without having to manually configure them, he added. The test results are then automatically mapped to align with OWASP, MITRE or NIST frameworks.
Previously, TrojAI enabled cybersecurity teams to build red team simulations on their own using its TrojAI Detect platform. Now that task can be assigned to a set of AI agents to create and manage. The company has also developed a TrojAI Defend firewall designed specifically for AI tools, applications, models and Model Context Protocol (MCP) servers at runtime.
It’s not clear to what degree cybersecurity teams are investing in separate tools and platforms to secure AI environments versus possibly hoping to extend the reach of their existing investments. Given the pace at which AI innovation is occurring there is a strong case to be made for using a best-of-breed platform that has been specifically designed for AI environments, noted Weiner.
Regardless of approach, the pace of AI adoption is exceeding the ability of many cybersecurity teams to keep up. In all probability, the number of cybersecurity incidents involving AI is only going to exponentially increase in the months ahead. In fact, it may require a significant number of high-profile incidents before the requisite funding needed to secure AI environments is actually made available.
In the meantime, cybersecurity teams might want to prepare for the worst on the assumption that when it comes to understanding how to employ AI, their adversaries may be two steps or more ahead of them.
Click Here For The Original Source.
