Capsule Security raises $7 million to guard AI agents

Capsule Security has emerged from stealth with a $7 million seed round led by Lama Partners and Forgepoint Capital International.

The Tel Aviv startup has developed a security product designed to monitor and control AI agents as they operate inside business systems. It targets risks that arise when agents access sensitive data, call tools, and carry out automated workflows.

Capsule is entering a market drawing growing attention as large companies deploy AI agents more widely across coding, customer service, and internal operations. Citing Microsoft data, the company noted that more than 80% of Fortune 500 companies now use active AI agents built with low-code and no-code tools.

Its core argument is that AI agents should be treated as a new type of privileged user because they can take action quickly and with limited human oversight. That creates a security challenge for companies relying on conventional tools designed for more predictable software and human access controls.

“AI agents are quickly becoming a new class of privileged user in the enterprise, except they can act at machine speed and they do not behave like deterministic software,” said Naor Paz, Chief Executive Officer and Co-Founder, Capsule Security.

Paz continued, “That creates a dangerous gap between what security teams can govern today and what agents can do in production. Capsule closes that gap by enforcing trust at runtime, inside the execution path, so teams can move fast with agents while staying in control of what those agents can access and execute.”

Research Findings

The launch was accompanied by research disclosures on vulnerabilities affecting major agent platforms. Capsule identified ShareLeak, which it described as a critical indirect prompt injection flaw in Microsoft Copilot Studio, and PipeLeak, a prompt injection issue in Salesforce Agentforce.

According to Capsule, ShareLeak has been patched and assigned CVE-2026-21520. PipeLeak showed how untrusted lead-form inputs could influence agent behaviour and trigger unsafe downstream actions.

The company presented those findings as examples of a broader class of runtime risk, in which external content can alter an agent’s goals or steer its use of connected tools. Routine workflows, it argued, can become security exposure points when agents act on manipulated prompts.

Capsule also developed ClawGuard, an open-source enforcer for OpenClaw that adds a checkpoint before agents execute tool calls. The tool is intended to address risks in open agent frameworks, where each invocation can become a decision point.
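The checkpoint pattern described above can be illustrated with a minimal sketch. This is not ClawGuard’s actual code; the function names and policy rules below are hypothetical, and show only the general idea of an enforcer sitting in the execution path and gating each tool call before it runs.

```python
# Hypothetical sketch of a runtime checkpoint for agent tool calls.
# Names and policy rules are illustrative, not ClawGuard's implementation.

ALLOWED_TOOLS = {"search_docs", "read_file"}   # tools the agent may invoke
BLOCKED_ARGS = ("rm -rf", "DROP TABLE")        # crude deny-list for the demo

def checkpoint(tool_name: str, args: str) -> bool:
    """Return True if the call may proceed, False if it must be blocked."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    if any(marker in args for marker in BLOCKED_ARGS):
        return False
    return True

def execute_tool_call(tool_name: str, args: str) -> str:
    # The enforcer wraps execution: every invocation passes the
    # checkpoint before the underlying tool actually runs.
    if not checkpoint(tool_name, args):
        return f"BLOCKED: {tool_name}"
    return f"RAN: {tool_name}({args})"

print(execute_tool_call("search_docs", "capsule security"))  # allowed
print(execute_tool_call("shell", "rm -rf /"))                # blocked
```

In a real enforcer the policy check would be far richer, but the structural point is the same: each invocation becomes a decision point rather than an automatic pass-through.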

Product Focus

Capsule’s platform is designed to work without proxies, gateways, software development kits, or browser extensions. It supports platforms including Cursor, Claude Code, Microsoft Copilot Studio, ServiceNow, and Salesforce Agentforce, and can route telemetry into existing security workflows.

The company has also been listed as a representative vendor in Gartner’s market guide for “guardian agents”, a term for AI systems designed to oversee and protect other AI agents. Capsule said its models evaluate actions in context and can block unsafe or unauthorised activity before completion, while generating telemetry for governance, investigation, and compliance teams.

Advisers include Chris Krebs, the first Director of CISA; Omer Grossman, former Global CIO at CyberArk; Jim Routh, former CISO at several global companies; and Dr. Yonesy Núñez, a former CISO and senior security executive in financial services.

“AI agents are a new class of privileged user, operating at machine speed with minimal oversight,” said Chris Krebs, Advisor, Capsule Security. “Legacy tools weren’t built to monitor what happens between prompt and action. That’s the runtime gap. Capsule closes it.”

Backers

The seed round adds investors to a growing group backing security startups focused on AI behaviour rather than only model posture or access policies. As businesses give AI systems broader permissions across internal software, investors have been looking at products that can track decisions and intervene before actions are completed.

Ron Zalkind, Founding General Partner at Lama Partners and a board member at Capsule Security, said the investment was driven by the shift in how software is being built and operated through AI-driven automation.

“Agents have the ‘superpower’ to write and deploy code at unprecedented rates, fundamentally changing how software is built and operated,” Zalkind said.

Zalkind said, “With that level of power comes a new responsibility to secure it. Security leaders understand that legacy tools were never designed to interpret intent, context, and real-time behavior, which are essential for securing dynamic agentic environments. From day one, Naor and Lidan have combined deep technical rigor with clarity of vision to build a platform that allows organizations to confidently adopt AI agents while stopping dangerous actions before damage is done.”

Damien Henault, Managing Director/Partner at Forgepoint Capital International and a Capsule board member, described the startup’s technical approach.

“Capsule fine-tuned Small Language Models (SLMs) to create a multi-agent system of ‘Guardian Agents’ that can protect AI with AI, covering both posture and low-latency runtime protection. The team is the strongest of the agent-space players, having expertise in both traditional security and deep familiarity with emerging protocols like MCP and Skills,” Henault said.
