F5 and Forcepoint have formed a partnership to secure enterprise AI across its lifecycle, linking data discovery and classification with runtime protection for AI systems.
The partnership combines Forcepoint’s Data Security Posture Management tools with F5’s AI red teaming and guardrails in the F5 Application Delivery and Security Platform. It is aimed at businesses deploying AI in copilots, assistants, and automated workflows while trying to control how sensitive data is used.
Security teams are under pressure as companies move AI projects from trials into production. That shift raises new questions about where sensitive data sits, how it moves through models and applications, and what controls remain in place once services go live.
Many organisations still manage data governance, application security, and runtime protection separately. That separation can leave gaps between internal policy and the behaviour of AI tools in live environments.
Joint Approach
Under the partnership, Forcepoint’s software discovers, classifies, and prioritises sensitive and business-critical data across cloud, software-as-a-service, endpoint, and broader enterprise environments. F5 then applies runtime controls across APIs, gateways, applications, and AI agents.
The combined set-up is designed to help security teams identify data vulnerabilities in real time, rank AI use cases by risk, enforce controls over AI interactions, and monitor systems for misuse or abnormal behaviour. It also includes telemetry and policy validation to check whether AI systems are operating in line with internal governance requirements.
For companies embedding AI into daily operations, that extends security oversight beyond the data layer into live applications and model activity. It also reflects a wider market shift as vendors try to address AI risk with products that span data management, application security, and operational monitoring.
John Maddison, Chief Marketing Officer at F5, said enterprises were moving quickly.
“Enterprises are moving AI initiatives from experimentation to production faster than most security programs can adapt,” he said.
“By combining Forcepoint’s deep data intelligence and contextual awareness with F5’s advanced application security and runtime protections, organizations eliminate operational security gaps with unmatched confidence and control in their AI operations. As AI’s threat surface continues to expand, the combined power of DSPM technologies with F5’s AI Red Team and AI Guardrails equips enterprises with proactive tools to securely scale and govern AI at every stage of its lifecycle.”
Runtime Focus
A key element of the partnership is F5’s emphasis on runtime protection, an area that has become more prominent as AI systems move into production and interact with users, applications, and internal data stores. Its tools monitor AI models and test them for adversarial behaviour, while also seeking to detect prompt abuse and prevent data exfiltration.
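Neither company has published implementation details, but the general idea of a runtime guardrail sitting between users and a model can be sketched in a few lines. The patterns and function names below are hypothetical illustrations, not F5 or Forcepoint APIs; in practice the sensitive-data definitions would come from a classification engine rather than hand-written regexes.

```python
import re

# Hypothetical patterns standing in for a data-classification feed;
# a real DSPM integration would supply these from discovered, labelled data.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def guardrail_check(text: str) -> dict:
    """Scan a prompt or model response for sensitive content before it
    crosses a trust boundary (user -> model, or model -> user)."""
    violations = [name for name, pattern in SENSITIVE_PATTERNS.items()
                  if pattern.search(text)]
    return {"allowed": not violations, "violations": violations}

# A gateway would apply this check in both directions of traffic:
print(guardrail_check("My SSN is 123-45-6789"))
# Blocked: {'allowed': False, 'violations': ['ssn']}
print(guardrail_check("What is the capital of France?"))
# Allowed: {'allowed': True, 'violations': []}
```

Production systems layer far more on top of this (semantic classifiers, per-user policy, audit telemetry), but the basic shape of inspect-then-allow-or-block at the gateway is what distinguishes runtime protection from design-time data governance.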
Forcepoint’s role centres on understanding the data fed into those systems and the policies attached to it. That includes identifying which data is suitable for AI use and which projects require tighter scrutiny.
The partnership comes as businesses look for ways to adopt AI without replacing existing security architecture. Rather than offering a single newly built platform, the companies are positioning the alliance as a way to connect established controls across separate layers of the stack.
Naveen Palavalli, Chief Product and Marketing Officer at Forcepoint, said the security model around AI needed to change.
“AI has fundamentally redefined data security, exposing static policies for what they are: inadequate,” he said. “F5 and Forcepoint are establishing a new standard of continuous, adaptive protection that follows data from the moment of creation through every stage of its lifecycle, including the runtime layer where AI systems operate, evolve, and expand risk vectors. The threats AI brings require a new category for proactive data and AI risk mitigation, and our partnership is delivering on this today.”
The announcement highlights how cybersecurity suppliers are increasingly tying AI governance to operational controls, with a focus on data lineage, application behaviour, and policy enforcement once systems are in use.
