How Zscaler and OpenAI turn zero-trust security into an AI accelerator #AI


Zscaler Inc.‘s new partnership with OpenAI Group PBC has the potential to transform the security firm’s cloud-native zero-trust platform into an engine that can both harden its own stack and help customers deploy artificial intelligence with confidence at large scale.

Understanding the news

Zscaler has joined OpenAI’s Trusted Access for Cyber or TAC program, gaining access to security-tuned frontier models, including GPT-5.4-Cyber, and related APIs to strengthen its Zero Trust Exchange and AI Red Teaming and SecOps offerings. Internally, Zscaler is integrating GPT-5.4-Cyber and Codex-style security models into a multi-agent security architecture and a secure SDLC, providing developers with “Security-as-a-Service” to detect and remediate vulnerabilities earlier. Externally, these OpenAI capabilities power Zscaler’s AI Red Teaming and OpenAI-assisted MDR investigations.

Why TAC and GPT5.4Cyber matter

OpenAI’s TAC program is a gated-access framework that provides vetted defenders with tiered access to increasingly capable models, culminating in GPT-5.4-Cyber, a variant tuned for defensive cybersecurity tasks such as vulnerability discovery, binary analysis and exploit chain reasoning. The intent is to put more powerful, offensive-style analysis in the hands of trusted security teams while enforcing identity, usage policies and safeguards to reduce abuse. In other words, let the security industry use AI to fight AI threats.

By being part of TAC, Zscaler gains early, deep access to these capabilities and can embed them in detection pipelines, SDLC workflows and red teaming tooling, rather than treating ChatGPT-like systems as a sidecar productivity tool. That distinction matters because it turns frontier models into core infrastructure for how Zscaler builds, tests and runs its security cloud — essentially “compiling” AI into the fabric of the service.

Hardening the platform, shifting risk left

From a customer perspective, the first benefit is a harder underlying platform. Zscaler is integrating GPT-5.4-Cyber and Codex Security into its secure SDLC and internal multi-agent security architecture, providing developers with on-demand AI review of code, configurations and dependencies as they build. This functions as an always-on security-as-a-service layer, catching flaws before they ship and shrinking the window of exposure.

Because the models are tuned for cyber use cases, they can reason about complex vulnerability patterns across code, infra-as-code and policy, not just surface-level linting or pattern matching, and can propose specific remediation steps as part of the workflow.

This is classic “shift left,” but with a twist: Instead of bolting security checks onto DevOps pipelines, Zscaler is using AI to embed security knowledge into every commit and deployment, ultimately increasing resilience for all customers using the Zero Trust Exchange.

Safer, faster enterprise AI deployments

The more strategic angle is how this partnership accelerates customers’ own AI initiatives.

Most large enterprises are now doing three things at once:

  • Standing up internal large language model platforms and agent frameworks.
  • Exposing AI-powered features in customer-facing apps.
  • Connecting those systems to sensitive data in software-as-a-service, infrastructure-as-a-service and private apps.

That stack introduces a new attack surface: prompt injection, data exfiltration via tools, insecure agents, misconfigured model endpoints and brittle guardrails. Zscaler’s OpenAI-powered capabilities address that problem in three ways.

AI Red Teaming for real AI systems

Zscaler’s AI Red Teaming platform has been built around OpenAI models since early 2024 and now adds GPT-5.4 Cyber as a new back-end engine.

Key advantages for enterprise AI teams include:

Realistic attack simulation: Using OpenAI’s text, image and speech models, Zscaler can generate sophisticated, multimodal attack sequences against customers’ AI apps such as prompt injection, tool abuse, jailbreaks and model confusion at a scale and level of creativity that can’t be matched with human-only teams.

Instant remediation: The platform goes beyond “here’s a vulnerability” reporting by automatically generating optimized system prompts, policy updates and configuration hardening steps to close the gaps it discovers. This shortens the loop from discovery to fix, which is especially important when AI features ship weekly.

AI asset and agent analysis: Zscaler analyzes MCP tools and AI agents, including source code and integration patterns, to produce a global risk posture for the customer’s AI estate. For organizations whose AI footprint has sprawled across business units, this serves as a radar that prioritizes which agents and tools need immediate hardening. In practical terms, this lets enterprises harden AI apps before rolling them out. Run red teaming, accept or mitigate findings, and ship with evidence that the system has been tested against frontier-grade adversarial creativity.

Guardrailed, AI-accelerated SecOps

Even if the AI apps are secured, the rest of the environment must still detect and respond to incidents that move faster than human-only teams can handle.

Zscaler’s Red Canary MDR model addresses this by pairing OpenAI agents with human analysts in a “human-in-the-loop” design. AI agents handle tedious tasks such as enriching alerts with context, correlating signals across Zscaler data pipelines, and assembling timelines and likely root causes. Human experts remain in charge, defining workflows, enforcing guardrails and validating outputs to help Zscaler maintain its 99.6% true-positive rate.

Zero trust as the AI safety net

The partnership also reinforces Zscaler’s core zero-trust message: Even as the company leans into AI, it is doing so on an architecture that makes applications invisible to the public internet and eliminates traditional VPN- and firewall-based attack surfaces. This was a big topic of conversation with Zscaler executives at the recent RSAC cybersecurity event.

This is important for AI rollouts because:

  • LLM endpoints, vector stores and AI gateways often end up exposed as new public services. Putting them behind Zscaler’s Zero Trust Exchange means they are reachable only from authenticated, authorized users and workloads, not from the open internet.
  • As models gain access to sensitive tools — databases, SaaS application programming interfaces internal apps — zero-trust policies can constrain which users and agents can invoke which tools, from which devices and locations, with full inspection and logging.

The net effect: Enterprises can move faster on AI experiments and production deployments because they are building on a platform that assumes compromise, collapses lateral movement and limits blast radius by design.

What this unlocks for enterprise AI roadmaps

For CIOs and CISOs driving AI agendas, Zscaler’s OpenAI partnership is a signal that security and AI can compound rather than collide. Three practical accelerators emerge:

Faster experimentation with guardrails

Red teaming-as-a-service plus zero-trust controls mean teams can spin up pilots with less fear that a misconfigured agent or endpoint will expose sensitive data. Security can move from being the “department of no” to a partner that offers reusable patterns: red teaming templates, prompt policies, AI guardrails and network controls that come pre-validated.

Enterprise-wide AI security baselines

With AI asset discovery and analysis, organizations can build a unified inventory of AI apps, agents and tools and apply consistent policies, such as data access, logging, red teaming cadence, across business units. GPT5.4Cyber’s analysis capabilities can help normalize findings and recommendations, avoiding the “every team does AI security differently” anti-pattern that slows approvals.

Continuous, closed-loop improvement

Findings from AI Red Teaming inform SDLC improvements and platform hardening, while insights from MDR investigations feed back into detection logic and agent behavior. Because all three loops — build, attack, respond — are now AI-accelerated, the overall time-to-secure can keep pace with time-to-deploy, which is the core bottleneck for many AI programs today.

How customers should respond

For enterprises already using Zscaler and betting big on AI, the move suggests a few immediate actions:

  • Engage Zscaler’s AI Red Teaming capabilities early in your AI lifecycle — treat them as a standard gate for any model or agent that touches sensitive data.
  • Align your internal AI governance with Zscaler’s Zero Trust controls: treat LLMs and agents as first-class applications that must sit behind zero trust, with least-privilege access to data and tools.
  • Work with Zscaler to plug MDR outputs and AI-driven detections into your broader security operations center and threat intel workflows, so AI-linked incidents don’t get siloed.

By tying frontier models like GPT-5.4-Cyber to Zero Trust, red teaming and managed detection and response, Zscaler is trying to give customers what they badly need in 2026: a way to ship AI faster without silently expanding their attack surface. If it executes, this could be one of the stronger blueprints for securing the enterprise AI lifecycle from code to production. This gives customers security confidence, which is currently very low.

Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Image: SiliconANGLE/Google Gemini

Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.

  • 15M+ viewers of theCUBE videos, powering conversations across AI, cloud, cybersecurity and more
  • 11.4k+ theCUBE alumni — Connect with more than 11,400 tech and business leaders shaping the future through a unique trusted-based network.

About SiliconANGLE Media

SiliconANGLE Media is a recognized leader in digital media innovation, uniting breakthrough technology, strategic insights and real-time audience engagement. As the parent company of SiliconANGLE, theCUBE Network, theCUBE Research, CUBE365, theCUBE AI and theCUBE SuperStudios — with flagship locations in Silicon Valley and the New York Stock Exchange — SiliconANGLE Media operates at the intersection of media, technology and AI.

Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.



Click Here For The Original Source.

——————————————————–

..........

.

.

National Cyber Security

FREE
VIEW