OpenAI has announced a new AI model called GPT-5.4-Cyber. Similar to Anthropic’s Claude Mythos, this new “cyber-permissive” variant of GPT-5.4 is built for defensive cybersecurity rather than general public use.
OpenAI’s newest variant is meant to prepare the way for more capable models to come
OpenAI says that GPT-5.4-Cyber, its new variant of GPT-5.4, is specifically meant to prepare the way for more capable models coming this year.
In preparation for increasingly more capable models from OpenAI over the next few months, we are fine-tuning our models specifically to enable defensive cybersecurity use cases, starting today with a variant of GPT‑5.4 trained to be cyber-permissive: GPT‑5.4‑Cyber.
Access to GPT-5.4-Cyber is limited to “the highest tier” of “users willing to work with OpenAI to authenticate themselves as cybersecurity defenders.”
Trusted Access for Cyber is required for using GPT-5.4-Cyber
OpenAI says this is because GPT-5.4-Cyber is “purposely fine-tuned for additional cyber capabilities and with fewer capability restrictions.”
This is a version of GPT‑5.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows, including binary reverse engineering capabilities that enable security professionals to analyze compiled software for malware potential, vulnerabilities and security robustness without needing access to its source code.
Because this model is more permissive, we are starting with a limited, iterative deployment to vetted security vendors, organizations, and researchers.
The rollout is part of an expanded version of Trusted Access for Cyber (TAC), a cybersecurity initiative launched by OpenAI earlier this year. The company highlights two methods for gaining access to Trusted Access for Cyber.
You can learn more about Trusted Access for Cyber here, and GPT-5.4-Cyber here.
OpenAI also recently introduced a new version of its Pro plan, aimed at Codex users.