The Claude Code Leak Changed the Threat Model. Here's How to Defend Your AI Agents.
How-To · Security


via Dev.to / temp-noob

IntentGuard — a policy enforcement layer for MCP tool calls and AI coding agents

The Leak That Rewrote the Attacker's Playbook

On March 31, 2026, 512,000 lines of Claude Code source were accidentally published via an npm source map. Within hours the code was mirrored across GitHub. What had previously been extractable only from the minified bundle became instantly readable: the compaction pipeline, every bash-security regex, the permission short-circuit logic, and the exact MCP interface contract.

The leak didn't create new vulnerability classes — it collapsed the cost of exploiting them. Attackers no longer need to brute-force prompt injections or reverse-engineer shell validators. They can read the code, study the gaps, and craft payloads that a cooperative model will execute and a reasonable developer will approve.

Three findings from the leak are especially alarming: Context poisoning via compaction — MCP tool results are never micro-compacted; the auto-compact prompt faithfully preserves "us
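The excerpt frames IntentGuard as a policy enforcement layer for MCP tool calls. As a rough illustration of that idea (every name here is hypothetical and not taken from IntentGuard's actual API), such a layer could validate each proposed tool call against an explicit allowlist and a set of deny patterns before execution:

```python
import re

# Hypothetical policy, for illustration only: an allowlist of tool names
# plus deny patterns applied to the call's arguments.
ALLOWED_TOOLS = {"read_file", "list_dir", "run_tests"}
DENY_PATTERNS = [
    re.compile(r"curl\s+.*\|\s*(ba)?sh"),  # pipe-to-shell downloads
    re.compile(r"rm\s+-rf\s+/"),           # destructive recursive deletes
    re.compile(r"\bchmod\s+777\b"),        # blanket permission grants
]

def check_tool_call(tool: str, args: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed MCP tool call."""
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' is not on the allowlist"
    for pat in DENY_PATTERNS:
        if pat.search(args):
            return False, f"arguments matched deny pattern {pat.pattern!r}"
    return True, "ok"
```

The design point, whatever the real implementation looks like, is that the policy check sits outside the model: even a payload the model cooperatively produces and a developer reflexively approves still has to pass an independent gate.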

Continue reading on Dev.to
