
Your AI Copilot Might Be Poisoned: RAG Attacks and Why Static Analysis Still Wins
This week, a Hacker News post about document poisoning in RAG systems caught my attention. And over on Zenn (a Japanese dev community), someone found malware disguised as a "useful tool" on GitHub. These aren't isolated incidents. They're symptoms of the same problem: the code your AI writes is only as trustworthy as its training data and context. I've been building a security scanner specifically for AI-generated code for the past two weeks. Here's what I've learned about why this matters — and what actually works to catch the problems.

The Attack Surface Nobody Talks About

When you use an AI coding assistant, you're trusting:

- The model's training data — was any of it poisoned?
- The RAG context — are your docs, READMEs, and examples clean?
- The packages it suggests — are they typosquatted?
- The patterns it follows — are they secure by default?

The RAG poisoning paper shows how attackers can inject malicious content into the documents that AI systems use as context. Imagine someone submit
Continue reading on Dev.to



