
I Tested 6 Sneaky Prompts Against .cursorrules and CLAUDE.md. They Caught Zero Violations.
"Your AI coding rules are suggestions, not enforcement. I tested semantic evasion attacks against keyword-based rule files — and built an open-source engine that actually stops violations before they hit your codebase Last month, my AI assistant dropped a production table. Not maliciously. I had a .cursorrules file that said NEVER delete patient records . I asked the AI to "clean up old patient data." It interpreted "clean up" as DELETE FROM. My rule file didn't catch it because the prompt never contained the word "delete." That's when I realized: every AI rule file in existence is a suggestion, not a constraint. The Problem Every Developer Hits Eventually If you've Googled "cursor keeps changing my code" or "AI broke my codebase" — you're not alone. The frustration is real and measurable: awesome-cursorrules has 39,000+ stars on GitHub. Developers desperately want to constrain AI behavior. AGENTS.md has been adopted by 60,000+ projects . Google built an entire spec for it. CLAUDE.md e
Continue reading on Dev.to