
I Tested 50 AI App Prompts for Injection Attacks. 90% Scored CRITICAL.
So I spent last week doing something slightly unhinged. I pulled 50 system prompts out of public AI app repos on GitHub — just sitting there in the code, plain text — and ran every single one through a prompt injection scanner. The average score was 3.7 out of 100 . Median? Zero . 35 out of 50 had no defenses at all. Not weak defenses. Not "could be better" defenses. Literally nothing. How I got here Last week I published results from scanning 100 vibe-coded apps for the usual security stuff — XSS, exposed secrets, missing auth. That was bad enough. But while I was going through those repos, I kept tripping over the same thing: system prompts just... sitting there. Zero guardrails. Not even a basic "don't reveal your instructions" line. Raw instructions to an LLM with zero thought given to what happens when a user decides to be creative with their input. I couldn't stop thinking about it. So I made it a project. Grabbed 50 AI-powered apps from public GitHub repos — chatbots, coding ass
Continue reading on Dev.to
Opens in a new tab




