I Built a Security Scanner That Uses AI to Review Its Own Findings

by Alex LaGuardia, via Dev.to Python

Every AI coding tool ships code fast. None of them check if it's safe. I built Critik — an open-source security scanner that catches what your AI writes and your review misses. Regex and AST find the candidates. An LLM reviews each one with full file context, confirms the real problems, kills the false positives, and explains why in plain English. pip install critik and you're scanning in 30 seconds.

The Numbers Are Ugly

53% of teams that shipped AI-generated code later found security issues that passed review. Georgia Tech's Vibe Security Radar tracked 74 CVEs from AI coding tools in Q1 2026 alone — 6 in January, 15 in February, 35 in March. Accelerating.

Here's what I keep finding when I scan AI-built projects:

- Hardcoded API keys — Cursor generates a Supabase client and pastes the service_role key right in the file
- SQL injection via f-strings — Copilot autocompletes db.execute(f"SELECT * FROM users WHERE id = {user_id}") without blinking
- Firebase rules wide open — Bolt scaffolds read
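To make the "AST finds the candidates" stage concrete, here is a minimal sketch of how that SQL-injection pattern can be caught with Python's standard ast module. This is an illustration of the technique, not Critik's actual implementation; the SQL_SINKS set and function name are my own assumptions:

```python
import ast

# Hypothetical set of method names treated as SQL sinks for this sketch.
SQL_SINKS = {"execute", "executemany"}

def find_fstring_sql_candidates(source: str):
    """Flag calls like db.execute(f"... {expr} ...") — an f-string
    with interpolated values passed straight to a SQL sink."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Match both db.execute(...) and a bare execute(...).
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
        if name not in SQL_SINKS:
            continue
        for arg in node.args:
            # JoinedStr is an f-string; FormattedValue means it interpolates a value.
            if isinstance(arg, ast.JoinedStr) and any(
                isinstance(v, ast.FormattedValue) for v in arg.values
            ):
                findings.append((node.lineno, ast.unparse(node)))
    return findings

print(find_fstring_sql_candidates(
    'db.execute(f"SELECT * FROM users WHERE id = {user_id}")'
))
```

A parameterized call like db.execute("SELECT * FROM users WHERE id = %s", (user_id,)) passes through clean, because the query argument is a plain string rather than a JoinedStr. This stage is deliberately noisy — it only nominates candidates; in the pipeline described above, the LLM pass with full file context is what separates real injections from false positives.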

Continue reading on Dev.to Python
