How I built a tool that detects AI slop in codebases (and what patterns I found)

By Rohan San, via Dev.to

The Problem

I've been using AI coding assistants heavily for the past year: Cursor, Copilot, Claude through various interfaces. They're incredible for velocity, but I kept noticing the same lazy patterns slipping through code review:

- TODO comments everywhere, often contradicting the actual implementation
- Placeholder variable names like data2, temp, result_final
- Empty except blocks with just pass
- Entire blocks of commented-out code
- Functions named handle_it, do_stuff, process_data that do three unrelated things

The worst part? These patterns are invisible to traditional linters. pylint doesn't care whether your function is called processData or handleStuff. flake8 won't flag a TODO comment. But every human reviewer immediately spots them and questions whether you actually understand the code you're submitting.

So I built roast-my-code, a CLI that specifically hunts for these patterns.

The Detection Rules

The analyzer is built around three categories: AI Slop, Code Quality, and Style
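To make the idea concrete, here is a minimal sketch of how checks like these can be implemented: regexes for line-level patterns (TODO comments, placeholder names, commented-out code) and an AST walk for empty except blocks. The pattern names and rules below are my own illustrative assumptions, not roast-my-code's actual implementation.

```python
import ast
import re

# Line-level "slop" patterns (illustrative, not exhaustive).
SLOP_PATTERNS = {
    "todo_comment": re.compile(r"#\s*TODO\b", re.IGNORECASE),
    "placeholder_name": re.compile(r"\b(data\d+|temp|result_final)\b"),
    "vague_function": re.compile(r"\bdef\s+(handle_it|do_stuff|process_data)\b"),
    # A comment that starts with a statement keyword is likely commented-out code.
    "commented_out_code": re.compile(r"^\s*#\s*(def |class |import |return )"),
}


def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) findings for line-level patterns."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SLOP_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings


def find_empty_excepts(source: str) -> list[int]:
    """Line numbers of except handlers whose body is a lone `pass`."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                hits.append(node.lineno)
    return hits
```

The regex pass is fast and catches naming and comment smells; the AST pass is needed for structural patterns like a swallowed exception, which no line-by-line regex can reliably detect.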

Continue reading on Dev.to


