
The Tool That Turned on Itself: AI-Slop-Detector v2.9.0 & v2.9.1
v2.8.0 fixed the math. v2.9.0 gave the tool memory. v2.9.1 was the uncomfortable version where we ran the detector on its own source code — and then had to actually fix what it found. Here's the full story.

1. v2.9.0 — Just one more thing

After shipping v2.8.0, I looked at the codebase and had the thought that's never good: "This is almost there. Just one more thing." Three "one more things" later, v2.9.0 was done.

Problem 1: The tool had no memory

Every run produced a score, and that score disappeared. You had no way to know whether a file was getting better or worse, or whether the AI had been touching the same file repeatedly, each time nudging the deficit score up a little further. Most linters (static analysis tools that check code for problems without running it) work this way — scan, report, forget. For tracking AI-generated code quality over time, that's not enough: the direction of change matters as much as the score itself.

The fix: SQLite auto-recording on every run.
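A minimal sketch of what per-run auto-recording could look like. The table name, columns, and the `record_score`/`score_trend` helpers are hypothetical — the actual tool's schema may differ — but the idea is the same: append every score with a timestamp, then compare the last two rows to get the direction of change.

```python
import sqlite3
import time
from typing import Optional


def record_score(db_path: str, file_path: str, score: float) -> None:
    """Append this run's deficit score for a file (hypothetical schema)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS runs ("
            " file TEXT, score REAL, recorded_at REAL)"
        )
        conn.execute(
            "INSERT INTO runs (file, score, recorded_at) VALUES (?, ?, ?)",
            (file_path, score, time.time()),
        )


def score_trend(db_path: str, file_path: str) -> Optional[float]:
    """Latest score minus previous score; positive means getting worse."""
    with sqlite3.connect(db_path) as conn:
        # Order by rowid so two runs in the same clock tick still sort
        # in insertion order.
        rows = conn.execute(
            "SELECT score FROM runs WHERE file = ?"
            " ORDER BY rowid DESC LIMIT 2",
            (file_path,),
        ).fetchall()
    if len(rows) < 2:
        return None  # not enough history yet
    return rows[0][0] - rows[1][0]
```

With something like this in place, each scan becomes one cheap `INSERT`, and "is this file trending worse?" is a two-row query rather than a guess.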
Continue reading on Dev.to

