
I Built a Vibe-Check Tool — Then Ran It on an AI-Built Codebase and It Scored 0/100
The Setup

A few days ago I built vibe-check — a CLI that scores how much of your codebase was written by AI, file by file, from 0 (human) to 100 (vibe-coded). It works by detecting patterns that AI models reliably leave behind: over-commenting, generic naming conventions, hallucinated imports, repetitive structure, placeholder code.

I was happy with it. Ran it on a few projects, got plausible scores. Published a blog post. Got some stars on GitHub.

Then I ran it on NeuralDreamWorkshop — a full-stack BCI (brain-computer interface) app with a React frontend, Express middleware, FastAPI ML backend, and 16 machine-learning models for EEG emotion classification, sleep staging, and dream detection.

It returned: 0/100. MOSTLY HUMAN.

The Numbers

Here's what vibe-check found across the full repo:

    Scan path: /Users/sravyalu/NeuralDreamWorkshop
    Files analyzed: 244
    Skipped: 11
    Errors: 0
    ╭───────────────────────────── Repository Summary ─────────────────────────────╮
    │ Repo Vibe Score 0/100 — MOSTLY HUMAN
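The post doesn't show vibe-check's internals, but to make the idea concrete, here is a minimal sketch of one heuristic of the kind it describes: scoring a file by comment density (the "over-commenting" signal). All names here are hypothetical illustrations, not the tool's actual code, and a real scorer would combine many signals, not just this one.

```python
# Hypothetical sketch of an "over-commenting" heuristic, one of the
# signal types the post lists. NOT vibe-check's actual implementation.

def comment_density_score(source: str) -> int:
    """Return 0-100: higher means more comment-heavy (a weak AI signal)."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0
    # Count lines that are pure comments (Python/JS style prefixes).
    comments = sum(1 for ln in lines if ln.startswith(("#", "//")))
    density = comments / len(lines)
    # Map density onto 0-100; a file that is half comments caps at 100.
    return min(100, round(density * 200))

snippet = """\
# Initialize the counter variable
count = 0
# Increment the counter by one
count += 1
# Print the counter value
print(count)
"""
print(comment_density_score(snippet))  # 3 comments / 6 lines -> 100
print(comment_density_score("x = 1"))  # no comments -> 0
```

A heuristic like this also hints at why a genuinely well-structured repo could score 0: if none of the individual signals fire, the combined score bottoms out.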

