
I Ran My Own AI-Score Tool on 6 Months of My Code. I Had to Sit With the Results.
I built a CLI tool a few months ago that does one thing: it scans a codebase and estimates what percentage of the code was likely written by AI versus a human. I called it ai-score. I was proud of it. I wrote a blog post, got some stars, moved on. Last week I ran it on everything.

## The Tool

If you haven't seen it, ai-score works by running a combination of pattern analysis — structural regularity, comment-to-code ratios, naming consistency, docstring presence — against a set of heuristics trained on code I labeled by hand. It's not perfect. It's not trying to be. But it catches the things I've noticed about my own AI-assisted code: the eerily even function lengths, the variable names that are always exactly descriptive enough, the error handling that covers every case but somehow feels hollow.

You can install it and run it on your own repos:

```shell
pip install ai-score
ai-score ./your-project
```

Source: github.com/LakshmiSravyaVedantham/ai-score

The output looks like this:

```shell
$ ai-score ./skill_bui
```
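To make the signals above concrete, here is a minimal sketch of how heuristics like function-length regularity, docstring presence, and comment-to-code ratio could be computed over a Python source file. This is purely illustrative — `heuristic_signals` is a hypothetical function, not the actual ai-score implementation; the real heuristics and weights live in the repo.

```python
import ast
import statistics

def heuristic_signals(source: str) -> dict:
    """Sketch of a few surface signals (not the real ai-score code)."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]

    # Structural regularity: low variance in function lengths is one
    # of the "eerily even" patterns the article describes.
    lengths = [n.end_lineno - n.lineno + 1 for n in funcs]
    length_stdev = statistics.pstdev(lengths) if lengths else 0.0

    # Docstring presence: fraction of functions carrying a docstring.
    with_doc = sum(1 for n in funcs if ast.get_docstring(n) is not None)
    doc_ratio = with_doc / len(funcs) if funcs else 0.0

    # Comment-to-code ratio over non-blank lines.
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    comments = sum(1 for l in lines if l.startswith("#"))
    comment_ratio = comments / len(lines) if lines else 0.0

    return {
        "function_length_stdev": length_stdev,
        "docstring_ratio": doc_ratio,
        "comment_ratio": comment_ratio,
    }

sample = '''
def add(a, b):
    """Add two numbers."""
    # simple sum
    return a + b

def sub(a, b):
    """Subtract b from a."""
    return a - b
'''
print(heuristic_signals(sample))
```

A real scorer would combine signals like these (plus naming-consistency checks) into a single estimate via hand-tuned or trained weights; the sketch stops at raw measurements.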


