
My Team Tracks AI-Generated Code. The Number Shocked Us.
My team tracks how much of our codebase is AI-generated, and the number shocked us. We deployed Buildermark last week, an open-source tool that scans Git history and flags AI-written lines.

Why We Started Measuring

Every startup has that moment. You're reviewing a PR and realize you can't tell who wrote it: the human or the AI. We hit 40% AI-generated code by volume. Some files were 90%. The CTO asked for the report, then asked what it meant. Nobody had an answer.

The Three Problems Nobody Talks About

→ Problem 1: Ownership blur

When AI writes the fix, who owns the bug? We found junior devs treating Claude output as gospel. They'd copy-paste without understanding, and senior engineers would approve because "it looks fine."

→ Problem 2: The review gap

Human-written code gets scrutinized. AI-written code gets rubber-stamped. We caught security issues in AI-generated config files, stuff a human would never write.

→ Problem 3: The bus factor

If your AI provider degrades (like Claude did last
Continue reading on Dev.to
