
I Scanned a 1K-Star Cursor Project. AI Code Doesn't Look Like AI Code Anymore.
There's a common belief that AI-generated code is easy to spot: obvious comments, step-by-step numbered instructions, hedging language like "might need to adjust this later." I built vibecheck, a static analysis tool that detects these patterns. I ran it against ryOS, a 1,100-star web-based macOS clone built entirely with Cursor by Ryo Lu (Head of Design at Cursor). If any project would have AI fingerprints, this one would. The results surprised me.

Zero comment-level AI tells

None. No "// Initialize the state variable" above a useState. No "// Step 1: Fetch the data." No narrator comments, no hedging, no placeholder stubs. The code reads clean line by line. AI-generated code has evolved past the obvious tells. The models learned to stop over-explaining. If you're still looking for bad comments as your AI detector, you're looking at last year's problem.

The smell moved to architecture

vibecheck found 4,523 issues across 378 files. Here's where the signal actually is: God functions. M
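The article doesn't show vibecheck's actual rules, but the comment-level tells it describes (narrator comments, step numbering, hedging language) lend themselves to a simple pattern scan. Here is a minimal, hypothetical sketch of that idea; the pattern names and regexes are my own illustration, not vibecheck's implementation:

```javascript
// Hypothetical sketch of comment-level AI-tell detection.
// Flags the three tells named in the article: narrator comments,
// numbered step comments, and hedging language.
const TELL_PATTERNS = [
  { tell: "narrator", re: /^\/\/\s*(Initialize|Create|Define|Set up)\b/i },
  { tell: "step",     re: /^\/\/\s*Step\s+\d+\b/i },
  { tell: "hedging",  re: /might need to adjust|you may want to/i },
];

function scanComments(source) {
  const findings = [];
  source.split("\n").forEach((line, i) => {
    const trimmed = line.trim();
    if (!trimmed.startsWith("//")) return; // only inspect comment lines
    for (const { tell, re } of TELL_PATTERNS) {
      if (re.test(trimmed)) findings.push({ line: i + 1, tell });
    }
  });
  return findings;
}

// The classic tells the article says are now absent from ryOS:
const sample = [
  "// Initialize the state variable",
  "const [count, setCount] = useState(0);",
  "// Step 1: Fetch the data",
].join("\n");

console.log(scanComments(sample));
// → [ { line: 1, tell: 'narrator' }, { line: 3, tell: 'step' } ]
```

A real tool would parse comments with an AST (e.g. via a linter plugin) rather than line-by-line regexes, but the point stands either way: these surface patterns are cheap to detect, which is exactly why they've disappeared.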
Continue reading on Dev.to
