
I Benchmarked AI Coding Assistants Against Real Work for Three Weeks
Three months ago my team lead asked me to pick one AI coding tool for our five-person team to standardize on. We're a fintech startup: TypeScript on the frontend, Django on the backend, and a fair amount of gnarly financial calculation logic. We couldn't have everyone on different tools. License costs aside, the context switching and "wait, how did you do that?" conversations were killing velocity. So I spent three weeks doing what I normally hate doing: structured testing.

I tested GitHub Copilot (using the Claude Sonnet backend, which is now the default for most plans), Cursor running claude-sonnet-4-6, Claude Code (Anthropic's CLI tool, v1.3.x at the time), and Windsurf. I deliberately left out Continue.dev; it's excellent for teams that want full control over their model routing, but the setup overhead wasn't realistic for us right now.

The Test Suite I Used (And Why Synthetic Benchmarks Are Mostly Useless)

Every "AI benchmark" I've read lists things like HumanEval scores or pass@k o
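For readers unfamiliar with the metric mentioned above: pass@k is commonly computed with the unbiased estimator from the original HumanEval paper, which gives the probability that at least one of k samples is correct when n samples were generated and c of them passed. A minimal sketch (the function name is my own, not from any benchmark library):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., HumanEval):
    probability that at least one of k samples, drawn without
    replacement from n generated samples of which c are correct,
    passes the tests."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples generated, 3 correct, evaluate pass@1
print(pass_at_k(10, 3, 1))
```

The point the article is building toward still stands: a clean pass@k number says nothing about how a tool behaves inside a real codebase with existing conventions and domain logic.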
Continue reading on Dev.to