
The METR Study Changed How I Think About AI Coding
Last month, METR published a study that should make every developer uncomfortable. They took 16 experienced open-source developers (people who knew their codebases inside out) and randomly assigned tasks to be done with or without AI tools. The developers predicted AI would make them 24% faster. The measured result? 19% slower. And here's the kicker: after the study, participants still believed AI had helped them.

I've been using AI coding tools daily for the better part of a year. When I read that study, my first reaction was "well, those developers must have been doing it wrong." My second reaction was: that's exactly the kind of thinking the study warns about.

The perception gap is the real finding

The speed numbers get all the attention, but I think the important finding is the perception gap. We feel faster because AI handles the boring parts: boilerplate, syntax, the stuff that feels like work but isn't where the actual difficulty lives. Meanwhile, the hard parts get harder: u
Continue reading on Dev.to Webdev

