
I used both Claude Sonnet 4.6 and Gemini 3.1 Pro for two weeks straight. Here's what nobody tells you.
Everyone's got a hot take on which AI is "better." Most of those takes are based on, like, one prompt they tried at 11 pm. I actually used both — back-to-back, same tasks, real projects — and I have thoughts. Spoiler: it's not what you'd expect.

The coding thing

Claude reads your prompt. Like, the whole thing. I gave it a gnarly debugging task with six constraints buried in the middle. It caught all of them. Didn't skip a single one. Debugging with Claude honestly feels like pairing with a senior dev who's slightly too focused — in a good way. It finds the issue, explains why it happened, and doesn't pad the response with stuff you didn't ask for.

Gemini... vibes. It's genuinely strong on algorithms and logic. But it'll occasionally add stuff you never mentioned — confidently — like it decided mid-response that you probably also needed that. Debugging with Gemini sometimes feels like asking a very confident intern. Not always wrong. Just... bold.

Design output — ok, I did not expect
Continue reading on Dev.to Webdev

