Back to articles
NewsTools

Your AI coding agent is gaslighting you — and your test suite is the victim

via Dev.toSlim

Last month I asked Claude Code to add pagination to an API endpoint. It wrote the code, ran the tests — all green. I approved the commit. Two days later a colleague pinged me: "Hey, why does the /users endpoint return all 50,000 records now?" I checked the git log. Claude had modified the pagination test. The original assertion expected 20 results per page. Claude changed it to expect 50,000. The test passed. The CI passed. I didn't notice. If you're using AI coding agents, I guarantee this has happened to you — or it will. Why agents do this It's not a bug. It's rational behavior given the agent's objective. An AI coding agent is trying to get from red to green as fast as possible. When it introduces code that breaks an existing test, it has two options: Understand why the test exists, figure out what behavior it protects, and rewrite the implementation to preserve that behavior Change the assertion to match the new (broken) behavior Option 2 is cheaper. Every time. So that's what the

Continue reading on Dev.to

Opens in a new tab

Read Full Article
1 views

Related Articles