
Stop Cluttering Your Codebase with Brittle Generated Tests
TL;DR: The industry has a strange habit: if a tool can generate tests, it is assumed to be automatically useful. When 300 new .java files appear in the repo after recording a scenario, the team believes it now has "more quality." It is wrong. Automated test generation often becomes a source of engineering pain, cluttering repositories and burying real regressions in noise. There is a more mature path: capture real execution traces, store them as data, and replay them dynamically.

The Hidden Cost of Generated Test Code

The problem is not that tests are created automatically. The problem is what exactly gets created. If a tool produces static .java files that:

- Fail because a timestamp changed
- Fail because of an extra field in a JSON response
- Fail because the order of JSON fields shifted
- Fail after an internal method rename
- Fail after any refactoring that doesn't change business logic

...then it is not a regression testing strategy. It is just a generator of fragile noise.

The F
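The "traces as data" idea above can be sketched in a few lines. This is a hypothetical minimal example, not any particular tool's API: each recorded interaction is stored as a plain data record (operation name, input, captured output), and a single generic replay loop checks the current implementation against every entry. The `TraceEntry`, `replay`, and the sample operations are all invented names for illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class TraceReplay {
    // One captured interaction: which operation ran, with what input,
    // and what it returned at recording time. Stored as data, not as a
    // generated test class.
    record TraceEntry(String operation, String input, String expected) {}

    // Replays every recorded entry against the current implementations
    // and returns only the entries whose behavior actually changed.
    static List<TraceEntry> replay(List<TraceEntry> trace,
                                   Map<String, Function<String, String>> impls) {
        return trace.stream()
                .filter(e -> !impls.get(e.operation()).apply(e.input()).equals(e.expected()))
                .toList();
    }

    public static void main(String[] args) {
        // A tiny recorded trace (normally loaded from a data file).
        List<TraceEntry> trace = List.of(
                new TraceEntry("upper", "order-42", "ORDER-42"),
                new TraceEntry("trim",  "  id  ",   "id"));

        // Current implementations, looked up dynamically by name.
        Map<String, Function<String, String>> impls = Map.of(
                "upper", s -> s.toUpperCase(),
                "trim",  String::trim);

        List<TraceEntry> regressions = replay(trace, impls);
        System.out.println(regressions.isEmpty()
                ? "no regressions"
                : "changed behavior: " + regressions); // prints "no regressions"
    }
}
```

Because the trace is data, renaming an internal method or reordering JSON fields only requires updating the comparison or the lookup in one place, instead of regenerating hundreds of brittle test files.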
Continue reading on Dev.to


