
# I stopped letting AI write tests after the code. Here's what happened.
## The dirty secret of AI-generated tests

Your AI coding assistant writes tests. Great. But when does it write them? After the code. After looking at the seeded data. After seeing the implementation.

That's not testing. That's the AI confirming its own work. It's like grading your own exam: you'll always pass.

## Write the tests BEFORE looking at the data

I built Don Cheli, an open-source framework where TDD is an iron law:

1. Describe what you want.
2. A spec gets generated (Gherkin with acceptance criteria).
3. Tests are written from the spec, before the AI has seen any data (RED).
4. Code is the minimum needed to make the tests pass (GREEN).
5. Refactor.

The framework blocks you from advancing if the tests don't exist first. No shortcuts. No `// TODO: add tests later`.

## But wait, there's more

Before you even start coding:

- `/razonar:pre-mortem`: Imagine the project already failed. Why? Fix it now.
- `/dc:estimate`: 4 models estimate effort independently (COCOMO, Planning Poker AI, Function Points, Historical).
- `/dc:debate`: PM vs A
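The RED/GREEN loop described above can be sketched in plain pytest-style Python. This is a minimal illustration, not Don Cheli's actual generated output (which this excerpt doesn't show); the `Cart` class and its acceptance criteria are hypothetical, standing in for tests derived from a spec before any implementation or seeded data exists:

```python
# RED phase: these tests are written from the spec alone. No implementation
# or seeded data has been seen yet, so they can only encode the acceptance
# criteria. (Cart is a hypothetical example class, not part of Don Cheli.)

def test_empty_cart_total_is_zero():
    assert Cart().total() == 0

def test_total_sums_item_prices():
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12

# GREEN phase: the minimum implementation that makes the tests pass.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# pytest would normally collect and run these; called directly here.
test_empty_cart_total_is_zero()
test_total_sums_item_prices()
print("all tests pass")
```

The point of the ordering is that the tests cannot be biased by the implementation: they fail first (RED), and only then is the smallest passing implementation written (GREEN), leaving refactoring as a separate, test-protected step.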
Continue reading on Dev.to


