
The missing tool in my AI coding stack was Playwright
AI is really good at frontend work. Definitely Opus 4.6. ChatGPT-5.4 is still catching up, but Claude has the crown right now. I use it a lot for design, UI ideas, and getting screens built fast.

But the biggest problem shows up right after that. The UI looks done. The code looks fine. Nothing throws errors. And then the actual product is broken.

I ran into this while building the application form in my open source ATS. Some fields just didn't work. No big red error or obvious crash. The AI had written something that looked correct, but it never actually tested the flow. I used to just manually check after the AI had written the code.

That's basically the problem with AI coding right now. A lot of code is almost right. A Stack Overflow survey found that 84% of developers use or plan to use AI tools, but more distrust the output than trust it: 46% vs 33%.

What fixed this for me was Playwright. I started writing tests for the critical flows. For me that means
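A critical-flow test for something like that application form could be sketched with Playwright roughly as follows. This is a minimal illustration, not code from the article: the URL, field labels, and success message are all assumptions you would swap for your own app's.

```typescript
// Sketch of an end-to-end test with @playwright/test.
// The route, labels, and success text below are hypothetical.
import { test, expect } from '@playwright/test';

test('application form submits successfully', async ({ page }) => {
  // Open the form (assumed local dev URL).
  await page.goto('http://localhost:3000/apply');

  // Fill the fields the way a real candidate would.
  await page.getByLabel('Full name').fill('Ada Lovelace');
  await page.getByLabel('Email').fill('ada@example.com');

  // Submit, then assert on what the user actually sees --
  // not just on the absence of errors in the console.
  await page.getByRole('button', { name: 'Submit' }).click();
  await expect(page.getByText('Application received')).toBeVisible();
});
```

Run with `npx playwright test` against a running dev server. The point is that the assertion targets the rendered outcome of the flow, which is exactly the kind of "looks correct but silently broken" failure the AI-generated form had.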
Continue reading on Dev.to Webdev



