
I Built a CLI That Auto-Fixes Failing Python Tests
I built a CLI tool that analyzes pytest failures and proposes fixes automatically.

How It Works

1. Runs your test suite
2. Analyzes failure patterns
3. Proposes a fix with a diff preview
4. You approve or reject
5. Re-runs the suite to verify no regressions

Demo

VID-20260220-WA0000.mp4 (Google Drive)

Current Approach

Right now it's mostly pattern-based: it analyzes pytest output and looks for common bugs like wrong operators, flipped logic, and off-by-one errors. There's an LLM fallback for trickier cases, but the core is deterministic to keep it predictable and safe. The goal is to automate the test → fix → validate loop, not just explain errors the way ChatGPT would.

What's Next

I'm looking to handle more complex failure patterns and to improve the global optimization approach (scoring multiple candidate fixes across the full test suite). I'm also looking for beta testers - DM me if you want to try it on a real project.

Feedback Welcome

What kind of test failures would you find most useful to auto-fix?
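For readers curious about the mechanics: the run → analyze steps of the loop can be sketched roughly like this. This is a minimal illustration, not the tool's actual code - `run_suite` and `parse_failures` are hypothetical names, and the real analyzer does far more than grep the summary lines.

```python
import re
import subprocess

# Matches pytest's short summary lines, e.g.
#   FAILED tests/test_math.py::test_add - assert 2 == 3
FAILURE_RE = re.compile(r"^FAILED (\S+?)::(\S+)", re.MULTILINE)

def run_suite(path="."):
    """Run pytest and capture its output (pytest exits non-zero on failure)."""
    proc = subprocess.run(
        ["pytest", path, "-q", "--tb=short"],
        capture_output=True, text=True,
    )
    return proc.returncode, proc.stdout

def parse_failures(output):
    """Extract (file, test_name) pairs from pytest's summary output."""
    return FAILURE_RE.findall(output)
```

From there, each `(file, test)` pair is what a pattern matcher (or the LLM fallback) would take as input to propose a fix.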
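The candidate-scoring idea mentioned under "What's Next" might look something like this in miniature. Here pass/fail dictionaries stand in for real pytest runs, and all the names are illustrative; the point is just the policy: reward newly passing tests and veto any candidate that breaks a previously passing one.

```python
def score_candidate(results_before, results_after):
    """Score a candidate fix: +1 per newly passing test, -1 if it regresses anything."""
    fixed = sum(
        1 for test, ok in results_after.items()
        if ok and not results_before.get(test, False)
    )
    regressed = any(
        not ok and results_before.get(test, False)
        for test, ok in results_after.items()
    )
    return -1 if regressed else fixed

def pick_best(results_before, candidates):
    """Pick the candidate fixing the most tests with no regressions, else None."""
    scored = [(score_candidate(results_before, after), name)
              for name, after in candidates.items()]
    best_score, best_name = max(scored)
    return best_name if best_score > 0 else None
```

Scoring against the full suite rather than the single failing test is what makes the approach "global": a patch that fixes one test but breaks two others is rejected outright.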



