
I built an open-source tool that lets AI agents hire real humans to test your product
The problem

I use AI coding tools (Claude Code, Cursor) for almost everything. Development speed is incredible: what used to take a week now takes an afternoon. But there's a gap nobody talks about: "it compiles" ≠ "users can use it". I kept shipping features that worked perfectly in my terminal but confused real users. And manual QA was eating all the time I saved from AI coding. The irony was painful.

Traditional UX testing tools like UserTesting or Maze didn't help either: they're built for product managers reviewing dashboards, not for AI agents writing code.

What I built

human_test() is an open-source platform that closes the loop between AI coding and real user experience. Tell your agent: "Test my app at localhost:3000, focus on the signup flow". Then your agent calls human_test(), 5 real humans test your product with screen recording and audio narration, AI analyzes the recordings and generates a structured report, finds 3 critical issues, auto-generates fixes, and creates a P
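To make the loop concrete, here is a minimal sketch of what an agent-side call to human_test() might look like. The real API is not shown in this excerpt, so the parameter names (url, focus, testers) and the report shape are assumptions, and the stub below only simulates a response rather than dispatching real testers.

```python
# Hypothetical sketch of human_test(); parameter names and return
# shape are assumptions, not the platform's actual API.

def human_test(url: str, focus: str, testers: int = 5) -> dict:
    """Dispatch a test session to human testers and return a report.

    In the real platform, each session would include a screen recording
    and audio narration, which an AI then distills into structured
    issues; here we just return a placeholder report of that shape.
    """
    return {
        "url": url,
        "focus": focus,
        "sessions": testers,
        "issues": [
            {"severity": "critical", "summary": "placeholder issue"},
        ],
    }

report = human_test("http://localhost:3000", focus="signup flow")
print(report["sessions"], len(report["issues"]))
```

The agent would then read the structured issues out of the report and feed them back into its next coding pass, which is the "closing the loop" step described above.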
Continue reading on Dev.to Webdev
