
I Let an AI Agent Use My Browser Tool Unsupervised. It Found 3 Bugs in 20 Minutes.
I build Charlotte, an open-source MCP server for browser automation. It renders web pages into structured, token-efficient representations that AI agents can read and act on. I've spent weeks benchmarking it, optimizing response sizes, and writing docs. But I'd never done the obvious thing: point an agent at a real task, give it Charlotte as its only browser tool, and just watch what happens.

So I did. No guidance, no hints, no "use this tool in this way." Just: "Test this UI feature in the browser."

The agent found three bugs in about twenty minutes. Not through any testing framework. Just by trying to get its job done and hitting walls.

The Setup

I had a locally running web app (a code review tool called Crit) with a new feature: comment template chips that appear when you click a line gutter to open a comment form. I wanted to verify the feature worked: light mode, dark mode, chip insertion, cursor positioning. I added Charlotte to the project's MCP config and told the agent to test the feature.
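For readers who haven't wired up an MCP server before, adding one to a project usually means an entry in the client's MCP config file. A minimal sketch of what that entry might look like, assuming a server launched over stdio via npx (the package name and args here are illustrative, not Charlotte's actual ones):

```json
{
  "mcpServers": {
    "charlotte": {
      "command": "npx",
      "args": ["-y", "charlotte-mcp"],
      "env": {}
    }
  }
}
```

With an entry like this in place, the agent sees Charlotte's browser tools alongside whatever else the client exposes, and no further setup is needed on the agent side.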
Continue reading on Dev.to Webdev


