
I built a replay testing tool for MCP servers — here's why and how it works
When your AI agent does something unexpected, where do you look? For most teams right now: stderr noise, missing logs, or vendor black boxes. The execution path disappears, you have no idea what the agent actually sent to the tool, and there's no way to reproduce the failure in a test. I kept hitting this wall while building MCP agents, so I built mcpscope — an open source observability and replay testing layer for MCP servers.

The problem

MCP (Model Context Protocol) is becoming the standard way AI agents call external tools. But the tooling around it is still catching up. When something goes wrong in production:

- There's no standard trace format for MCP traffic
- Tool call failures vanish into stderr with no context
- Schema changes on upstream servers break your agent silently
- There's no way to reproduce a production failure in a test environment

This is the gap mcpscope fills.

How it works

mcpscope is a transparent proxy. You point it at your MCP server and it intercepts every JSON-RPC message that passes between your agent and the server.


