
# We Audited Anthropic's Official MCP Servers — Here's the Compliance Problem No One's Talking About
The Model Context Protocol has exploded: 88M+ monthly SDK downloads, 18,000+ servers, and adoption by Claude Code, Cursor, Windsurf, and every major AI coding tool. But here's the question nobody is asking: are these servers compliant with the regulations that take effect in months?

We built mcp-security-audit, an open-source tool that connects to any MCP server, enumerates its tools and resources, classifies risk levels, scans for injection patterns, and produces a scored report (0-100, grades A-F). Then we pointed it at Anthropic's own official reference servers.

## The Scorecard

| Server | Tools | Grade | Score | Findings |
|---|---|---|---|---|
| server-time | 2 | A | 100 | 0 |
| server-sequential-thinking | 1 | A | 100 | 0 |
| server-git | 12 | A | 100 | 0 |
| server-fetch | 1 | A | 100 | 0 |
| server-everything | 13 | A | 97 | 1 |
| server-memory | 9 | A | 97 | 1 |
| server-sqlite | 6 | C | 73 | 4 |
| server-filesystem | 14 | F | 7 | 7 |

Six servers passed clean. Two didn't. One failed catastrophically.

## The Worst Offender: server-filesystem

Score: 7/100. Grade: F. 7 findings. This is Anthropic's official filesystem server.
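To make the scoring concrete, here is a minimal sketch of the two pieces described above: a description scanner that flags injection-style patterns, and a score-to-grade mapper. The pattern list and the 90/80/70/60 grade cutoffs are assumptions for illustration, not the actual rules mcp-security-audit ships with.

```python
import re

# Illustrative injection patterns only -- the real mcp-security-audit
# pattern list is not reproduced here.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.I),
    re.compile(r"<!--.*?-->", re.S),       # hidden HTML comments
    re.compile(r"[\u200b\u200c\u200d]"),   # zero-width characters
]

def scan_description(text: str) -> list[str]:
    """Return the regex patterns that a tool description matches."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]

def grade(score: int) -> str:
    """Map a 0-100 score to a letter grade (assumed cutoffs: 90/80/70/60)."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"
```

With these assumed cutoffs, a 97 still earns an A, a 73 lands in C territory, and server-filesystem's 7/100 is an unambiguous F.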



