
# 98% of MCP Tools Don't Tell AI Agents When to Use Them
We analyzed 78,849 tool descriptions across 15,923 MCP servers and AI skills. The results explain a lot about why AI agents feel "dumb."

**TL;DR:** Only 2% of tools tell the AI agent when to use them. Only 3% document their parameters. This is why AI agents pick the wrong tool, and it's fixable.

## The Numbers

| What AI Agents Need | What They Get |
| --- | --- |
| "What does this tool do?" (action verb) | 68% have one |
| "When should I use this tool?" (scenario trigger) | 2% have one |
| "What format should parameters be?" (param docs) | 3% have them |
| "Can you show me an example?" (param examples) | 7% have them |
| "What happens if it fails?" (error guidance) | 2% have it |

98% of tools don't tell the AI agent when to use them. The agent has to guess from the tool name and a vague description.

## Why This Is a Security Problem

As a Reddit user pointed out in response to our State of MCP Security report: "The missing usage guidance number is the one that doesn't get enough attention. When a tool doesn't tell the agent when to use it, th
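To make the gap concrete, here is a minimal sketch in Python of a tool definition with and without the five elements the survey measured. The tool name, description wording, and the `has_usage_trigger` helper are hypothetical illustrations, not taken from any particular MCP SDK; the helper only mirrors the kind of crude keyword check a survey like this might use.

```python
# A typical sparse tool description: an action verb and nothing else.
# The agent must guess when to call it and how to format arguments.
sparse_tool = {
    "name": "search_invoices",
    "description": "Searches invoices.",
    "inputSchema": {"type": "object", "properties": {"q": {"type": "string"}}},
}

# The same tool with all five elements the survey measured:
# action verb, scenario trigger, param docs, param example, error guidance.
documented_tool = {
    "name": "search_invoices",
    "description": (
        "Searches invoices by customer name, invoice number, or date range. "
        "Use this when the user asks about billing history or a specific "
        "invoice; do NOT use it to create or edit invoices. Returns an empty "
        "list (not an error) when nothing matches; fails with INVALID_QUERY "
        "if the query string is empty."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "q": {
                "type": "string",
                "description": "Free-text query, e.g. 'INV-2024-0042' "
                               "or 'Acme Corp March'.",
            }
        },
        "required": ["q"],
    },
}

def has_usage_trigger(tool: dict) -> bool:
    """Crude check for a scenario trigger ("when should I use this?")."""
    desc = tool["description"].lower()
    return "use this when" in desc or "use when" in desc

print(has_usage_trigger(sparse_tool))      # False
print(has_usage_trigger(documented_tool))  # True
```

The documented version costs a few extra sentences per tool, but it is exactly the information an agent needs to pick the right tool instead of guessing from the name.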
Continue reading on Dev.to DevOps


