
I Audited 11 MCP Servers. 22,945 Tokens Before a Single Message.
Your AI tool definitions are eating your context window, and you probably don't know by how much. We measured. We collected real tool schemas from 11 popular MCP servers — GitHub, filesystem, git, Slack, Brave Search, Puppeteer, and more — 137 tools in total. The result: 22,945 tokens injected before your model reads a single user message. One server (GitHub) accounts for 69% of that, and we flagged 132 optimization issues across the set. Apideck quantified it too: one team burned 143,000 of their 200,000-token context on tool definitions alone. Scalekit's benchmarks show MCP costs 4-32x more tokens than CLI equivalents. This isn't theoretical — here's the data.

The baseline: one tool

Here's a simple function: two parameters, one docstring.

```python
@tool
def search_inventory(query: str, max_results: int = 10) -> str:
    """Search product inventory by name or SKU."""
    return "results"
```

In OpenAI function-calling format, this costs roughly 60 tokens. That includes the function name, description, parameter names, t
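To see where those ~60 tokens come from, here is a minimal sketch of the function-calling JSON that a tool decorator would plausibly generate for `search_inventory`, with a crude characters-per-token estimate. The field layout follows the OpenAI function-calling schema; the exact output of any particular `@tool` decorator may differ, and the 4-chars-per-token rule of thumb is only an approximation (use a real tokenizer like tiktoken for exact counts).

```python
import json

# Plausible function-calling schema for the search_inventory example.
# (Assumed shape — a given framework's @tool decorator may emit
# slightly different fields.)
schema = {
    "type": "function",
    "function": {
        "name": "search_inventory",
        "description": "Search product inventory by name or SKU.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer", "default": 10},
            },
            "required": ["query"],
        },
    },
}

# Rough estimate: English/JSON text averages ~4 characters per token.
serialized = json.dumps(schema)
approx_tokens = len(serialized) // 4
print(approx_tokens)  # lands in the same ballpark as the ~60 quoted above
```

Every tool a server exposes injects a schema like this into the prompt, so the per-tool cost multiplies directly by tool count.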
Continue reading on Dev.to