
MCP in Production: Why Perplexity's CTO Walked Away (And What the Data Says)
Model Context Protocol looked like the future of AI tooling six months ago. Every major AI lab endorsed it. The GitHub stars piled up. Then Perplexity shipped it to production — and the numbers told a different story.

The 81% Problem Nobody Warns You About

The headline stat from Perplexity's internal post-mortem: 81% of their MCP context budget was consumed by protocol overhead, not actual tool content. That's not a configuration issue. That's the spec working as designed.

Here's the math that broke their token budget. A typical MCP tool manifest with 12 tools:

- ~4,200 tokens just to describe available tools
- JSON envelope overhead per tool call: ~340 tokens (request + response wrapping)
- Actual useful data returned per call: median 890 tokens
- Net efficiency ratio: roughly 1 token of value for every 4.7 tokens spent

At Perplexity's scale — millions of queries per day — that overhead isn't a rounding error. It's
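The bullet-point figures above can be turned into a quick sanity check. Here is a minimal Python sketch using the numbers quoted in the post; `calls_per_session` is my own assumption, since the article doesn't say how many tool calls a typical session makes:

```python
# Back-of-envelope sketch of the MCP overhead ratio described above.
# MANIFEST_TOKENS, ENVELOPE_TOKENS, and USEFUL_TOKENS are the figures
# quoted in the article; calls_per_session is an assumed parameter.

MANIFEST_TOKENS = 4_200   # 12-tool manifest, sent once per session
ENVELOPE_TOKENS = 340     # JSON request/response wrapping per call
USEFUL_TOKENS = 890       # median useful payload per call

def tokens_spent_per_useful_token(calls_per_session: int) -> float:
    """Total tokens consumed per token of useful payload.

    The manifest cost is paid once per session, so making more
    tool calls amortizes it and improves the ratio.
    """
    total = MANIFEST_TOKENS + calls_per_session * (ENVELOPE_TOKENS + USEFUL_TOKENS)
    return total / (calls_per_session * USEFUL_TOKENS)

if __name__ == "__main__":
    for calls in (1, 2, 5):
        ratio = tokens_spent_per_useful_token(calls)
        print(f"{calls} call(s): {ratio:.2f} tokens spent per useful token")
```

With one call per session the ratio works out to about 6.1 tokens spent per useful token; the article's ~4.7 figure falls between one and two calls per session, so the quoted numbers are at least mutually plausible.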
Continue reading on Dev.to Webdev



