
The Hidden Incompatibilities Between LLM Providers
If you've ever tried to swap one LLM provider for another, you've probably noticed that things aren't as interchangeable as the docs suggest. On paper, the APIs are converging: they all accept messages, they all support tool calling, and they all accept JSON Schemas for structured output. In practice, each provider has opinions about what "valid" means, and those opinions don't always agree with each other, or with the relevant specs.

At FutureSearch, our everyrow.io app is powered by tens of thousands of LLM calls per day across Anthropic, Google, and OpenAI. Our system needs to be able to route any task to any provider depending on performance, cost, and availability. There are middleware services that aim to abstract away the provider-specific differences, but they aren't perfect, so we often need to address these quirks ourselves.

Over the past year, we've accumulated a collection of provider-specific patches and workarounds. None of them is individually complicated, but together they add up.
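To make the "valid means different things" problem concrete, here is a minimal sketch of the kind of schema normalization layer this implies. The provider names and the specific rules below are illustrative assumptions, not a description of our actual patches: the function rewrites one JSON Schema into whatever shape a given provider will accept before sending it as a structured-output spec.

```python
from copy import deepcopy

def normalize_schema(schema: dict, provider: str) -> dict:
    """Illustrative sketch: adapt a JSON Schema to one provider's
    notion of "valid". Provider rules here are assumptions."""
    out = deepcopy(schema)  # never mutate the caller's schema

    def walk(node):
        if isinstance(node, dict):
            if provider == "openai" and node.get("type") == "object":
                # Assumed rule: strict structured outputs want
                # additionalProperties: false and every property required.
                node["additionalProperties"] = False
                node["required"] = list(node.get("properties", {}))
            if provider == "google":
                # Assumed rule: drop keywords a stricter schema
                # subset might reject outright.
                for key in ("additionalProperties", "$schema"):
                    node.pop(key, None)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(out)
    return out
```

A router can then call `normalize_schema(schema, provider)` at dispatch time, keeping one canonical schema in the codebase while each outgoing request gets the dialect its destination expects.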




