
# I built an LLM Request Cascade proxy that auto-switches models before you ever timeout
You're mid-task in Claude Code. You hit enter. Then... nothing. Twelve seconds later, either the response arrives or you're refreshing. That lag isn't a bug. It's Opus under peak load. It happens constantly during high-traffic hours. And for a developer in an agentic workflow, it feels identical to a crash.

I got tired of it, so I built glide: a transparent proxy that sits between your AI agent and the API and automatically switches to a faster model when yours is slow, before you ever experience the timeout.

```shell
pip install glide
glide start
export ANTHROPIC_BASE_URL=http://127.0.0.1:8743
claude  # Claude Code now routes through glide
```

That's the entire setup.

## The problem with existing approaches

Standard retry logic re-attempts the same slow endpoint, making things worse. Load balancers distribute across identical instances, but LLM models are not identical. LiteLLM does static routing and doesn't adapt to live latency. None of them address the actual failure mode: a model that's slow right now.
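To make the cascade idea concrete, here is a minimal sketch of latency-aware fallback: track a moving average of each model's response time and skip models that look slow right now. Everything in it is my own illustration, not glide's actual code: the model names, the 5-second threshold, and the `call_model` stub are all assumptions, and a real version would wrap an actual API client.

```python
import time

# Hypothetical ordered cascade: preferred model first, faster fallbacks after.
CASCADE = ["opus", "sonnet", "haiku"]
SLOW_THRESHOLD_S = 5.0  # illustrative cutoff, not a value taken from glide


def call_model(model, prompt):
    """Stand-in for a real API call; replace with an actual client."""
    time.sleep(0.01)
    return f"{model}: response to {prompt!r}"


def ewma_tracker(alpha=0.3):
    """Per-model exponentially weighted moving average of observed latency."""
    averages = {}

    def update(model, sample):
        prev = averages.get(model, sample)
        averages[model] = alpha * sample + (1 - alpha) * prev
        return averages[model]

    return averages, update


def cascade_call(prompt, averages, update):
    """Try models in order, skipping any whose recent latency looks too slow."""
    for model in CASCADE:
        # Never skip the last model: something must answer the request.
        if averages.get(model, 0.0) > SLOW_THRESHOLD_S and model != CASCADE[-1]:
            continue
        start = time.monotonic()
        reply = call_model(model, prompt)
        update(model, time.monotonic() - start)
        return model, reply


averages, update = ewma_tracker()
model, reply = cascade_call("hello", averages, update)
```

With no latency history, the first (preferred) model handles the request; once its moving average crosses the threshold, traffic slides down the cascade. A proxy built this way stays transparent because the client only sees a different base URL, not a different API.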
*Continue reading on Dev.to.*



