
NadirClaw 0.8: Vision Routing and the Silent Failure It Fixed
Here's a bug that's annoying to diagnose: you send a screenshot to Cursor and get a response that clearly didn't look at the image. You try again. Same thing. You figure it's a model issue and move on. But if you were running NadirClaw in front of Cursor, the bug was in the router.

How NadirClaw routes requests

Before 0.8, here's what happened when you sent an image. NadirClaw's classifier embeds your prompt using sentence embeddings and compares it to two pre-computed centroid vectors, one for "simple" and one for "complex". This takes ~10ms and makes no extra API call. Your screenshot is probably attached to a short message like "what's wrong here?", which classifies as simple. Simple routes to your cheap model, and if that's DeepSeek or an Ollama model, neither supports vision.

The multimodal content array (the image_url part) got flattened to text before hitting LiteLLM. The image disappeared. DeepSeek answered based on the text alone. Looked wrong. Was wrong. No error. No log warning. Just a bad answer.
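The centroid comparison above can be sketched in a few lines. This is a minimal illustration, not NadirClaw's actual code: the centroid vectors, embedding dimension, and model names here are made up, and a real deployment would embed the prompt with a sentence-embedding model rather than hard-coding vectors.

```python
import numpy as np

# Hypothetical centroids, standing in for NadirClaw's pre-computed
# "simple" and "complex" vectors (illustrative values, tiny dimension).
SIMPLE_CENTROID = np.array([0.9, 0.1, 0.0])
COMPLEX_CENTROID = np.array([0.1, 0.8, 0.5])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def route(prompt_embedding: np.ndarray) -> str:
    """Pick a route by nearest centroid -- no extra API call needed."""
    if cosine(prompt_embedding, SIMPLE_CENTROID) >= cosine(prompt_embedding, COMPLEX_CENTROID):
        return "cheap-model"   # e.g. DeepSeek or a local Ollama model
    return "strong-model"

# A short prompt like "what's wrong here?" lands near the simple centroid:
print(route(np.array([0.85, 0.15, 0.05])))  # cheap-model
```

Because the whole decision is one embedding plus two dot products, it stays in the ~10ms range; the cost is that the classifier only sees text, which is exactly why an attached image can't influence the route.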
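The silent-failure mechanics can be made concrete with a small sketch. The function names are hypothetical, but the shape of the data is the standard OpenAI-style multimodal content array: flattening it to text keeps only the `text` parts, so the `image_url` part vanishes without any error.

```python
# Hypothetical sketch of the pre-0.8 failure mode. Names are illustrative.

def flatten_content(content) -> str:
    """Pre-0.8 behavior: keep only text parts, silently dropping images."""
    if isinstance(content, str):
        return content
    return " ".join(p["text"] for p in content if p.get("type") == "text")

def has_images(content) -> bool:
    """What a fix needs: detect image parts so the router can force a
    vision-capable model instead of flattening the array."""
    return not isinstance(content, str) and any(
        p.get("type") == "image_url" for p in content
    )

message = [
    {"type": "text", "text": "what's wrong here?"},
    {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
]

print(flatten_content(message))  # "what's wrong here?" -- the image is gone
print(has_images(message))       # True -- route to a vision model instead
```

Note that the flattened result is a perfectly valid text prompt, which is why nothing downstream complains: the cheap model just answers the question it was actually given.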




