
# How to Set Up NadirClaw with Docker + Ollama for Zero-Cost Local LLM Routing
I run a lot of AI coding tools: Claude Code, Cursor, Continue. They all burn through API credits fast, because simple tasks like "read this file" or "what does this function do?" hit the same expensive models as complex refactoring requests.

NadirClaw is an LLM router I built to fix this. It classifies prompts and routes simple ones to cheap (or free) models and complex ones to premium models. The result: 40-70% lower API bills.

This tutorial shows you how to run NadirClaw with Ollama in Docker for completely free local routing. No API keys, no costs, no external dependencies.

## What You'll Build

By the end of this guide, you'll have:

- NadirClaw running in Docker as an OpenAI-compatible proxy
- Ollama running locally with free models (Llama, Qwen, DeepSeek)
- A setup that routes simple prompts to local models and complex prompts to your choice of cloud provider (or keeps everything fully local)

Total cost: $0/month for simple requests. You pay only for the complex prompts that need premium models.

## Prerequisites

Dock
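One plausible way to wire the two containers together is a compose file along these lines. The `nadirclaw/nadirclaw` image name, the proxy port, and the `OLLAMA_BASE_URL` variable are assumptions for illustration, not taken from the project's docs; check NadirClaw's README for the real values.

```yaml
# Hypothetical docker-compose sketch -- the NadirClaw image name, proxy
# port, and env var name are illustrative; only the Ollama image and its
# default port 11434 are the official ones.
services:
  ollama:
    image: ollama/ollama          # official Ollama image
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama # persist downloaded models

  nadirclaw:
    image: nadirclaw/nadirclaw    # assumed image name
    ports:
      - "8080:8080"               # assumed proxy port
    environment:
      OLLAMA_BASE_URL: http://ollama:11434  # assumed variable name
    depends_on:
      - ollama

volumes:
  ollama-data:
```

Pointing a client at the proxy's port then lets it present a single OpenAI-compatible endpoint while fanning requests out to Ollama behind the scenes.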
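NadirClaw's actual classifier isn't shown in this excerpt, but the core routing idea can be sketched with a few cheap heuristics: score a prompt for complexity, then pick a model tier. Everything below (the heuristics, thresholds, and model names) is an illustrative assumption, not NadirClaw's real implementation.

```python
# Illustrative sketch of prompt-complexity routing. The heuristics,
# score threshold, and model names are hypothetical -- NadirClaw's
# actual classifier is not shown in this article excerpt.

SIMPLE_MODEL = "ollama/llama3.2"   # free, local (example name)
PREMIUM_MODEL = "premium-cloud"    # paid, cloud (placeholder name)

COMPLEX_HINTS = ("refactor", "architecture", "debug", "design", "optimize")

def route(prompt: str) -> str:
    """Return the model tier a prompt should be sent to."""
    score = 0
    if len(prompt) > 500:              # long prompts tend to be complex
        score += 1
    if "```" in prompt:                # embedded code blocks
        score += 1
    if any(w in prompt.lower() for w in COMPLEX_HINTS):
        score += 1
    # Two or more signals -> premium; otherwise stay local and free.
    return PREMIUM_MODEL if score >= 2 else SIMPLE_MODEL

print(route("what does this function do?"))   # -> ollama/llama3.2
```

A real router would use something sturdier than keyword matching (a small classifier model, say), but the cost story is the same: the cheap path handles the bulk of traffic.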
Continue reading on Dev.to


