
Free LLMs on OpenRouter Keep Going 404. I Fixed It With 120 Lines of Python
I built a small pipeline on OpenClaw to stay on top of 3D printing news. Nothing fancy — a Python script that pulls from YouTube, RSS feeds, and Reddit, uses a free LLM to summarize what's worth reading, and emails me a digest. I use OpenRouter's free tier because I'm cheap and the models are good enough for summarization.

It worked great. For about two weeks. Then I started getting errors.

The problem nobody talks about

Here's something I didn't fully appreciate until it bit me: free models on OpenRouter change constantly. Models get added, removed, rate-limited into uselessness, or quietly replaced with different versions. If you hardcode your model list — which every tutorial tells you to do — you're building on sand.

One morning I woke up to this:

```
[06:03] LLM HTTP 404 [openai/gpt-oss-120b:free]: model not found
[06:03] LLM HTTP 429 [nousresearch/hermes-3-llama-3.1-405b:free]: rate limited
[06:03] LLM HTTP 404 [mistralai/mistral-small-3.1-24b-instruct:free]: model not found
[06:03]
```
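The fix the title hints at is to stop hardcoding and discover the free models at run time, falling back to the next one on a 404 or 429. Here's a minimal sketch of that idea using only the standard library — the function names and error handling are mine, not the article's exact 120 lines, and it assumes OpenRouter's public `GET /api/v1/models` endpoint and the `:free` id suffix it uses for free-tier models:

```python
import json
import urllib.error
import urllib.request

OPENROUTER_API = "https://openrouter.ai/api/v1"


def free_model_ids(models_payload):
    """Filter a /models response down to free-tier model ids.

    Free models on OpenRouter carry a ':free' suffix in their id.
    """
    return [
        m["id"]
        for m in models_payload.get("data", [])
        if m.get("id", "").endswith(":free")
    ]


def fetch_free_models():
    """Pull the current model list from OpenRouter at run time."""
    with urllib.request.urlopen(f"{OPENROUTER_API}/models") as resp:
        return free_model_ids(json.load(resp))


def summarize(prompt, api_key, models):
    """Try each free model in turn; skip any that 404 or 429."""
    for model in models:
        req = urllib.request.Request(
            f"{OPENROUTER_API}/chat/completions",
            data=json.dumps({
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            }).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["choices"][0]["message"]["content"]
        except urllib.error.HTTPError as e:
            if e.code in (404, 429):
                continue  # model gone or rate-limited: try the next one
            raise
    raise RuntimeError("every free model failed")
```

Because the list is fetched fresh on every run, a model that vanishes overnight simply drops out of the rotation instead of crashing the 6 a.m. digest.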



