
I was paying $140/mo for my AI agent. So I built AirClaw.
Last night at 2am I checked my OpenAI dashboard. $140. Just in API fees, just for running my personal AI agent. That felt insane. I own the hardware. Why am I paying a monthly bill forever to run something on my own machine?

So I built AirClaw.

What it does:

AirClaw bridges OpenClaw (a personal AI agent for WhatsApp/Telegram/Discord) to a local LLM running on your own GPU via AirLLM. Instead of every message costing you API money, it runs completely locally. Forever free.

```bash
pip install airclaw && airclaw install
```

That's literally it. It auto-detects your OpenClaw config, backs it up, and patches it to point at localhost instead of OpenAI. Then you start the local server and restart OpenClaw.

The tech:

AirLLM uses layer-by-layer inference: instead of loading a whole 70B model into VRAM at once, it streams one layer at a time, so you only need about 4GB of VRAM regardless of model size. The trade-off is speed, but a 7B model runs fast enough for real-time chat. RabbitLLM (a newer fork) adds support for Qwen2
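To make the install step concrete, here is a minimal sketch of what a config patch like AirClaw's could look like: back up the existing config file, then repoint the API base URL at a local server. The file name, JSON keys, and port below are illustrative assumptions, not AirClaw's actual internals.

```python
import json
import shutil
from pathlib import Path

def patch_config(config_path: Path) -> dict:
    """Back up an OpenClaw-style JSON config, then point it at localhost.

    Hypothetical sketch: the "api_base" key and the backup naming scheme
    are assumptions for illustration.
    """
    # Keep a backup next to the original before touching anything.
    shutil.copy(config_path, config_path.with_suffix(".json.bak"))

    cfg = json.loads(config_path.read_text())
    # Swap the hosted endpoint for a local OpenAI-compatible server.
    cfg["api_base"] = "http://localhost:8000/v1"

    config_path.write_text(json.dumps(cfg, indent=2))
    return cfg
```

The backup-first ordering matters: if the patch fails halfway, the original config is still recoverable, which is why a one-command installer can safely modify files it didn't create.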
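The layer-by-layer idea can be sketched in a few lines. This is a toy model of the technique, not AirLLM's code: each "layer" is loaded on demand, applied, and then eligible for garbage collection, so peak memory is bounded by one layer's weights rather than the whole model. Shapes and the `load_layer` helper are made up for illustration.

```python
import numpy as np

HIDDEN = 8      # toy hidden size; a real 70B model is vastly larger
N_LAYERS = 4    # toy layer count

def load_layer(i: int) -> np.ndarray:
    # Stand-in for reading one layer's weights from disk into memory;
    # in a real system this is the per-layer load into VRAM.
    rng = np.random.default_rng(i)
    return rng.standard_normal((HIDDEN, HIDDEN)) * 0.1

def forward(x: np.ndarray) -> np.ndarray:
    # Stream layers one at a time: only `w` for the current layer is
    # resident, so memory use stays flat as N_LAYERS grows.
    for i in range(N_LAYERS):
        w = load_layer(i)
        x = np.tanh(x @ w)  # apply the layer; w is dropped next iteration
    return x
```

The cost of this design is exactly the trade-off described above: every layer load is a disk-to-GPU transfer, so throughput drops, but the VRAM floor stops scaling with model size.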




