
How to Run NemoClaw with a Local LLM & Connect to Telegram (Without Losing Your Mind)
I just spent a full day wrestling with NemoClaw so you don't have to. NemoClaw is an incredible agentic framework, but because it is still in beta, it has its fair share of quirks: undocumented networking hurdles and strict kernel-level sandboxing that will block your local connections by default. My goal was to run a fully private, locally hosted AI agent, backed by a local LLM, that I could text from my phone via Telegram. Working with an RTX 4080 and its hard 16GB VRAM limit meant I had to optimize my model choice and bypass a maze of container networks to get everything talking. If you are trying to ditch the cloud and run NemoClaw locally on WSL2, here is the exact step-by-step fix to get your agent online.

Part 1: Escaping the Sandbox (Connecting the Local LLM)

By default, NemoClaw runs your agent inside a nested Kubernetes (k3s) container within WSL2. If you try to point it at your local Ollama instance using localhost or the default Docker bridge, the sandbox's strict egress policy will block the connection.
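Before wiring anything into the agent, it helps to confirm what address Ollama is actually reachable at from inside WSL2. A minimal sketch, assuming standard WSL2 NAT networking, where the nameserver entry in /etc/resolv.conf points at the Windows host's virtual adapter (a common WSL2 trick; 11434 is Ollama's default port, and the function names here are illustrative, not part of any NemoClaw API):

```python
# Sketch: derive a reachable Ollama base URL from inside WSL2.
# Assumes default WSL2 NAT networking (not mirrored mode), where
# /etc/resolv.conf's nameserver line is the Windows host's IP.
import re


def host_ip_from_resolv_conf(text: str) -> str:
    """Extract the first nameserver IP (the Windows host in WSL2 NAT mode)."""
    match = re.search(r"^nameserver\s+(\S+)", text, re.MULTILINE)
    if not match:
        raise ValueError("no nameserver entry found in resolv.conf")
    return match.group(1)


def ollama_base_url(resolv_conf_text: str, port: int = 11434) -> str:
    """Build the base URL the container should use to reach Ollama."""
    return f"http://{host_ip_from_resolv_conf(resolv_conf_text)}:{port}"


# Example: on a typical WSL2 setup resolv.conf might read
# "nameserver 172.22.32.1", yielding:
print(ollama_base_url("nameserver 172.22.32.1\n"))  # http://172.22.32.1:11434
```

Note that by default Ollama only listens on localhost on the Windows side, so you also need it bound to all interfaces (e.g. by setting the `OLLAMA_HOST=0.0.0.0` environment variable) before anything inside WSL2 can reach it at that address.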
Continue reading on Dev.to