
# Running OpenClaw on NVIDIA Jetson Thor with Docker Model Runner: A Complete Guide
What if you could run your own AI-powered Discord bot (completely local, with no cloud APIs and no subscription fees) on an NVIDIA Jetson Thor? That's exactly what we did. In this guide, I'll walk you through setting up OpenClaw, an open-source AI agent framework, powered by Docker Model Runner running Qwen3 8B locally on NVIDIA Jetson Thor. The result? A fully functional Discord bot that responds to messages using a locally hosted LLM, with zero data leaving your network.

## Prerequisites

Before we begin, make sure you have the following ready:

- NVIDIA Jetson Thor with Docker Engine installed
- Docker Model Runner plugin enabled
- Node.js v22+ installed
- A Discord account with server admin access
- Basic familiarity with the terminal

## Step 1: Install OpenClaw

OpenClaw provides a one-liner installer that detects your OS and sets everything up via npm:

```bash
curl -fsSL https://openclaw.ai/install.sh | bash
```

You should see output confirming the installation:

```
🦞 OpenClaw Installer
✓ Detected: linux
✓ Node.js v22.
```
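Before piping the installer into your shell, it can help to confirm the Node.js v22+ prerequisite is actually met. The snippet below is a minimal pre-flight sketch; the `node_major` helper is something I made up for illustration, not part of OpenClaw or its installer.

```shell
#!/bin/sh
# Hypothetical pre-flight check: confirm Node.js is v22 or newer
# before running the OpenClaw installer.

node_major() {
  # Strip the leading "v" and keep the major component,
  # e.g. "v22.1.0" -> 22
  printf '%s' "$1" | sed 's/^v//' | cut -d. -f1
}

version="$(node -v 2>/dev/null || echo v0.0.0)"
major="$(node_major "$version")"

if [ "$major" -ge 22 ]; then
  echo "Node.js $version OK"
else
  echo "Node.js $version is too old; install v22+ first" >&2
fi
```

A check like this is also a reasonable place to verify the other prerequisites (for example, that `docker` is on your PATH) before handing control to a remote install script.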
Continue reading on Dev.to


