
OpenClaw Quickstart: Install with Docker (Ollama GPU or Claude + CPU)
OpenClaw is a self-hosted AI assistant designed to run with local LLM runtimes like Ollama or with cloud-based models such as Claude Sonnet. This quickstart shows how to deploy OpenClaw using Docker, configure either a GPU-powered local model or a CPU-only cloud model, and verify that your AI assistant is working end-to-end.

This guide walks through a minimal setup of OpenClaw so you can see it running and responding on your own machine. The goal is simple: get OpenClaw running, send a request, and confirm that it works. This is not a production hardening guide. This is not a performance tuning guide. This is a practical starting point.

You have two options:

- Path A: Local GPU using Ollama (recommended if you have a GPU)
- Path B: CPU-only using Claude Sonnet 4.6 via the Anthropic API

Both paths share the same core installation process. If you're new to OpenClaw and want a deeper overview of how the system is structured, read the OpenClaw system overview.

System Requirements and Environment
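Since the article is truncated before the installation steps, here is a minimal sketch of what a Docker Compose deployment for Path A (local GPU with Ollama) could look like. The `ollama/ollama` image and its port `11434` are real; the `openclaw/openclaw` image name, the `LLM_BACKEND` and `OLLAMA_URL` variables, and port `8080` are assumptions for illustration, not documented OpenClaw values.

```yaml
# Sketch only: OpenClaw image name, port, and environment variable
# names are assumptions, not taken from official documentation.
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # expose the host GPU to the container
              count: all
              capabilities: [gpu]

  openclaw:
    image: openclaw/openclaw:latest # hypothetical image name
    ports:
      - "8080:8080"                 # assumed web UI / API port
    environment:
      - LLM_BACKEND=ollama                  # hypothetical setting
      - OLLAMA_URL=http://ollama:11434      # Ollama's default API port
    depends_on:
      - ollama

volumes:
  ollama:
```

For Path B you would drop the `ollama` service and point OpenClaw at Anthropic instead, typically by supplying an `ANTHROPIC_API_KEY` environment variable (again, the exact variable name OpenClaw expects is an assumption here).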




