
# Self-Host Your AI Code Assistant With Continue.dev + Ollama: VS Code Copilot Without the Subscription
You're paying $19/month for GitHub Copilot. Your code is leaving your machine, hitting someone else's servers, and coming back as suggestions. It works. But you could also run the same workflow locally: for free, with full privacy, on hardware you probably already own. This guide sets up Continue.dev with Ollama so you get AI code completion, chat, and refactoring directly in VS Code. No API keys, no subscriptions, no data leaving your network.

## What You Need

- A machine with 16GB+ RAM (a Mac mini M-series is ideal, but any modern desktop works)
- VS Code or a fork (Cursor users: you already have this built in, but keep reading for the self-hosted angle)
- Docker (optional, for running Ollama in a container)
- 10 minutes

## Step 1: Install Ollama

Ollama makes running local LLMs trivially simple. One binary, one command.

```shell
# macOS / Linux
curl -fsSL https://ollama.com/install.sh | sh

# Or with Docker
docker run -d --name ollama \
  -p 11434:11434 \
  -v ollama_data:/root/.ollama \
  ollama/ollama:latest
```
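With Ollama running, the Continue.dev extension needs to be pointed at it. A minimal sketch of `~/.continue/config.json`, assuming you've pulled a code-capable model such as `qwen2.5-coder` with `ollama pull` (the model names here are placeholders; substitute whatever you actually pulled):

```
{
  "models": [
    {
      // Chat/refactor model served by the local Ollama instance
      "title": "Qwen 2.5 Coder (local)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b"
    }
  ],
  "tabAutocompleteModel": {
    // A smaller model keeps inline completions snappy
    "title": "Qwen 2.5 Coder autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

Continue talks to Ollama on its default port (11434) on localhost, so no endpoint needs to be configured unless you run Ollama on another machine. Using a smaller model for `tabAutocompleteModel` is a common pattern, since autocomplete fires on nearly every keystroke and latency matters more than depth there.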




