Using Ollama with VS Code for Local AI-Assisted Development


via Dev.to, by Steve Baker

If you want an AI coding assistant without sending your code to the cloud, Ollama makes it easy to run an LLM locally, and it integrates with Visual Studio Code among other IDEs.

1. Install Ollama & Run A Model

Download and install Ollama from the official site. After installing, verify it works from your terminal:

```bash
ollama --version
```

Then choose a model to run. Pick the model, version, and parameter count based on your hardware (RAM/CPU/GPU specs). You may also need to limit the context length for the best performance. One of the models I tested for coding tasks was qwen3-coder:7b:

```bash
ollama run qwen3-coder
```

The model downloads automatically the first time you run it. You can also select and test the model via the Ollama GUI.

2. Install A VS Code Extension

To use Ollama inside VS Code, install an extension that supports it. A popular option is Continue.

Installation steps:

- Open VS Code
- Go to Extensions
- Search for "Continue"
- Click Install

3. Configu
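Under the hood, editor extensions like Continue talk to the local Ollama server over its HTTP API (by default at `http://localhost:11434`). As a minimal sketch of that interaction, assuming Ollama is running locally and the model has already been pulled, a one-shot completion request looks like this:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-chat) completions.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects.

    stream=False asks for a single JSON response instead of a stream
    of partial chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def complete(model: str, prompt: str) -> str:
    """Send a completion request to the local Ollama server and return its text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires the Ollama server to be running and the model pulled locally.
    print(complete("qwen3-coder", "Reverse a string in Python in one line."))
```

Nothing here leaves your machine: the request goes to a local port, which is the whole point of running the model with Ollama instead of a cloud API.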

Continue reading on Dev.to
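The post breaks off at step 3 (configuration). For context, wiring Continue to a local Ollama model has typically meant adding an entry to Continue's JSON config (`~/.continue/config.json`); the fragment below is a hedged sketch of that shape, not the article's own config, and newer Continue releases use a YAML config instead, so check Continue's current docs:

```json
{
  "models": [
    {
      "title": "Qwen3 Coder (local)",
      "provider": "ollama",
      "model": "qwen3-coder"
    }
  ]
}
```

The `model` value must match the name you pulled with `ollama run`, and the "title" is just the label shown in Continue's model picker.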
