Ollama Has a Free Local AI Model Runner

via Dev.to Webdev, by Alex Spinov

Ollama is a free tool that lets you run large language models locally on your machine. Run Llama 3, Mistral, Gemma, and more: no API keys, no cloud, no costs.

## What Is Ollama?

Ollama makes it ridiculously easy to run AI models on your own hardware. One command to install, one command to run any model.

Key features:

- Run LLMs locally (no internet needed)
- Supports 100+ models
- OpenAI-compatible API
- GPU acceleration (NVIDIA, AMD, Apple Silicon)
- Model customization (Modelfile)
- Multi-model serving
- Lightweight and fast
- Works on macOS, Linux, Windows

## Quick Start

### Install

```bash
# macOS/Linux
curl -fsSL https://ollama.com/install.sh | sh

# Or download from ollama.com
```

### Run a Model

```bash
# Run Llama 3.2 (3B parameters)
ollama run llama3.2

# Run Mistral
ollama run mistral

# Run Code Llama
ollama run codellama

# Run Gemma 2
ollama run gemma2
```

The first run downloads the model. After that, it starts instantly.

## Available Models

| Model | Size | Use Case |
| --- | --- | --- |
| llama3.2:1b | 1.3GB | Fast, lightweight tasks |
| llama3.2:3b | 2GB | General purpose |
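Beyond `ollama run`, a few everyday commands cover most local model management. A quick sketch (the model names are just examples; these commands assume Ollama is installed and its server is running):

```shell
# Download a model without starting a chat session
ollama pull llama3.2

# List models already downloaded to this machine
ollama list

# Show which models are currently loaded in memory
ollama ps

# Delete a model you no longer need, freeing disk space
ollama rm codellama
```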
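The OpenAI-compatible API means existing OpenAI client code can simply point at your local machine. A minimal sketch with curl, assuming the Ollama server is running on its default port (11434) and `llama3.2` has already been pulled:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [
      {"role": "user", "content": "Say hello in five words."}
    ]
  }'
```

Because the request shape matches OpenAI's Chat Completions format, switching an app from the cloud to local inference is often just a base-URL change.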
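Model customization happens through a Modelfile, a Dockerfile-like recipe that sets a base model, parameters, and a system prompt. A minimal sketch, assuming `llama3.2` is already pulled (the custom model name `terse-llama` is just an example):

```shell
# Write a minimal Modelfile: base model, a sampling parameter, a system prompt
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.2
SYSTEM You are a terse assistant that answers in one sentence.
EOF

# Build a named custom model from the recipe, then chat with it
ollama create terse-llama -f Modelfile
ollama run terse-llama
```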
