🚀 Fixing Ollama Not Using GPU with Docker Desktop (Step-by-Step + Troubleshooting)
How-To · DevOps


via Dev.to DevOps, by Foram Jaguwala

Running LLMs locally with Ollama is exciting… until you realize everything is running on CPU 😅 I recently ran into this exact issue: models were working, but the GPU wasn't being used at all. Here's how I fixed it using Docker Desktop with GPU support, along with the debugging steps that helped me understand the real problem.

🔴 The Problem

My initial setup:

- Ollama installed locally ✅
- Models running successfully ✅
- GPU usage ❌

Result:

- Slow responses
- High CPU usage
- Poor performance

🧠 Root Cause

After debugging, I realized the issue wasn't entirely Ollama; it was how my local environment handled GPU access. Even though the GPU was available, it wasn't properly exposed to Ollama in my local setup. However, the same GPU worked perfectly inside Docker, which confirmed that the environment played a major role.

🟢 The Solution: Docker Desktop + GPU

Instead of continuing to debug locally, I moved Ollama into a Docker container with GPU enabled. This approach turned out to be much simpler and more
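The approach above can be sketched with a few commands. This is a minimal sketch, assuming an NVIDIA GPU, Docker Desktop with GPU support enabled (WSL 2 backend on Windows), and the NVIDIA Container Toolkit in place; the CUDA image tag and the `llama3` model name are illustrative:

```shell
# 1. Confirm the GPU is visible to containers at all.
#    If this fails, the problem is Docker's GPU passthrough, not Ollama.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# 2. Start Ollama in a container with the GPU attached.
#    The named volume keeps downloaded models across restarts.
docker run -d --gpus all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# 3. Pull and run a model inside the container.
docker exec -it ollama ollama run llama3
```

If step 1 prints the usual `nvidia-smi` table, GPU passthrough works; while a model is generating, running `nvidia-smi` on the host should show GPU memory held by the Ollama process.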

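Once the container is up, a quick API call is an easy way to confirm Ollama is answering and to watch GPU utilization climb. A hypothetical sketch, assuming the default port 11434 and an already-pulled `llama3` model:

```shell
# Ask the containerized Ollama for a one-off, non-streaming completion.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# While the request runs, host-side nvidia-smi should show
# GPU memory in use by the Ollama process.
nvidia-smi
```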
Continue reading on Dev.to DevOps

