Best LLMs for OpenCode - Tested Locally

By Rost, via Dev.to

I have tested how OpenCode works with several LLMs hosted locally on Ollama, and for comparison added some free models from OpenCode Zen. OpenCode is one of the most promising tools in the AI developer tools ecosystem right now.

TL;DR - OpenCode Best LLMs

Clear winner for local: Qwen 3.5 27b IQ3_XXS on llama.cpp
The 27b at IQ3_XXS quantization delivered a complete, working Go project with all 8 unit tests passing, a full README, and 34 tokens/sec on my 16GB VRAM setup (mixed CPU+GPU). Five stars, no caveats. This is my go-to for local OpenCode sessions.

Qwen 3.5 35b on llama.cpp — fast for coding, but validate everything
The 35b is excellent for quick agentic coding tasks, but my migration-map tests exposed a serious reliability problem. Across two IQ3_S runs it produced 63–73% slug mismatches, and at IQ4_XS quantization it forgot to include page slugs entirely, generating category paths that would map 8 different pages to the same URL. The coding quality on the IndexNow task was gen…
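The many-to-one URL failure described above is easy to catch mechanically. As a minimal sketch (the function name and sample paths are hypothetical, not from the article), here is how a migration map could be checked for new URLs that several old pages collapse onto:

```go
package main

import "fmt"

// findCollisions returns the new URLs that more than one old page maps to —
// the failure mode where a dropped page slug collapses distinct pages
// onto the same category path.
func findCollisions(migration map[string]string) map[string][]string {
	byTarget := make(map[string][]string)
	for oldPath, newPath := range migration {
		byTarget[newPath] = append(byTarget[newPath], oldPath)
	}
	collisions := make(map[string][]string)
	for target, sources := range byTarget {
		if len(sources) > 1 {
			collisions[target] = sources
		}
	}
	return collisions
}

func main() {
	// Hypothetical migration map; two entries lost their page slug.
	migration := map[string]string{
		"/posts/old-intro":  "/blog/intro",
		"/posts/go-tips":    "/blog/", // slug missing
		"/posts/llm-review": "/blog/", // slug missing — collides
	}
	for target, sources := range findCollisions(migration) {
		fmt.Printf("%d pages collapse onto %s: %v\n", len(sources), target, sources)
	}
}
```

Running a check like this after every model-generated migration map is what "validate everything" amounts to in practice: any non-empty result means pages would silently disappear behind a shared URL.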

Continue reading on Dev.to
