
RAG vs Fine-Tuning: What I Actually Learned After 6 Months of Building LLM Apps
Six months ago my team was building an internal support tool for a B2B SaaS company: about 120 employees, with docs spread across Notion, Confluence, and a half-dead SharePoint instance from 2019. The ask was simple: a chatbot that could answer questions about internal processes without making stuff up. Simple, right.

I had to make the call: RAG, or fine-tune a model. I'd read the think pieces. I'd watched the YouTube explainers. None of them gave me the answer I actually needed, which was: which one fits this specific situation, and what will break first? So I spent about six months running both approaches across three different projects, and here's what I actually found.

Why Most Comparisons Miss the Point

The framing of "RAG vs fine-tuning" is a bit of a false dichotomy, but before I get to that: the two techniques solve genuinely different problems, and conflating them leads to expensive mistakes. Here is the thing: fine-tuning changes how a model thinks. RAG changes what a model knows at query time.
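To make the "what a model knows at query time" half concrete, here is a minimal sketch of the RAG shape: retrieve the most relevant document for a query and stuff it into the prompt, leaving the model's weights untouched. Everything in it is illustrative and assumed, not from the actual project — the toy bag-of-words "embedding" stands in for a real embedding model, and the sample docs are made up.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term count. A real system would call
    # a learned embedding model; this only illustrates the retrieval shape.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The model never changes; fresh knowledge arrives inside the prompt.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Expense reports are filed in Concur by the 5th of each month.",
    "VPN access requests go through the IT Helpdesk Notion page.",
]
print(build_prompt("How do I file an expense report?", docs))
```

Fine-tuning, by contrast, would bake those process answers into the weights via gradient updates — there is no `docs` list at inference time, which is exactly why the two approaches fail in different ways.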


