
I Built a Knowledge Graph Into the Retrieval Pipeline and Then Dropped It in Production
The vector search returned seven chunks about "database indexing strategies" for a query about "machine learning model training." All seven had cosine similarity scores above 0.72. All seven were confidently, precisely wrong.

This is the failure mode that nobody warns you about when you build a RAG system on pure vector search. Embeddings capture semantic proximity, not semantic correctness. "Database indexing" and "model training" both live in the same neighborhood of the embedding space because they co-occur in the same documents, the same blog posts, the same technical discussions. The vectors are close. The meanings are not.

I had three options. Fine-tune the embedding model (expensive, slow, and the problem would resurface with every new document domain). Raise the similarity threshold from 0.7 to 0.85 (which would kill recall on legitimate queries). Or add a second retrieval signal that doesn't rely on vector proximity at all. I chose the third option, and then I added a third signal.
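To make the idea concrete, here is a minimal sketch of blending vector proximity with a second, non-vector signal. The `entity_overlap` function and the `alpha` weight are illustrative assumptions, not the article's actual implementation; a simple Jaccard overlap between extracted entities stands in for whatever graph-backed lookup the real pipeline uses.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Plain cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def entity_overlap(query_entities: list[str], chunk_entities: list[str]) -> float:
    """Jaccard overlap between entity sets; a stand-in for a graph-based signal."""
    if not query_entities or not chunk_entities:
        return 0.0
    q, c = set(query_entities), set(chunk_entities)
    return len(q & c) / len(q | c)

def hybrid_score(query_vec: np.ndarray, chunk_vec: np.ndarray,
                 query_entities: list[str], chunk_entities: list[str],
                 alpha: float = 0.6) -> float:
    """Weighted blend: vector proximity plus an entity-grounded signal.

    alpha is a tuning assumption; a chunk that is close in embedding space
    but shares no entities with the query gets pulled down in the ranking.
    """
    return (alpha * cosine_sim(query_vec, chunk_vec)
            + (1 - alpha) * entity_overlap(query_entities, chunk_entities))
```

With this blend, two chunks that tie on cosine similarity no longer tie overall: the one whose entities actually match the query ("model", "training") outranks the one about "indexing", which is exactly the reranking the pure-vector pipeline couldn't do.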



