
We tested structured ontology vs Markdown+RAG for AI agents — "why?" recall was 0% vs 100%
Our AI agent knew the company uses Provider A for identity verification. It could name the provider, list the integration specs, and recite the timeline. Then we asked why Provider A was chosen over Provider B. The agent couldn't answer. Not once across 24 attempts: zero percent recall on reasoning questions. So we built the missing layer and ran 48 controlled experiments to measure the difference.

The problem: AI agents can't answer "why?"

If you give an AI agent a folder of Markdown docs and let it use RAG to find answers, it handles factual questions well. What modules exist? Who owns this component? When was this decision made? But "why?" is different. Reasoning is rarely stored as a discrete fact. It's spread across meeting notes, scattered through Slack threads, buried in the third paragraph of a design doc written six months ago. The connection between a strategic goal and an operational decision almost never appears as a single retrievable chunk. This means vector search has nothing to retrieve.
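A toy sketch of the gap. The names below (`notes`, `keyword_retrieve`, `DecisionRecord`) are hypothetical illustrations, not the article's actual system: retrieval over flat note chunks surfaces facts about Provider A, but no chunk carries the rationale, whereas a structured record stores the "why" as a first-class field.

```python
from dataclasses import dataclass, field

# Flat notes, as a RAG pipeline would chunk them: facts, but no "why" chunk.
notes = [
    "We integrate Provider A for identity verification.",
    "Integration specs: REST API, OAuth2, 99.9% SLA.",
    "Decision finalized in Q2 planning.",
]

def keyword_retrieve(query: str, chunks: list[str]) -> list[str]:
    """Toy stand-in for vector search: return chunks sharing words with the query."""
    q = set(query.lower().split())
    return [c for c in chunks if q & set(c.lower().replace(".", "").split())]

# A structured decision record makes the rationale explicitly retrievable.
@dataclass
class DecisionRecord:
    decision: str
    alternatives: list[str] = field(default_factory=list)
    rationale: str = ""

record = DecisionRecord(
    decision="Use Provider A for identity verification",
    alternatives=["Provider B"],
    rationale="Provider A supported the regions required by the expansion goal.",
)

# Retrieval finds factual chunks about Provider A, but none of them answers "why".
print(keyword_retrieve("why was Provider A chosen", notes))
# The structured record answers it directly.
print(record.rationale)
```

The point of the sketch: no amount of similarity search helps if the rationale was never written down as a retrievable unit, while the structured record turns the decision's "why" into a plain lookup.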
