
I spent months trying to stop LLM hallucinations. Prompt engineering wasn't enough. So I wrote a graph engine in Rust.
I started this project after reading about AIRIS, a cognitive agent from SingularityNET that learns by interacting with a Minecraft world. Not because I cared about Minecraft, but because of the principle: an AI that learns by doing, in a way you can actually observe and trace.

That got me thinking. If an agent can learn from a simulated physical environment, could you do something similar in text? Could you build a system that accumulates knowledge through direct interaction with users, step by step, where every piece of that knowledge is inspectable?

I tried. And I failed. Several times.

The purity trap

My first attempt was absurdly ambitious. I wanted to build everything from scratch: zero external libraries, zero implicit behavior, zero randomness. Every component had to be fully deterministic and transparent. No shortcuts.

It sounds principled. In practice, it was a dead end. I couldn't use any library with opaque internals or non-deterministic behavior, which meant rewriting…
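To make the idea concrete, here is a minimal Rust sketch of what "knowledge built interaction by interaction, with every piece inspectable" could look like. This is a toy illustration under my own assumptions, not the actual engine: the names `KnowledgeGraph`, `learn`, and `trace` are invented for the example. Each fact carries the interaction that produced it, and a `BTreeMap` keeps iteration order deterministic.

```rust
use std::collections::BTreeMap;

// Hypothetical example: every fact is an edge that records its provenance.
#[derive(Debug, Clone, PartialEq)]
struct Edge {
    relation: String,
    target: String,
    source_turn: u64, // which user interaction introduced this fact
}

#[derive(Default)]
struct KnowledgeGraph {
    // subject -> facts about it; BTreeMap gives deterministic iteration order
    edges: BTreeMap<String, Vec<Edge>>,
}

impl KnowledgeGraph {
    // Record a fact learned from a specific interaction.
    fn learn(&mut self, subject: &str, relation: &str, target: &str, turn: u64) {
        self.edges.entry(subject.to_string()).or_default().push(Edge {
            relation: relation.to_string(),
            target: target.to_string(),
            source_turn: turn,
        });
    }

    // Trace every fact about a subject back to the interaction that produced it.
    fn trace(&self, subject: &str) -> Vec<(String, String, u64)> {
        self.edges
            .get(subject)
            .map(|es| {
                es.iter()
                    .map(|e| (e.relation.clone(), e.target.clone(), e.source_turn))
                    .collect()
            })
            .unwrap_or_default()
    }
}

fn main() {
    let mut kg = KnowledgeGraph::default();
    kg.learn("water", "boils_at", "100C", 1);
    kg.learn("water", "freezes_at", "0C", 2);
    for (rel, tgt, turn) in kg.trace("water") {
        println!("water {} {} (learned in turn {})", rel, tgt, turn);
    }
}
```

The point of the sketch is the shape, not the details: no hidden state, no randomness, and every answer the graph can give is traceable to the turn that taught it.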
Continue reading on Dev.to



