
What if LLMs needed a spine, not a bigger brain?
I’ve been building something for the past few months, and I’m still trying to figure out whether I’m hitting a real problem or just over-structuring something that better prompting would already solve.

My starting intuition is simple: LLMs are very good at generating, but much less reliable when you expect continuity from them. As soon as you want an agent that can hold a line, remember things cleanly, recover after tension, and stay coherent over time, you start seeing the limits of the model on its own. Not necessarily because it lacks intelligence, but because it lacks a kind of skeleton.

In many systems, the LLM does everything at once: it speaks, it decides, it improvises its own memory and its own frame. That works, until it starts to drift. Prompting can take you pretty far, but it still feels fragile.

That’s the space I’m exploring. The idea is to move governance outside the model: the LLM generates, but it does not decide on its own. An explicit policy layer handles decisions.
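To make the separation concrete, here is a minimal sketch of what "the LLM proposes, the policy layer decides" could look like. All names here (`PolicyLayer`, `Action`, `fake_llm`) are hypothetical illustrations of the pattern, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str        # e.g. "reply", "write_memory"
    payload: str

@dataclass
class PolicyLayer:
    """Owns state and rules explicitly; the model only proposes."""
    allowed_kinds: set = field(default_factory=lambda: {"reply", "write_memory"})
    memory: list = field(default_factory=list)

    def decide(self, proposal: Action) -> Action:
        # Reject anything outside the explicit frame instead of trusting the model.
        if proposal.kind not in self.allowed_kinds:
            return Action("reply", "That is outside the current frame.")
        if proposal.kind == "write_memory":
            # Memory lives here, in the skeleton, not improvised by the model.
            self.memory.append(proposal.payload)
        return proposal

def fake_llm(prompt: str) -> Action:
    # Stand-in for a real model call: it generates a proposal, nothing more.
    return Action("delete_memory", prompt)

policy = PolicyLayer()
result = policy.decide(fake_llm("wipe everything"))
print(result.kind)  # the out-of-frame proposal was overridden
```

The point of the sketch is the asymmetry: the generator can suggest anything, but only actions that pass the explicit policy ever take effect, which is what keeps the agent from drifting along with its own output.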
Continue reading on Dev.to

