
I built an open-source "limbic system" for AI agents: emotion, bias, and memory as MCP servers
Every time you start a new conversation with an AI, it resets to zero. No emotional continuity. No memory of yesterday. No consistent personality. Just a stateless language model pretending to know you.

I've been working on a set of open-source tools to fix this: not by faking emotions, but by giving AI agents a persistent internal state that actually influences how they respond. I'm calling it the EmiliaLab Outer OSS, a "limbic system" layer that sits between raw LLMs and your application.

The problem

When you build an AI character (for a Discord bot, a VTuber, a game NPC, or just a personal assistant), you run into the same wall:

- The AI has no consistent emotional state across sessions
- Its "personality" is just a static system prompt
- It responds the same way whether you've been kind to it for weeks or just insulted it

Real personality isn't static. It's shaped by history, current mood, and cognitive tendencies. That's what I wanted to model.

What I built

Five MCP servers + an SDK + two …
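To make the idea concrete, here is a minimal sketch of what one such server could look like: a single MCP server holding a persistent, time-decaying emotional state that an agent can query and update across sessions. It assumes the official `mcp` Python SDK (FastMCP); the valence/arousal mood model, the decay rate, the state-file path, and the `get_mood`/`nudge_mood` tool names are simplified placeholders for illustration, not the actual EmiliaLab servers.

```python
# Sketch only: one hypothetical "emotion" MCP server with on-disk persistence.
# Assumes the official `mcp` Python SDK; all names below are illustrative.
import json
import time
from pathlib import Path

from mcp.server.fastmcp import FastMCP

STATE_FILE = Path("emotion_state.json")  # hypothetical persistence location
DECAY_PER_HOUR = 0.1                     # mood drifts back toward neutral

mcp = FastMCP("emotion")


def _load() -> dict:
    """Read the last saved state, or start neutral."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"valence": 0.0, "arousal": 0.0, "updated_at": time.time()}


def _decay(state: dict) -> dict:
    """Pull mood toward neutral in proportion to elapsed time."""
    hours = (time.time() - state["updated_at"]) / 3600
    factor = max(0.0, 1.0 - DECAY_PER_HOUR * hours)
    state["valence"] *= factor
    state["arousal"] *= factor
    state["updated_at"] = time.time()
    return state


@mcp.tool()
def get_mood() -> dict:
    """Return the current (time-decayed) emotional state."""
    state = _decay(_load())
    STATE_FILE.write_text(json.dumps(state))
    return state


@mcp.tool()
def nudge_mood(valence: float, arousal: float) -> dict:
    """Shift mood after an interaction, clamped to [-1, 1]."""
    state = _decay(_load())
    state["valence"] = max(-1.0, min(1.0, state["valence"] + valence))
    state["arousal"] = max(-1.0, min(1.0, state["arousal"] + arousal))
    STATE_FILE.write_text(json.dumps(state))
    return state


if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

In this scheme an agent would call `get_mood` at session start and fold the result into its system prompt, then call `nudge_mood` after emotionally loaded turns, so yesterday's interactions genuinely shape today's responses instead of vanishing with the context window.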
Continue reading on Dev.to


