
Why I Built a Personal AI Assistant and Kept It Small
I like the idea of a personal AI assistant, but I do not like how heavy most of them feel. When you actually try to use one, there is often too much going on: huge system prompts, heavy token overhead, frameworks that are hard to trust, and too many layers between what you ask for and what actually happens.

That is why I built Atombot. Atombot is a small personal AI assistant inspired by OpenClaw and nanobot. I was not trying to build a complex agent platform. I wanted something simpler, something I could understand, change, and actually use.

Privacy also matters. A personal assistant handles my personal data, and I do not want that data to leave my machine. That made local LLM support important to me. Local models can run heavier assistant frameworks, but in my experience they do not perform well. Local models usually cannot handle large context windows, because system prompts, instructions, tool definitions, and extra logic consume too much of the window before the conversation even starts. It works fine for
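The overhead point above can be sketched with rough numbers. This is a minimal illustration, not a measurement: the token counts and category names below are all assumptions chosen to show how a fixed prompt budget eats into a small local model's context window before the user has said a word.

```python
# Rough sketch: how fixed prompt overhead eats a local model's context
# window. All numbers here are illustrative assumptions, not measurements.

CONTEXT_WINDOW = 8192  # assumed context size for a small local model

# Hypothetical per-category overhead a heavy assistant framework might add
overhead = {
    "system_prompt": 1500,      # large framework system prompt
    "instructions": 800,        # behavioral / formatting instructions
    "tool_definitions": 2000,   # JSON schemas for many registered tools
    "framework_logic": 700,     # scratchpads, scaffolding, extra layers
}

used = sum(overhead.values())
remaining = CONTEXT_WINDOW - used

print(f"Overhead: {used} tokens ({used / CONTEXT_WINDOW:.0%} of the window)")
print(f"Left for the actual conversation: {remaining} tokens")
# → Overhead: 5000 tokens (61% of the window)
# → Left for the actual conversation: 3192 tokens
```

With these assumed numbers, more than half the window is gone before any real input arrives, which is why keeping the assistant itself small matters so much for local models.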
Continue reading on Dev.to
