
I Built a Desktop AI That Writes and Hot-Loads Its Own Tools at Runtime
I got tired of pausing to look things up. Reading a book and hitting a word I didn't know. Watching anime and missing a line. Every time, the same friction — stop what I'm doing, open a new tab, search, lose my place, lose the moment.

I wanted something that would just... explain it to me. Out loud. Without me doing anything. So I built Samuel. A voice AI that floats on my macOS desktop, watches my screen, listens to my audio, and talks to me in real time. No typing. No switching windows. I just keep doing what I'm doing and he's there when I need him.

But the thing that turned into the most interesting engineering problem wasn't the language teaching. It was this: what happens when I want Samuel to do something he doesn't know how to do yet?

The Problem With Fixed Tool Sets

Most AI agents — including ones built on OpenAI's Agents SDK — have a fixed set of tools defined at compile time. You add a tool, rebuild the app, restart. That's fine for most use cases. But Samuel is a desktop co
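To make the distinction concrete, here is a minimal sketch of the idea — not the real OpenAI Agents SDK API, and not Samuel's actual implementation; all names here are hypothetical. A fixed tool set registers functions up front, while a hot-loading registry can also compile source code the model writes at runtime and register the result:

```python
# Minimal sketch of runtime tool hot-loading. Hypothetical names;
# this is not the OpenAI Agents SDK API or the author's implementation.
from typing import Callable, Dict


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        # The "compile time" path: tools baked in before the app starts.
        self._tools[name] = fn

    def register_from_source(self, name: str, source: str) -> None:
        # The "hot-load" path: compile model-written source and register
        # the resulting function, no rebuild or restart required.
        namespace: dict = {}
        exec(compile(source, f"<tool:{name}>", "exec"), namespace)
        self._tools[name] = namespace[name]

    def call(self, name: str, *args: object) -> str:
        return self._tools[name](*args)


registry = ToolRegistry()

# A tool known when the app shipped:
registry.register("echo", lambda text: f"echo: {text}")

# A tool the agent writes for itself at runtime:
registry.register_from_source(
    "shout",
    "def shout(text):\n    return text.upper()",
)

print(registry.call("shout", "hello"))  # HELLO
```

In a real agent you would of course sandbox or review model-written code before executing it; `exec` here is only the shortest way to show the shape of the mechanism.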
Continue reading on Dev.to


