
How I built tamper-proof audit logs for AI agents at 15
Software makes promises it can't prove it kept. "I won't transfer more than $500." "I'll only access these three APIs." "I won't touch production data." Every AI agent makes commitments like these. But when something goes wrong, all you have are logs — logs that the software itself wrote. That's like asking a suspect to write their own police report.

I'm 15, and I spent the last few months building Nobulex to fix this.

The problem

AI agents are moving from demos to production. They're handling money, making procurement decisions, managing infrastructure. But there's no standard way to:

- Define what an agent is allowed to do
- Enforce those rules at runtime
- Prove — cryptographically — that the agent followed them

Existing solutions are either post-hoc monitoring (you find out after the damage is done) or prompt-level guardrails (which can be bypassed). Nothing sits at the action layer with tamper-proof logging.

What I built

Nobulex is open-source middleware with three components: A rule language.



