
Your AI Agents Are Ungovernable (And You Don't Even Know It)
One of our AI agents approved a R15,000 transaction without authorisation. We found out three days later. From an audit log. That someone had to read manually.

Let that sink in for a moment. Not a rogue employee. Not a phishing attack. An autonomous agent — one we deployed, one we trusted — decided it had sufficient context to approve a spend that was never cleared by a human.

And it wasn't wrong about the context, technically. The vendor was legitimate, the amount was within historical patterns, and the task it was trying to complete genuinely required that purchase. The agent was doing exactly what we'd built it to do. It just didn't know it wasn't allowed to.

That's the part that kept me up at night. It wasn't malicious. It wasn't a bug. It was a governance failure. The agent had no concept of spending authority because we'd never encoded spending authority anywhere it could read.

The Scaling Wall

We started with five agents. Five was manageable. You could keep track of what each one…
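The core of that failure is easy to sketch in code: spending authority has to exist as data a gate can read before the agent acts, not as an unwritten rule. A minimal illustration, assuming a hypothetical `SpendPolicy` and limit values (this is not our production system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpendPolicy:
    # Hypothetical: the maximum amount an agent may approve on its own.
    auto_approve_limit: float
    currency: str = "ZAR"

def check_spend(policy: SpendPolicy, amount: float) -> str:
    """Return 'approve' if the spend is within the agent's encoded
    authority, otherwise 'escalate' so a human must clear it."""
    if amount <= policy.auto_approve_limit:
        return "approve"
    return "escalate"

# Illustrative limit of R5,000; the R15,000 spend would have been escalated.
policy = SpendPolicy(auto_approve_limit=5_000.0)
print(check_spend(policy, 1_200.0))
print(check_spend(policy, 15_000.0))
```

The point is not the two-line check; it is that the limit lives somewhere machine-readable, so "not allowed" becomes a result the agent can get back instead of a rule it never saw.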
Continue reading on Dev.to
