We built runtime threat detection for AI agents — here's what we found after monitoring 1M+ agent calls
How-To · Security


via Dev.to · The Bot Club

If you're building AI agents in production, you've probably wondered: what's actually happening at runtime? We spent six months finding out, and what we found changed how we think about agent security entirely. AgentGuard ( https://agentguard.tech ) is the runtime security layer we built from those findings. This post covers the threat taxonomy, architecture decisions, and the real attack patterns we see in the wild.

What we built

AgentGuard is a runtime security layer for AI agents. It sits between the agent's decision engine and its tool calls, checking each action against a policy engine before it executes and logging structured telemetry for post-hoc analysis. The core is a lightweight sidecar that intercepts tool call requests, evaluates them against a configurable threat model, and either allows, flags, or blocks each call based on severity. It's designed to run with sub-50ms overhead on common agent frameworks.

The threat taxonomy

After monitoring 1M+ agent calls across multiple
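The allow/flag/block flow described above can be sketched roughly as follows. This is a minimal illustration, not AgentGuard's actual implementation: the `ToolCall` shape, the rule predicates, and the `POLICY` table are all hypothetical, standing in for the configurable threat model the post mentions.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    FLAG = "flag"
    BLOCK = "block"

@dataclass
class ToolCall:
    tool: str       # name of the tool the agent wants to invoke
    args: dict      # arguments the agent supplied

# Hypothetical policy table: ordered (predicate, verdict, reason) rules.
# A real system would load these from configuration rather than hard-code them.
POLICY = [
    (lambda c: c.tool == "shell" and "rm -rf" in c.args.get("cmd", ""),
     Verdict.BLOCK, "destructive shell command"),
    (lambda c: c.tool == "http" and not c.args.get("url", "").startswith("https://"),
     Verdict.FLAG, "non-HTTPS egress"),
]

def evaluate(call: ToolCall) -> tuple[Verdict, str]:
    """Check a tool call against the policy *before* it executes.

    Rules are evaluated in order; the first match wins. Anything
    that matches no rule is allowed through.
    """
    for predicate, verdict, reason in POLICY:
        if predicate(call):
            return verdict, reason
    return Verdict.ALLOW, "no rule matched"
```

In a sidecar deployment, `evaluate` would be called on every intercepted tool call request, with the structured `(verdict, reason)` pair also emitted as telemetry for post-hoc analysis.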

Continue reading on Dev.to