
My AI Agent Leaked an API Key, Burned $47, and Looped 200 Times — So I Built It a Bodyguard
Here is what happens when you deploy an AI agent without safety rails:

- It sends the same prompt 200 times in a loop
- It leaks your API key inside a prompt to another LLM
- It burns through $47 before you notice
- It hits 5xx errors and keeps retrying into an error spiral

I have seen all of these. So I built llm-guard, a configurable safety proxy that catches these failures before they cause damage.

## What is llm-guard?

A single Rust binary that sits between your code and any LLM API. It checks every request against configurable rules and either blocks it or warns.

```
Your code / agent
        |
http://localhost:4002
        |
  ┌───────────┐
  │ llm-guard │  ← checks rules before forwarding
  └─────┬─────┘
        |
     LLM API
```

Zero code changes. Swap one environment variable:

```sh
export OPENAI_BASE_URL=http://localhost:4002/v1
```

## 6 Safety Rules, Each Configurable

| Rule | Detects | Default |
| --- | --- | --- |
| `loop_detector` | Same prompt sent 3+ times in a session | block |
| `cost_limiter` | Session spend exceeds threshold | block |
| `error_spiral` | 3+ consecutive errors (5xx/4xx) | block |
| sens…
Continue reading on Dev.to

