I built an open-source firewall for AI agents — it blocks dangerous tool calls before they execute


By Justin Yuan, via Dev.to

The problem nobody talks about

Every AI agent framework (LangChain, CrewAI, Anthropic, OpenAI) gives the LLM full control over which tools to call and with what arguments. The model says "run this SQL query: DROP TABLE users" and your code just... executes it. No confirmation. No policy check. No audit trail.

Existing observability tools (LangFuse, Helicone, Arize) log what happened. That's useful for debugging. But the database is already gone.

What I built

AEGIS is an open-source, self-hosted firewall that sits between your AI agent and its tools. It doesn't just observe: it intercepts and blocks before execution.

How it works

1. Agent calls a tool
2. AEGIS SDK intercepts
3. Gateway classifies the call (SQL? file? shell?)
4. Policy engine evaluates it (injection? traversal? exfiltration?)
5. Decision: allow / block / pending (a human reviews)
6. Record is Ed25519-signed, SHA-256 hash-chained, and stored in the dashboard

One line to integrate

```python
import agentguard

agentguard.auto("http://localhost:8080")
```

Your exis…
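To make the intercept → classify → evaluate flow concrete, here is a minimal, stdlib-only sketch of a tool-call firewall. Everything in it (the function names, the regex rules, the decision dict) is an illustrative assumption, not AEGIS's actual implementation.

```python
import re

# Toy rule: flag destructive SQL statements. A hypothetical example,
# not AEGIS's real policy language.
DANGEROUS_SQL = re.compile(r"\b(DROP|TRUNCATE|DELETE\s+FROM)\b", re.IGNORECASE)

def classify(tool_name, args):
    """Crude classifier: what kind of action is this tool call?"""
    if "sql" in tool_name.lower() or "query" in args:
        return "sql"
    if "path" in args or "file" in tool_name.lower():
        return "file"
    return "other"

def evaluate(category, args):
    """Toy policy engine: return 'allow' or 'block'."""
    if category == "sql" and DANGEROUS_SQL.search(args.get("query", "")):
        return "block"
    if category == "file" and ".." in args.get("path", ""):
        return "block"  # naive path-traversal check
    return "allow"

def firewall(tool_name, args, execute):
    """Intercept a tool call; only run `execute` if policy allows it."""
    decision = evaluate(classify(tool_name, args), args)
    if decision == "block":
        return {"decision": "block"}
    return {"decision": "allow", "result": execute(**args)}
```

With this sketch, `firewall("run_sql", {"query": "DROP TABLE users"}, run)` is blocked before `run` ever executes, while a harmless `SELECT` passes through: the key design point is that the policy check happens before, not after, the tool runs.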
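The audit-trail step describes SHA-256 hash chaining: each record's hash covers the previous record's hash, so tampering with any entry breaks every link after it. Here is a stdlib-only sketch of that idea under assumed names; the Ed25519 signing the article mentions (typically done with a library such as `cryptography`) is omitted to keep the example self-contained.

```python
import hashlib
import json

def append_record(chain, record):
    """Append an audit record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every link; any tampered record invalidates the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each hash depends on the one before it, an attacker who edits a stored decision would have to recompute every subsequent hash as well, and a signature over the chain head (the Ed25519 part) prevents even that.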

Continue reading on Dev.to

