
I built an AI security firewall and made it open source because production apps were leaking SSNs to OpenAI
About a year ago I started keeping a list: every production AI integration I saw that shipped with zero input validation, zero output scanning, and zero audit trail. The list got long fast. The pattern was always the same - OpenAI SDK, one API call per user message, return the result. Clean, fast to build, completely unprotected. So I spent almost a year building Sentinel Protocol, and today I'm open-sourcing it.

## What it is

A local security proxy for LLM API calls. It sits between your application and any LLM provider - OpenAI, Anthropic, Google Gemini, Ollama, DeepSeek, Groq, etc. - and runs 81 security engines on every request. Zero cloud calls for security decisions. Everything runs on your machine. The audit trail is a plain JSONL file that stays local.

## Getting started

```shell
npx --yes --package sentinel-protocol sentinel bootstrap --profile paranoid --mode enforce --dashboard
```

The proxy starts at http://127.0.0.1:8787 and the dashboard at http://127.0.0.1:8788. Change one line in your SDK: `const openai =`
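As a rough sketch of what that one-line change could look like with the official `openai` Node SDK: the `baseURL` client option redirects requests to the local proxy. The `/v1` path suffix is an assumption here - check Sentinel Protocol's docs for the exact endpoint it exposes.

```typescript
import OpenAI from "openai";

// Route all SDK traffic through the local Sentinel proxy instead of
// calling api.openai.com directly. The proxy address matches the one
// printed by the bootstrap command; the "/v1" path is an assumption.
const openai = new OpenAI({
  baseURL: "http://127.0.0.1:8787/v1",
  apiKey: process.env.OPENAI_API_KEY,
});
```

Everything else in the application stays the same; the proxy inspects each request and response before forwarding.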
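Because the audit trail is plain JSONL, it can be inspected with a few lines of code. A minimal sketch - the `verdict` and `engine` field names are illustrative assumptions, not Sentinel Protocol's actual log schema:

```typescript
import * as fs from "fs";

// Read a local JSONL audit file (one JSON object per line) and return
// only the entries flagged as blocked. The "verdict" field name is a
// hypothetical placeholder; consult the real log schema before use.
function blockedEvents(filePath: string): Array<Record<string, unknown>> {
  return fs
    .readFileSync(filePath, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Record<string, unknown>)
    .filter((event) => event.verdict === "block");
}
```

The same pattern works for any downstream tooling - grep, jq, or a log shipper - which is the point of keeping the trail as a local flat file.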
Continue reading on Dev.to


