Back to articles
How (A) I Built Open-Source LLM Guardrails with FastAPI

How (A) I Built Open-Source LLM Guardrails with FastAPI

via Dev.to PythonMani G

Building production AI applications means dealing with prompt injection, PII leakage, hallucinated outputs, and agents that go rogue. We (me and AI) built AgentGuard — an open-source FastAPI service that sits between your app and any LLM provider to handle all of this in one place. What it does AgentGuard runs seven parallel input safety checks on every request before it reaches your LLM: prompt injection heuristics, jailbreak pattern detection, PII and secret detection, restricted topic filtering, and data exfiltration attempts. On the output side, it validates schema conformance, citation presence, grounding coverage, policy compliance, and a composite quality score (internally called the "slop score") that ranges from 0.0 (clean) to 1.0 (reject). Beyond checks, it also compiles versioned prompt packages — replacing ad-hoc prompt strings with auditable YAML configs — and governs agent actions through a risk-scoring and human-in-the-loop approval layer. Why transparent heuristics I ma

Continue reading on Dev.to Python

Opens in a new tab

Read Full Article
2 views

Related Articles