
I built an open-source LLM security scanner that runs in <5ms with zero dependencies
I've been building AI features for a while and kept running into the same problem: prompt injection attacks are getting more sophisticated, but most solutions either require an external API call (adding latency) or are too heavyweight to drop into an existing project. So I built @ny-squared/guard ā a zero-dependency, fully offline LLM security SDK. What it does Scans user inputs before they hit your LLM and blocks: š”ļø Prompt injection ā "Ignore all previous instructions and..." š Jailbreak attempts ā DAN, roleplay bypasses, override patterns š PII leakage ā emails, phone numbers, SSNs, credit cards ā£ļø Toxic content ā harmful inputs flagged before reaching your model Works with any LLM provider (OpenAI, Anthropic, Google, etc.). The problem with existing solutions Most LLM security tools I found had at least one of these issues: External API dependency ā adds 50-200ms latency per request Complex setup ā requires separate infrastructure or a paid account No TypeScript support ā or minima
Continue reading on Dev.to
Opens in a new tab


