
# Stop AI Agents from Leaking PII, Secrets, and Prompt Injections
Your agent's context dict gets passed to the LLM. What's in it? Credit card numbers from a database query. AWS keys from a config lookup. A prompt injection disguised as user input. These get signed, logged, and sometimes leaked.

asqav scans everything before signing. If it finds problems, the sign request is blocked.

## What gets scanned

- **PII** - 50+ entity types via Presidio (emails, SSNs, credit cards, phone numbers, medical records)
- **Prompt injection** - DeBERTa model detects jailbreaks and indirect injection attempts
- **Toxic content** - Hate speech, harassment, violence classification
- **Secrets** - API keys, private keys, tokens, high-entropy strings via detect-secrets
- **Custom patterns** - Your own regex rules per organization

## How it works

Scanning runs inside the `sign_action` pipeline. After policy evaluation, before signing.

```python
import asqav

asqav.init(api_key="sk_...")
agent = asqav.Agent.create("my-agent")

# This gets scanned automatically
sig = agent.sign("api:call", {"prom
```
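To make the scan step concrete, here is a minimal sketch of the kind of pre-sign check described above: regex rules for known PII and key formats plus a Shannon-entropy heuristic for secrets. All names here (`scan_context`, `PATTERNS`, the threshold values) are illustrative assumptions, not the asqav API, which wraps Presidio and detect-secrets rather than hand-rolled regexes.

```python
import math
import re

# Illustrative patterns; a real scanner (Presidio, detect-secrets)
# covers far more entity and key formats than these.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def shannon_entropy(s: str) -> float:
    """Bits per character; long high-entropy tokens often indicate secrets."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def scan_context(context: dict) -> list:
    """Return (key, reason) findings; a sign pipeline would block on any hit."""
    findings = []
    for key, value in context.items():
        if not isinstance(value, str):
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(value):
                findings.append((key, name))
        # Flag long tokens whose entropy looks like a random API key
        for token in value.split():
            if len(token) >= 20 and shannon_entropy(token) > 4.5:
                findings.append((key, "high_entropy_string"))
                break
    return findings

findings = scan_context({
    "user_email": "alice@example.com",
    "note": "meeting at 3pm",
})
```

Running the scan before signing, rather than after, is the design point: a blocked request never reaches the signature log, so leaked values are never attested to.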


