Stop AI Agents from Leaking PII, Secrets, and Prompt Injections

via Dev.to Python, by João André Gomes Marques

Your agent's context dict gets passed to the LLM. What's in it? Credit card numbers from a database query. AWS keys from a config lookup. A prompt injection disguised as user input. These get signed, logged, and sometimes leaked. asqav scans everything before signing. If it finds problems, the sign request is blocked.

What gets scanned

- PII: 50+ entity types via Presidio (emails, SSNs, credit cards, phone numbers, medical records)
- Prompt injection: a DeBERTa model detects jailbreaks and indirect injection attempts
- Toxic content: hate speech, harassment, and violence classification
- Secrets: API keys, private keys, tokens, and high-entropy strings via detect-secrets
- Custom patterns: your own regex rules per organization

How it works

Scanning runs inside the sign_action pipeline: after policy evaluation, before signing.

```python
import asqav

asqav.init(api_key="sk_...")
agent = asqav.Agent.create("my-agent")

# This gets scanned automatically
sig = agent.sign("api:call", {"prom
```
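To make the scan-then-sign flow concrete, here is a minimal sketch of that pipeline shape. This is not asqav's implementation: the real product layers Presidio, a DeBERTa classifier, and detect-secrets; this sketch emulates only the custom-regex layer, and the rule names, `ScanViolation` exception, and `sign_action` stub are all hypothetical.

```python
import re

# Hypothetical custom-pattern rules, standing in for asqav's full scanner
# stack (Presidio, DeBERTa, detect-secrets). Two illustrative patterns only.
SCAN_RULES = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


class ScanViolation(Exception):
    """Raised when the payload contains content that blocks signing."""


def scan_payload(payload: dict) -> list[str]:
    """Return the names of every rule that matches a string value in the payload."""
    hits = []
    for value in payload.values():
        if not isinstance(value, str):
            continue
        for name, pattern in SCAN_RULES.items():
            if pattern.search(value):
                hits.append(name)
    return hits


def sign_action(action: str, payload: dict) -> str:
    """Scan first; sign (stubbed here) only if the payload is clean."""
    hits = scan_payload(payload)
    if hits:
        # Block the sign request instead of signing tainted content.
        raise ScanViolation(f"sign blocked for {action}: {hits}")
    return f"signed:{action}"
```

The key design point the article describes: the scan sits inside the signing path, so callers cannot forget to run it; a clean payload returns a signature, a tainted one raises before anything is signed or logged.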

Continue reading on Dev.to Python


