Stop AI Agents from Leaking PII


via Dev.to, by João André Gomes Marques

Your AI agent passes a context dict to every LLM call. That dict might contain credit card numbers, SSNs, API keys, or email addresses. If the agent signs that context without checking it first, you have just created a permanent, cryptographically verified record of leaked PII.

asqav's content scanning pipeline inspects the context before signing. If it finds sensitive data, the sign request is rejected.

How it works

When you call `agent.sign()`, asqav's API runs the context through pattern matchers for PII categories: credit cards, government IDs, API keys, emails, phone numbers. If anything matches, the request fails with a clear error.

```python
import asqav

asqav.init(api_key="sk_live_...")
agent = asqav.Agent.create("data-pipeline")

# This context contains a credit card number
context = {
    "customer_name": "Jane Doe",
    "payment": "4111-1111-1111-1111",
    "action": "process_refund",
}

try:
    sig = agent.sign("payment:refund", context)
except asqav.APIError as e:
    # The sign request is rejected because the context matched a PII pattern
    print(e)
```
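To make the pattern-matching step concrete, here is a minimal sketch of a regex-based PII scanner. This is an illustration of the general technique, not asqav's actual implementation; the `PII_PATTERNS` table and `scan_context` helper are hypothetical names, and the patterns are deliberately simplified versions of the categories the article lists.

```python
import re

# Hypothetical pattern table: simplified matchers for a few PII categories.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk_(?:live|test)_[A-Za-z0-9]+\b"),
}

def scan_context(context: dict) -> list[tuple[str, str]]:
    """Return (field, category) pairs for every value matching a PII pattern."""
    findings = []
    for field, value in context.items():
        for category, pattern in PII_PATTERNS.items():
            if isinstance(value, str) and pattern.search(value):
                findings.append((field, category))
    return findings

context = {
    "customer_name": "Jane Doe",
    "payment": "4111-1111-1111-1111",
    "action": "process_refund",
}
print(scan_context(context))  # the "payment" field matches the credit-card pattern
```

A real scanner would validate candidates further (for example, a Luhn check on credit card numbers to cut false positives) and reject the sign request whenever `scan_context` returns any findings.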
