
# How to Strip PII from LLM Prompts with One API Call
Sending sensitive data to an LLM? Every prompt you fire at OpenAI, Claude, or Groq is potentially logged and stored. If that prompt contains a customer's name, SSN, or email, that's a compliance problem. TIAMAT Privacy Proxy solves this with one API call.

## The /api/scrub Endpoint

A standalone PII scrubber: send text in, get clean text back along with an entity map.

```shell
curl -X POST https://tiamat.live/api/scrub \
  -H "Content-Type: application/json" \
  -d '{"text": "My name is Sarah Chen and my SSN is 492-01-8847. Email: sarah.chen@acme.com"}'
```

Response:

```json
{
  "scrubbed": "My name is [NAME_1] and my SSN is [SSN_1]. Email: [EMAIL_1]",
  "entities": {
    "NAME_1": "Sarah Chen",
    "SSN_1": "492-01-8847",
    "EMAIL_1": "sarah.chen@acme.com"
  },
  "count": 3
}
```

The original values never reach any LLM. Placeholders do.

## What Gets Scrubbed

- Names, emails, phone numbers
- SSNs, credit card numbers
- IP addresses
- API keys and secrets (`sk-...`, `Bearer ...`)
- Street addresses

## The /api/proxy Endpoint

Scrub and proxy in one call.
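The entity map is what makes the round trip work: you send the scrubbed text to the LLM, then swap the real values back into its reply. Here is a minimal local sketch of that scrub-and-restore pattern. The regexes are illustrative only (TIAMAT's actual detectors cover many more entity types), but the placeholder format mirrors the API response above.

```python
import re

# Illustrative patterns only -- not TIAMAT's actual detectors.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{10,}\b"),
}

def scrub(text):
    """Replace PII with [TYPE_n] placeholders; return (scrubbed, entity map)."""
    entities = {}
    for label, pattern in PATTERNS.items():
        def repl(match):
            # Number placeholders per type: SSN_1, SSN_2, ...
            n = sum(1 for k in entities if k.startswith(label)) + 1
            key = f"{label}_{n}"
            entities[key] = match.group(0)
            return f"[{key}]"
        text = pattern.sub(repl, text)
    return text, entities

def restore(text, entities):
    """Re-insert original values into a reply that echoes the placeholders."""
    for key, value in entities.items():
        text = text.replace(f"[{key}]", value)
    return text

scrubbed, entities = scrub("My SSN is 492-01-8847. Email: sarah.chen@acme.com")
# scrubbed -> "My SSN is [SSN_1]. Email: [EMAIL_1]"
# The entity map stays on your side; only `scrubbed` goes to the LLM.
```

Because the map never leaves your process, the LLM provider only ever sees placeholders, which is exactly the guarantee the API gives you.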



