
How to Handle PII in LLM API Calls (Practical Guide)
Every time you send a user query to an LLM API, you're potentially sending personal data to a third-party server. Under GDPR and most data protection laws, that's a data processing operation with legal requirements. Here's the practical approach to handling PII in LLM pipelines.

The problem

A user sends a message to your chatbot: "Hi, I'm Ade Okonkwo, my email is ade@company.ng and my order #12345 hasn't arrived. My phone is 08034567890." Your code sends this to OpenAI/Anthropic. Their servers, probably in the US, now have your customer's name, email, phone, and order number. That's a cross-border data transfer of personal data to a third-party processor. You need:

- A Data Processing Agreement with the provider
- A lawful basis for the processing
- A privacy notice telling the user about it
- Ideally, audit logging of what was sent

The fix: detect and redact

Before sending to the API, scan for PII and optionally redact it:

    from agent_shield import Shield

    # Redact detected PII before the text leaves your system
    shield = Shield(redact_by_default=True)
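If you want to see the mechanics without a library, here is a minimal sketch of the detect-and-redact step using only the standard library. The regex patterns, the redact helper, and the placeholder format are illustrative assumptions, not part of agent_shield or any provider SDK:

    import re

    # Illustrative patterns only. Real deployments usually combine regexes with
    # NER-based detection (e.g. spaCy or Presidio), because names rarely match a regex.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\d[\d\s-]{7,}\d\b"),
        "ORDER_ID": re.compile(r"#\d{4,}"),
    }

    def redact(text: str) -> tuple[str, dict[str, list[str]]]:
        """Replace detected PII with typed placeholders and report what was found,
        so you can audit-log the detection types without logging the raw values."""
        found: dict[str, list[str]] = {}
        for label, pattern in PII_PATTERNS.items():
            matches = pattern.findall(text)
            if matches:
                found[label] = matches
                text = pattern.sub(f"[{label}]", text)
        return text, found

    user_message = (
        "Hi, I'm Ade Okonkwo, my email is ade@company.ng and my order #12345 "
        "hasn't arrived. My phone is 08034567890."
    )

    safe_message, detections = redact(user_message)
    # safe_message: "Hi, I'm Ade Okonkwo, my email is [EMAIL] and my order [ORDER_ID]
    #                hasn't arrived. My phone is [PHONE]."
    # Only safe_message goes to the LLM API; detections feeds your audit log.

Two things to notice: the function returns the detections separately so the audit-log requirement above can be met by recording detection types and counts rather than the raw values, and the name "Ade Okonkwo" survives redaction because names generally need NER-style detection rather than regexes.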



