Python SDK: Use Any LLM Without Leaking PII

via Dev.to Python, by Tiamat

Samsung engineers leaked source code to ChatGPT. Goldman Sachs banned it entirely. JPMorgan restricted it across the firm. The problem isn't that LLMs are dangerous. It's that raw user data (names, SSNs, emails, API keys) flows directly from your app to OpenAI's servers with every prompt. Here's a drop-in fix.

TIAMAT Privacy Proxy

A proxy layer that:

- Scrubs PII from your prompt (regex + NER)
- Forwards the clean request to GPT/Claude/Groq using its own API keys
- Returns the response with tokens restored

Your users' raw data never leaves your server. Zero logs on the proxy side.

The Python SDK

Save this as tiamat_sdk.py (pip package coming soon):

```python
import os
import requests
from dataclasses import dataclass, field
from typing import Dict, List, Optional

BASE_URL = "https://tiamat.live"


@dataclass
class ScrubResult:
    scrubbed: str
    entities: Dict[str, str] = field(default_factory=dict)
    entity_count: int = 0

    def restore(self, text: str) -> str:
        # Swap each placeholder back for the original value.
        for placeholder, original in self.entities.items():
            text = text.replace(placeholder, original)
        return text
```
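To make the scrub-and-restore round trip concrete, here is a minimal local sketch. The placeholder format and the two regex patterns are my own assumptions for illustration, not the proxy's actual implementation (which the article says also uses NER):

```python
import re
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ScrubResult:
    scrubbed: str
    entities: Dict[str, str] = field(default_factory=dict)
    entity_count: int = 0

    def restore(self, text: str) -> str:
        # Swap each placeholder back for the original value.
        for placeholder, original in self.entities.items():
            text = text.replace(placeholder, original)
        return text


# Hypothetical regex-only scrubber covering two PII types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scrub(text: str) -> ScrubResult:
    """Replace each PII match with a unique placeholder, remembering the mapping."""
    entities: Dict[str, str] = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        def repl(m, label=label):
            nonlocal counter
            counter += 1
            placeholder = f"<{label}_{counter}>"
            entities[placeholder] = m.group(0)
            return placeholder
        text = pattern.sub(repl, text)
    return ScrubResult(scrubbed=text, entities=entities, entity_count=counter)
```

The scrubbed text is what would be sent to the LLM; `restore` is applied to the model's reply so the caller sees the original values while the provider never does.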

Continue reading on Dev.to Python
