
# How to scrub patient data out of LLM prompts before it becomes a breach report
Healthcare teams keep discovering the same problem one prompt at a time: someone pastes patient context into an LLM because they need help now, not because they want to create a compliance incident. The interesting part is not that this happens. Of course it happens. The interesting part is how small the fix can be if you put it in the right place.

A useful privacy layer for AI doesn't need to start with a giant governance platform. It can start with one boring, reliable step: scrub sensitive fields before the prompt ever leaves the app. I built a tiny proof of concept for this today after noticing the same pattern across healthcare AI, support tooling, and internal copilots: the model isn't the first problem. Input hygiene is.

## The core idea

Before text reaches an LLM, scan it for common sensitive fields and replace them with stable placeholders. That means things like:

- email addresses
- phone numbers
- Social Security numbers
- dates of birth
- medical record numbers

## A minimal Python version
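A sketch of one way this can look, assuming simple regex-based detection: the patterns, the placeholder format, and the `scrub()` helper below are illustrative stand-ins that cover only a handful of common formats, not a complete PHI detector.

```python
import re

# Ordered patterns: more specific formats (SSN, MRN) before looser ones.
# These regexes are illustrative; real deployments need broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),
}


def scrub(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive fields with stable placeholders like [EMAIL_1].

    Returns the scrubbed text plus a placeholder-to-value mapping so the
    caller can audit or restore values after the LLM response comes back.
    """
    mapping: dict[str, str] = {}
    counters: dict[str, int] = {}

    def make_replacer(kind: str):
        def _sub(match: re.Match) -> str:
            value = match.group(0)
            # Reuse the same placeholder for repeated values, so the
            # substitution stays stable across the whole prompt.
            for placeholder, original in mapping.items():
                if original == value:
                    return placeholder
            counters[kind] = counters.get(kind, 0) + 1
            placeholder = f"[{kind}_{counters[kind]}]"
            mapping[placeholder] = value
            return placeholder
        return _sub

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(make_replacer(kind), text)
    return text, mapping


if __name__ == "__main__":
    prompt = (
        "Patient DOB 03/14/1962, MRN 00482913, "
        "email jane.doe@example.com, phone (555) 201-7788. "
        "Summarize her discharge instructions."
    )
    clean, found = scrub(prompt)
    print(clean)
    print(found)
```

Because the scrubber returns the mapping alongside the cleaned text, the calling code can log exactly which fields were redacted, or re-insert the original values into the model's response, without the raw values ever leaving the app.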


