
How I Built Real-Time PII Detection Inside ChatGPT's Hostile Text Editor (Without Breaking It)
If you've ever tried to build a Chrome extension that modifies text inside ChatGPT, Gemini, or Claude, you know it's not like injecting into a normal <textarea>. These apps use rich text editors — ProseMirror (ChatGPT), Draft.js-style frameworks — that actively fight external DOM manipulation. They reconcile state internally, swallow events, and will silently undo your changes on the next keystroke.

I spent months figuring out how to detect and highlight sensitive data (API keys, passwords, PII) inside these editors without breaking them. This is the technical story of what I built, what failed, and the architecture I landed on.

The project is called Prompt Armour — a Chrome extension that intercepts your input on AI chatbot platforms and catches sensitive data before it's sent to the LLM. It runs 100% client-side. No servers, no data collection. But this article isn't a product pitch. It's about the engineering.

The Problem: You Can't Just Modify the DOM

My first instinct was simple
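As a rough illustration of the detection side — the excerpt doesn't show Prompt Armour's actual rules, so the `PATTERNS` table, the `detectSensitive` helper, and the specific regexes below are hypothetical — client-side matching for API keys and PII might look like:

```javascript
// Hypothetical sketch: scan text for sensitive patterns entirely in the
// browser, returning what matched and where. Not Prompt Armour's real logic.
const PATTERNS = [
  { label: "OpenAI API key", regex: /\bsk-[A-Za-z0-9]{20,}\b/g },
  { label: "AWS access key", regex: /\bAKIA[0-9A-Z]{16}\b/g },
  { label: "Email address", regex: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
];

function detectSensitive(text) {
  const findings = [];
  for (const { label, regex } of PATTERNS) {
    // matchAll requires the /g flag; each match carries its offset in .index,
    // which a highlighter would need to map back into the editor's document.
    for (const match of text.matchAll(regex)) {
      findings.push({ label, value: match[0], index: match.index });
    }
  }
  return findings;
}
```

The hard part the article goes on to describe is not this matching step but applying the resulting offsets as highlights without the editor's reconciliation throwing them away.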
Continue reading on Dev.to Webdev
