I ran a privacy proxy on my AI traffic. Here's what it found.


By Dmitry Bondarchuk, via Dev.to

When I built Velar, a local proxy that masks sensitive data before it reaches AI providers, I mostly thought of it as a tool for other people's problems. I was wrong. After running it on my own machine during normal browser-based interactions with ChatGPT, here's what it intercepted:

    Masked Items
    ----------------------------------------
    API_KEY: 30   ███████████████░░░░░
    ORG:      9   ████░░░░░░░░░░░░░░░░
    JWT:      1   ░░░░░░░░░░░░░░░░░░░░
    ----------------------------------------
    Total:   40

40 items, without my doing anything unusual. But the story behind that API_KEY number is what really got me.

30 API keys: before I even hit Send

All 30 API_KEY detections came from a single session where I was editing a script directly inside the ChatGPT input field. Here's the thing most people don't realize: ChatGPT sends the contents of the input field to its servers in the background as you type. Not when you hit Send; continuously, while you're still editing. So I pasted a script that contained an API key and spent a few minutes tweaking it before sending…
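To make the idea concrete, here is a minimal sketch of the kind of regex-based masking a proxy like Velar could apply before a request leaves the machine. The patterns and placeholder format below are hypothetical illustrations, not Velar's actual rules:

```python
import re

# Hypothetical detection patterns; a real tool would use a much
# richer rule set (entropy checks, provider-specific formats, etc.).
PATTERNS = {
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # OpenAI-style keys
    "JWT": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),  # three base64url segments
}

def mask(text: str) -> tuple[str, dict[str, int]]:
    """Replace sensitive tokens with placeholders and count what was masked."""
    counts: dict[str, int] = {}
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"<{label}_MASKED>", text)
        if n:
            counts[label] = n
    return text, counts

masked, counts = mask('client = OpenAI(api_key="sk-abc123def456ghi789jkl012")')
```

The point of running this locally, before the browser's request goes out, is that the provider only ever sees the placeholder; the counts dict is what produces a report like the one above.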

Continue reading on Dev.to
