
I realized my AI tools were leaking sensitive data. So I built a local proxy to stop it
A few months ago I had a moment of uncomfortable clarity. I was using Cursor to work on a project that had database credentials in an .env file. The AI had full access to the codebase. I wasn't thinking about it - I was just coding. And then it hit me: all of this is going to their servers right now. The keys, the internal URLs, everything.

I stopped and thought about how long I'd been doing this without a second thought. And then I asked a few colleagues. Same story. Nobody was really thinking about it. We all just... trusted that it was fine.

It probably is fine, most of the time. But "probably fine" is not a compliance posture. And as AI coding tools get deeper access to our codebases, the surface area for accidental leaks keeps growing.

That's why I built Velar — a local proxy that sits between your app and AI providers, detects sensitive data, and masks it before it ever leaves your machine.

The problem is getting worse, not better

Copilot, Cursor - these tools are genuinely useful
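To make the "detect and mask before it leaves" idea concrete, here is a minimal sketch in Python. The patterns and names below are illustrative assumptions, not Velar's actual rule set — a real tool would ship a much broader, tested collection of detectors.

```python
import re

# Hypothetical detection rules -- illustrative only, not Velar's real rule set.
SECRET_PATTERNS = [
    # OpenAI-style API keys (sk- followed by a long alphanumeric run)
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    # AWS access key IDs
    re.compile(r"AKIA[0-9A-Z]{16}"),
    # KEY=value lines from .env files whose names suggest a secret
    re.compile(r"(?im)^([A-Z0-9_]*(?:KEY|SECRET|TOKEN|PASSWORD)[A-Z0-9_]*)=\S+"),
]

def mask_outbound(payload: str) -> str:
    """Replace anything matching a secret pattern before the request
    is forwarded to the AI provider."""
    for pattern in SECRET_PATTERNS:
        if pattern.groups:
            # Keep the variable name, mask only the value.
            payload = pattern.sub(lambda m: f"{m.group(1)}=[MASKED]", payload)
        else:
            payload = pattern.sub("[MASKED]", payload)
    return payload
```

In a proxy, a function like this would run on every request body just before forwarding it upstream, so the masked version is all the provider ever sees.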
Continue reading on Dev.to


