
The gap in AI agent security nobody talks about: your .env is already in the context window
Your AI coding agent just read your .env file. Not on purpose. You asked it to fix a bug in your config loader, so it read the config file to understand the format. That config had AWS_SECRET_ACCESS_KEY in it.

Now that key is sitting in the context window. And from here, the agent can include it in anything: a curl command, a file write, a code snippet it generates as an "example." Not because it's attacking you, but because the key is in context and the model thinks it's relevant.

No sandbox catches this. The file was inside your project folder. The agent had permission to read it. Everything worked exactly as designed.

This is the gap

Every AI agent security tool I've looked at focuses on blocking dangerous actions. Don't run rm -rf /. Don't execute SQL drops. Don't call sketchy URLs. That matters. But the dangerous moment isn't always when the agent does something. Sometimes it's when the agent reads something and sensitive data quietly enters the context window.

Think about it:
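One way to close this gap is to scan file content for secret-shaped strings before it ever reaches the model. A minimal sketch in Python, assuming hypothetical redact_secrets and safe_read helpers and a deliberately non-exhaustive pattern list:

```python
import re

# Patterns for common secret formats (illustrative only, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID shape
    re.compile(r"(?i)aws_secret_access_key\s*=\s*\S+"),  # .env-style assignment
    re.compile(r"(?i)\b(?:api|secret)_?key\s*=\s*\S+"),  # generic key=value
]

def redact_secrets(text: str) -> str:
    """Replace anything matching a secret pattern so the raw value
    never enters the model's context window."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def safe_read(path: str) -> str:
    """A read hook: the agent sees redacted content, never the raw key."""
    with open(path) as f:
        return redact_secrets(f.read())
```

The point of hooking the read, rather than the action, is that redaction happens before the key can influence anything the agent later generates.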




