
# Environment is context: security auditing for AI agent workstations
We talk a lot about prompts, tools, and evals. But almost nobody audits the environment where the AI agent actually runs. The agent sees your `.env` files, your `.mcp.json` with hardcoded tokens, your `settings.json` with `"permissions": "allow"`, your plugins, hooks, and configs. All of this is operational context, and it directly determines what the agent can do.

If an API key sits in plaintext, the agent will read it. If no PreToolUse hook is configured, any Bash command runs unfiltered. If `.claudeignore` is missing, the agent reads every file in the project. These are not hypothetical risks. This is the default configuration.

## The attack surface nobody measures

Run a mental audit of your workstation:

- **Secrets.** How many `.env` files do your projects have? Are they in `.gitignore`? Any secrets in git history? When you launch Claude Code, the shell already contains `ANTHROPIC_API_KEY`, `AWS_SECRET_ACCESS_KEY`, `GITHUB_TOKEN` - the agent can run `printenv` and see everything.
- **MCP servers.** Open `.mcp.json`.
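For the unfiltered-Bash problem above, a PreToolUse hook is the mitigation. A minimal `settings.json` sketch that routes every Bash tool call through a guard script before it executes; the script path is a placeholder, and the exact schema should be checked against the current Claude Code hooks documentation:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/guard-bash.sh"
          }
        ]
      }
    ]
  }
}
```

The guard script receives the pending tool input as JSON on stdin and can block the command by exiting with a blocking status (exit code 2 at the time of writing).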
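And for the missing `.claudeignore`, a starting point, assuming gitignore-style patterns (the entries below are examples to adapt, not a complete list):

```
# Keep secrets and credentials out of the agent's context
.env
.env.*
*.pem
*.key
secrets/
```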
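Parts of that mental audit can be automated. A minimal sketch, assuming a POSIX shell and a git working tree; the environment variable names checked are the ones mentioned above, and the secret-matching patterns are illustrative, not exhaustive:

```shell
#!/bin/sh
# Workstation audit sketch (illustrative, not a complete scanner).

# 1. Find .env files and flag any not covered by .gitignore.
find . -name ".env" -not -path "./.git/*" | while read -r envfile; do
  if git check-ignore -q "$envfile" 2>/dev/null; then
    echo "OK (ignored): $envfile"
  else
    echo "EXPOSED:      $envfile"
  fi
done

# 2. Spot secret-bearing variables already sitting in the shell
#    environment (values masked, only presence is reported).
printenv | grep -E '^(ANTHROPIC_API_KEY|AWS_SECRET_ACCESS_KEY|GITHUB_TOKEN)=' \
  | sed 's/=.*/=<set>/'

# 3. Crude pass over git history for committed secrets.
git log -p --all 2>/dev/null \
  | grep -iE '(api[_-]?key|secret|token)[[:space:]]*[:=]' | head -5
```

This only surfaces the obvious cases; a dedicated scanner such as gitleaks or trufflehog covers far more patterns than a grep one-liner.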



