
Defense in Depth: Tenant Isolation for an Agent That Executes Code
How we built five layers of security to prevent cross-tenant data leaks in a code-executing agent, and why we're still adding more.

The Problem

We built an AI agent that takes natural-language questions and executes bash commands to answer them: curl calls to internal APIs, jq for data transformation, file I/O for intermediate results. Our platform is multi-tenant, and each tenant's data is accessed through authenticated, tenant-scoped API calls that the agent runs on behalf of the user. All our users are authenticated before they ever reach the agent.

The primary threat isn't a malicious user trying to break in; it's the model itself drifting: hallucinating a wrong tenant ID, following a prompt injection buried in data it's processing, or dumping environment variables in a debug attempt. But we architected our defenses as if intent didn't matter. "Accidental" doesn't make a data leak any less serious. So we build defense in depth.

Design Principles

Four principles guide the architecture
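To make the drift scenario concrete, here is a minimal sketch of the kind of guard one of those layers might apply: before a model-proposed command runs, check that every tenant ID it references matches the authenticated session's tenant. The function name, URL pattern, and tenant-ID format are illustrative assumptions, not our production code.

```python
import re

# Hypothetical URL convention: tenant-scoped endpoints embed the
# tenant ID as a path segment, e.g. /tenants/<id>/orders.
TENANT_URL = re.compile(r"/tenants/([A-Za-z0-9-]+)/")

def check_tenant_scope(command: str, session_tenant: str) -> bool:
    """Reject commands referencing any tenant other than the caller's.

    A command that mentions no tenant at all (e.g. plain file I/O)
    passes this particular check; other layers handle those cases.
    """
    referenced = TENANT_URL.findall(command)
    return all(t == session_tenant for t in referenced)

# The caller's tenant ID comes from the authenticated session,
# never from the model's output.
ok = check_tenant_scope(
    "curl -s https://api.internal/tenants/acme/orders | jq '.[].id'",
    "acme",
)
blocked = check_tenant_scope(
    "curl -s https://api.internal/tenants/globex/orders | jq '.[].id'",
    "acme",
)
print(ok, blocked)  # a hallucinated tenant ID fails the check
```

A regex allowlist like this is deliberately dumb: it doesn't try to understand the command, only to refuse anything that names a foreign tenant, which is exactly the "drift, not malice" failure mode described above.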
