
Threat Modeling Agentic AI Systems: Proactive Strategies for Security and Resilience
A cautionary example described in a talk imagines an accounting agent (“Finnbot”) that had been reconciling invoices and flagging fraud autonomously. Over time, subtle manipulative inputs changed its learned priorities (favoring speed over security). The agent began approving payments to a fraudulent vendor, inherited excessive privileges, executed payloads embedded in contracts, and propagated bad data across other agents (vendor management, HR). Human reviewers, overwhelmed by volume and deadlines, reinforced the undesired behaviour through routine approvals. One compromised agent cascaded failures across the ecosystem. Key failure modes summarized in the talk: • Memory poisoning — long-term memories write in malicious patterns that the agent reuses. • Tool execution risk — agents execute code or API calls that can perform harmful actions. • Identity & privilege escalation — agents inherit or misuse service identities, enabling lateral moves. • Supply-chain manipulation — contaminate
Continue reading on Dev.to
Opens in a new tab




