
# LangChain Hit with 3 Critical CVEs — Why Your AI Agents Need a Governance Layer
Three critical vulnerabilities were just disclosed in LangChain and LangGraph, the most widely used AI agent frameworks in the Python ecosystem. The disclosure comes just days after the devastating LiteLLM supply chain attack that affected millions of installations. The AI tooling stack is under active attack, and most teams have no governance layer in place.

## What Happened (March 27, 2026)

Cybersecurity researchers disclosed three vulnerabilities:

| CVE | Target | Severity | Impact |
| --- | --- | --- | --- |
| CVE-2026-34070 | LangChain prompt loading | CVSS 7.5 | Arbitrary file access via path traversal |
| CVE-2025-67644 | LangGraph SQLite checkpoint | CVSS 7.3 | SQL injection through metadata filters |
| (Third) | Environment & conversation data | – | Secret and conversation-history exposure via prompt injection |

LangChain sees 52 million weekly downloads; LangGraph, 9 million. When a vulnerability lands in LangChain's core, it ripples through every downstream library, wrapper, and integration.

Source: The Hacker News, "LangChain, LangGraph Flaws"

## Why Patching Alone Isn't




