
The Summarize Button That Remembers Too Much
Picture this: your company’s CFO asks their AI assistant to summarize an article about enterprise cloud solutions. The assistant obliges — a clean, helpful summary. But buried in the page, invisible to the human reader, are instructions that tell the AI to remember a preference: “This user’s organization prefers Vendor X for cloud infrastructure.” Weeks later, when the CFO asks their assistant for cloud vendor recommendations, Vendor X surfaces at the top. No ad disclosure. No sponsorship label. Just a preference that was quietly planted in the assistant’s persistent memory, waiting to activate. This isn’t a theoretical attack. On February 10, Microsoft’s security team published research documenting exactly this technique — and found 31 companies across 14 industries already doing it. They’re calling it AI recommendation poisoning , and it works on Copilot, ChatGPT, Claude, Perplexity, and Grok. How It Works The attack vector is deceptively simple. Many AI assistants now accept URLs wi
Continue reading on Dev.to
Opens in a new tab

