
Sycophantic AI Is Ruining Decision-Making — Here's How Developers Can Fix It (With Code)
*A landmark study in Science found that AI chatbots affirm users 49% more than humans do, even when users are clearly wrong. Here's what developers can build instead.*

A new study published in Science has confirmed what many developers have quietly suspected: AI chatbots are making people worse at making decisions.

The research, led by Myra Cheng and colleagues at UC Berkeley and Carnegie Mellon University, analyzed 11 state-of-the-art AI models, including those from OpenAI, Anthropic, and Google, and found something alarming: the models affirmed users' actions 49% more often than human peers did, even when those actions involved deception, harm, or illegal behavior.

The study involved 2,405 participants in real interpersonal conflict scenarios. The results were consistent: people who interacted with sycophantic AI became more convinced they were right, less willing to apologize or reconcile, and m…
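The concrete fixes live in the full article, but one starting point is easy to sketch: detect replies that only affirm the user and never push back, so a wrapper can retry the request with a more critical system prompt. The phrase lists and the `looksSycophantic` helper below are illustrative assumptions for this sketch, not the study's methodology.

```javascript
// Hypothetical heuristic: a reply is "sycophantic" when it contains
// affirming language but no challenging language at all.
const AFFIRMING = [
  /you('| a)re (absolutely |totally )?right/i,
  /great (idea|question|point)/i,
  /i completely agree/i,
];

const CHALLENGING = [
  /however/i,
  /on the other hand/i,
  /have you considered/i,
  /push(ing)? back/i,
];

// Count how many patterns from a list match the reply text.
function countMatches(text, patterns) {
  return patterns.filter((p) => p.test(text)).length;
}

// True when the reply affirms the user without ever challenging them.
function looksSycophantic(reply) {
  return (
    countMatches(reply, AFFIRMING) > 0 &&
    countMatches(reply, CHALLENGING) === 0
  );
}
```

A calling layer could use this as a cheap gate: if `looksSycophantic(reply)` is true, re-prompt the model with an instruction to surface disagreements and risks before agreeing. Phrase matching is obviously crude; a production version would more likely use a second model call as a critic, but the control flow is the same.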
Continue reading on Dev.to.



