Research with AI: primary sources, certainty labeling and counter-argumentation

By Odilon HUGONNOT, via Dev.to

AI says yes to everything. It's convenient when you want to be right. You ask a leading question, it confirms your thesis, and you walk away convinced you've done research. In reality, you've just had a conversation with a mirror that writes well.

I wanted to understand complex topics (tech concentration, legal proceedings involving major corporations, AI geopolitics), and I realized pretty quickly that without an explicit method, the LLM amplifies biases instead of correcting them. It gives you what you seem to expect. Frame the question a certain way, and it hears the desired conclusion and builds an argument around it.

What I'm describing here is the protocol I ended up adopting to make LLM-assisted research mean something. Not routine developer tech monitoring, but proper intelligence work: the same rigor as an investigative journalist, accessible to anyone with a language model and a method.

The problem: AI optimizes to satisfy you

There's a structural reason for this behavior, no

Continue reading on Dev.to
