
Why I make LLMs argue with each other before I make architecture decisions
The problem with asking one model

You ask Claude about your API design. It gives you a confident, well-structured answer. You move on. Two weeks later, during code review, someone spots the thing the model didn't mention — the thing you would have caught if you'd thought about it from a different angle.

This happens because LLMs are agreement machines. Ask one model a question and you get one perspective wrapped in confidence. The model won't naturally play devil's advocate against its own answer. It'll give you the best answer it can produce, not the best answer the problem deserves.

I started doing something simple: same prompt, same codebase context, two different models. And I noticed that the interesting part was never where they agreed — it was where they disagreed.

Structured disagreement as a design tool

The idea isn't new. Adversarial review exists in every serious engineering culture: red teams, architecture review boards, RFC processes. What's new is that you can now run a ligh
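The two-model workflow described above can be sketched in a few lines of Python. The `model_a` and `model_b` callables here are hypothetical stand-ins (in practice each would wrap a real API client); the point is the comparison step, which throws away everything the models agree on and surfaces only the lines where they diverge.

```python
import difflib
from typing import Callable

def cross_examine(prompt: str,
                  model_a: Callable[[str], str],
                  model_b: Callable[[str], str]) -> list[str]:
    """Ask two models the same question; return only the lines where they differ."""
    a_lines = model_a(prompt).splitlines()
    b_lines = model_b(prompt).splitlines()
    diff = difflib.unified_diff(a_lines, b_lines, lineterm="")
    # Keep the substantive +/- lines, dropping the unified-diff file headers.
    return [l for l in diff if l[:1] in "+-" and not l.startswith(("+++", "---"))]

# Hypothetical stand-in models for illustration only.
model_a = lambda p: "Use REST\nVersion the API in the URL\nPaginate with cursors"
model_b = lambda p: "Use REST\nVersion the API via headers\nPaginate with cursors"

for line in cross_examine("How should we design the API?", model_a, model_b):
    print(line)
# prints:
# -Version the API in the URL
# +Version the API via headers
```

The shared lines ("Use REST", cursor pagination) vanish; what's left is exactly the disagreement worth thinking about — here, URL versioning versus header versioning.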
Continue reading on Dev.to



