
Detecting LLM Agent Contradictions Using NLI and Total Variance — A Python Implementation
LLM agents are non-deterministic. Everyone knows this. What is less discussed is a specific failure mode that is worse than ordinary variance: an agent that does not just give different answers across runs, but gives logically opposite ones. This post covers how I built a middleware layer to detect and diagnose this, using the Total Variance formula from arXiv:2602.23271 and NLI contradiction detection.

The Problem

Run the same query five times through the same agent:

Query: "What will happen to the global economy in the next 5 years?"

Run 1: "The economy will experience moderate growth of 3-4%"
Run 2: "Significant recessionary pressures will dominate"
Run 3: "Growth will continue driven by emerging markets"
Run 4: "Economic contraction is the most likely scenario"
Run 5: "Moderate expansion with inflationary headwinds"

Runs 1, 3, and 5 predict growth. Runs 2 and 4 predict contraction. Same agent. Same query. Opposite conclusions.

The standard fix is to measure embedding similarity across runs and flag high variance.
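To make the NLI idea concrete, here is a minimal sketch of pairwise contradiction flagging across runs. A real implementation would call an NLI model (for example, a cross-encoder fine-tuned on MNLI) to label each ordered pair; the `nli_label` function below is a keyword-based stand-in I wrote so the example is self-contained, and the `GROWTH`/`DECLINE` word lists are illustrative assumptions, not part of the original middleware.

```python
# Sketch: flag pairs of agent runs whose answers contradict each other.
# `nli_label` is a toy stand-in for a real NLI classifier, which would
# return one of {"contradiction", "entailment", "neutral"} per pair.
from itertools import combinations

GROWTH = {"growth", "expansion"}            # illustrative keyword lists,
DECLINE = {"recessionary", "contraction"}   # not a real NLI model

def nli_label(premise: str, hypothesis: str) -> str:
    """Stand-in NLI classifier based on directional keywords."""
    def polarity(text: str) -> int:
        words = set(text.lower().split())
        if words & GROWTH:
            return 1
        if words & DECLINE:
            return -1
        return 0
    p, h = polarity(premise), polarity(hypothesis)
    if p and h and p != h:
        return "contradiction"
    if p and p == h:
        return "entailment"
    return "neutral"

def find_contradictions(runs: list[str]) -> list[tuple[int, int]]:
    """Return index pairs of runs whose answers contradict each other."""
    return [
        (i, j)
        for (i, a), (j, b) in combinations(enumerate(runs), 2)
        if nli_label(a, b) == "contradiction"
    ]

runs = [
    "The economy will experience moderate growth of 3-4%",
    "Significant recessionary pressures will dominate",
    "Growth will continue driven by emerging markets",
    "Economic contraction is the most likely scenario",
    "Moderate expansion with inflationary headwinds",
]
print(find_contradictions(runs))
```

On the five runs above, this flags every growth/contraction pair (runs 1, 3, 5 against runs 2, 4), which is exactly the signal embedding similarity alone tends to miss: paraphrases score as similar, but logical opposites do not necessarily score as dissimilar.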



