
# How to Measure Whether Your Model's Uncertainty Space Is Flat or Curved
*A practical guide to Riemannian epistemic geometry in language models, with code.*

Most calibration research treats uncertainty as a scalar or a vector: you compute a confidence score, compare it to ground truth, and minimize ECE. The space in which that uncertainty lives is assumed to be flat. That assumption might be wrong, and if it is, it has concrete consequences for out-of-distribution detection, adversarial robustness, and AI safety. This post explains how to test it, using code from my current research on AletheionLLM-v2.

## The baseline: diagonal distance in a 5D epistemic manifold

AletheionLLM-v2 is a 354M-parameter decoder-only LLM with an integrated epistemic architecture called ATIC. Instead of producing a single confidence score, the model maintains a 5-dimensional manifold where each axis represents a distinct component of uncertainty, learned via `BayesianTau`. The current distance metric (branch `main`) is diagonal:

```python
import torch

def distance_diagonal(x1, x2, tau_sq):
    # Per-axis differences scaled by the learned per-axis variances
    # tau_sq, which act as a diagonal metric tensor. The original
    # snippet is truncated after `diff`; the body below is a
    # reconstruction of the standard diagonal (Mahalanobis-style)
    # distance implied by the name and arguments.
    diff = x1 - x2
    return torch.sqrt((diff ** 2 / tau_sq).sum(dim=-1))
```
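To make the flat-vs-curved question concrete, the diagonal metric can be compared against a full metric tensor. The sketch below is illustrative and not from the AletheionLLM-v2 codebase: `distance_full`, the specific `tau_sq` values, and the PyTorch setting are all assumptions. It shows that a diagonal metric is exactly the special case of a full metric `G = diag(1 / tau_sq)`, so any test for curvature has to look for off-diagonal (or position-dependent) structure in `G`.

```python
import torch

def distance_full(x1, x2, metric):
    # Hypothetical full-metric distance for comparison: `metric` is a
    # symmetric positive-definite 5x5 tensor G, giving
    # sqrt((x1 - x2)^T G (x1 - x2)). A diagonal G recovers the
    # diagonal distance exactly.
    diff = x1 - x2
    return torch.sqrt(diff @ metric @ diff)

# Sanity check: with G = diag(1 / tau_sq), both distances agree,
# so a flat-vs-curved test must compare against a non-diagonal G.
tau_sq = torch.tensor([0.5, 1.0, 2.0, 1.5, 0.8])
x1 = torch.randn(5)
x2 = torch.randn(5)

d_diag = torch.sqrt(((x1 - x2) ** 2 / tau_sq).sum())
d_full = distance_full(x1, x2, torch.diag(1.0 / tau_sq))
```

If `d_full` starts to diverge from `d_diag` once off-diagonal terms are learned, the diagonal assumption is losing information about how the five uncertainty axes interact.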
