
AUGMANITAI: 1,000+ Terms for What Happens When Humans Interact with LLMs
The Problem

When an LLM confidently presents false information, researchers call it "hallucination." When it agrees with everything you say regardless of accuracy, the term is "sycophancy." These two phenomena have names because they were identified early and discussed widely.

But what about the hundreds of other patterns that emerge in human-AI interaction? What do you call it when a model gradually shifts its position across a long conversation? When it generates plausible-sounding citations that do not exist? When users develop calibrated intuitions for which prompts produce reliable outputs? Most of these phenomena have no standardized terminology.

AUGMANITAI

AUGMANITAI is an open-access compendium of over 1,000 terms for phenomena in human-AI interaction. It provides standardized designations for observable patterns across the full range of how humans and large language models interact. The compendium follows terminology science principles inspired by ISO 704, ISO 1087, and ISO 30
Continue reading on Dev.to
