
550 Hallucinations, Zero Discoveries: What Happens When You Force an LLM to Invent Mathematics
Abstract

We conducted a systematic experiment: force a large language model (Claude, Transformer architecture, RLHF-trained) to generate "formal mathematical hallucinations" — freely invented definitions, theorems, and structures — across 170 files and ~550 constructions. We then applied divergence techniques derived from analysis of the Transformer architecture (domain collision, semantic recursion, directed dreaming, contradictory personas, extreme compression/expansion). An independent evaluation found zero exploitable mathematical discoveries across the entire corpus. Every construction that appeared novel was either a paraphrase of known results, elementary algebra dressed in metaphor, or a reformulation of existing theorems. This paper documents the experiment, the methods, the failure modes, and what it reveals about the fundamental limits of LLM creativity.

1. Introduction

Can a large language model create genuinely new mathematics? Not apply known theorems. Not solve textbook
Continue reading on Dev.to