An LLM Is Not a Deficient Mind

via Dev.to, by Roman Dubinin

I called it "the perfect bullshitter." This was GPT-2, maybe early GPT-3. I was feeding it prompts and getting back text that looked like answers — structured, fluent, confident. The kind of output that would survive a casual reading. It was not grounded in anything. The model was hallucinating probable responses, assembling tokens that matched what you'd expect to see in text that answered that kind of question. Whether it matched reality was beside the point.

I work with multi-agent systems now — code reviewers, planners, critics. The systems are better. The outputs are sharper. But the property I noticed back then has not gone away. It has gotten harder to see.

The thing is, I'd already read the diagnosis. Peter Watts wrote it in 2006. I just didn't recognize what I was looking at until I'd spent enough time watching models talk.

The parallel

Blindsight spoilers ahead. If you haven't read it — the full text is free online. Read it. What follows will still be here when you get back.
