
AI Crash Course: Hallucinations
In the last article, we talked about probability, token prediction, and how temperature changes the kinds of responses we get from AI models. But it would be irresponsible of us to talk about generative AI without also addressing the elephant in the room: hallucination.

Hallucination is when a generative AI model returns a response that isn't grounded in facts or in its training data. If you've chatted with an LLM for even a little while, you've almost certainly experienced it for yourself.

As of this writing, there is no known way to prevent AI models from occasionally generating hallucinated content. There are several things we can do to reduce hallucinations, but nothing eliminates them completely. That makes hallucination one of the most prominent issues facing us today as developers building with AI: generative AI is incredibly impressive from a purely technical perspective, but if the output can't be consistently trusted, then our ability to build reliable products on top of it is limited.
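To recap the sampling mechanics from the last article, here is a minimal sketch of how temperature reshapes a next-token distribution (the logit values are made up for illustration). Lowering temperature concentrates probability on the model's top token, which reduces randomness in the output, but note that it does not make that top token factual, which is why tuning temperature reduces variability without preventing hallucination:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by temperature before normalizing: lower temperature
    # sharpens the distribution toward the highest-scoring token, higher
    # temperature flattens it so unlikely tokens are picked more often.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for three candidate tokens.
logits = [2.0, 1.0, 0.5]

sharp = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
flat = softmax_with_temperature(logits, temperature=2.0)   # more random
```

With `temperature=0.2` the top token dominates the distribution, while at `temperature=2.0` the probabilities are much closer together. Either way, the model is still sampling from scores, not checking facts.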




