
What Happens When You Ask LLMs to Analyse Their Own Answers?
I was just re-watching the day 3 LLM testing from Ed Donner's Udemy course "AI Engineer Core Track: LLM Engineering, RAG, QLoRA, Agents" (courtesy of the Andela AI Engineering bootcamp 2026), and I had an idea after Ed prompted ChatGPT with "how many words are in your answer to this question". So I went wild and decided to ask all the major frontier models I have access to: ChatGPT, Gemini, Claude, and Grok.

ChatGPT (5.2 on the paid Plus plan) is still processing even as I type this post.

Gemini (3 Fast, free): thought for about 5-10 seconds and came up with this response: "There are exactly $3$ occurrences of the letter 'a' in this response." That is correct, and how it got there was actually brilliant: it wrote a Python script that output a sentence containing exactly 3 letter "a"s.

Claude (Sonnet 4.6, free) responded with the following: There are 4 letter 'a's in my response to this prompt. (a, a, a, a — found in: "There", "are", "letter", "a's", "in", "my", "response", "to", "this", "prom
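The post doesn't show Gemini's actual script, but the trick it reportedly used can be sketched as a fixed-point search: generate candidate sentences that claim a count, and keep the one whose claimed count matches its real count. This is a minimal sketch under that assumption (the sentence template and the function names are mine, not Gemini's):

```python
def count_a(text: str) -> int:
    # Count occurrences of the lowercase letter 'a' in the text.
    return text.count("a")

def self_consistent_sentence() -> str:
    # Search for an n where the sentence's claimed count of 'a's
    # equals the number of 'a's the sentence actually contains.
    for n in range(100):
        candidate = f"There are exactly {n} occurrences of the letter 'a' in this response."
        if count_a(candidate) == n:
            return candidate
    raise ValueError("no self-consistent count found")

print(self_consistent_sentence())
```

Because the digits themselves contain no letter 'a', the template always holds exactly 3 ('are', 'exactly', and the quoted 'a'), so the search lands on n = 3 — the same answer Gemini gave.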
Continue reading on Dev.to



