
LLMs - How Did They Get So Good?
In two earlier posts I covered a bit of the history of the current batch of AI models, what they are good at, and what they're not so good at. Had I published those posts a year earlier, we probably could have left the story there, but unless you've been living under a rock, it's clear that the situation has evolved rather rapidly. I will try, then, with this post to conclude the story of how we got to the place we are now (early 2026) and to provide maybe a hint of where we are going.

The Winter of Our AI Discontent

Lately it seems any moderately lengthy discussion of the current state of AI inevitably turns to the prospect of an "AI bubble". Whenever it does, I like to point out that, if it turns out that AI is being overhyped and that interest and investment in AI were to fall off a cliff at some point in the future, this wouldn't be the first time. In fact, in the field of AI there already exists a term to describe this phenomenon. It's called an "AI winter". I also love to point
