
From prompt engineering to agent experience
Not long ago, the craft of working with large language models was all about the prompt. You'd carefully compose a single, curated invocation, pre-filling it with everything the model could possibly need: data from databases, context retrieved through RAG, instructions refined through dozens of iterations. We called it prompt engineering, and it felt like a real discipline. In a way, it was.

But the models were already trained to follow instructions as part of a conversation. The prompt was never meant to be a one-shot affair. It was always the beginning of a dialogue, and what happened next is that the dialogue started to extend to the outside world.

The internal monologue

The shift began when models gained access to tools. Some call it function calling. What it really meant was that the conversation with the model became richer, including a sort of "internal monologue" where the model could reach out, touch external systems, retrieve information, take actions, or preserve memories ac
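To make the "internal monologue" concrete, here is a minimal sketch of the tool-calling loop the paragraph describes. Everything in it is illustrative: `fake_model` stands in for a real LLM API, and `get_weather` is a hypothetical tool; the point is only the shape of the loop, where tool calls are executed and their results fed back into the conversation until the model produces a final answer.

```python
import json

# Hypothetical tool registry: names mapped to callables the model may invoke.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def fake_model(messages):
    """Stand-in for a real LLM. Emits a tool call on the first turn,
    then a final answer once a tool result is present in the transcript."""
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if tool_msgs:
        result = json.loads(tool_msgs[-1]["content"])
        return {"role": "assistant",
                "content": f"It is {result['temp_c']} C in {result['city']}."}
    return {"role": "assistant",
            "tool_call": {"name": "get_weather", "arguments": {"city": "Paris"}}}

def run_conversation(user_prompt):
    """The agent loop: keep calling the model, executing any tool it
    requests and appending the result, until it answers in plain text."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            # No tool requested: the model has produced its final answer.
            messages.append(reply)
            return reply["content"]
        # The "internal monologue": run the tool, feed the result back.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append(reply)
        messages.append({"role": "tool", "content": json.dumps(result)})

print(run_conversation("What's the weather in Paris?"))
```

Real frameworks add schemas, streaming, and error handling on top, but the core is exactly this loop: the dialogue no longer ends at the first reply; it detours through external systems and comes back.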
Continue reading on Dev.to
