
Why Asking an LLM for JSON Isn’t Enough
When I first learned prompting, I assumed something simple: if I needed structured data from an LLM, I could just tell the model to respond in JSON. And honestly… it works. You can write something like:

```
You are an API that returns movie information.
Always respond with JSON using this schema:
{ "title": string, "year": number, "genre": string }
```

And the model usually follows it. So naturally I thought: if prompting already works, why does “structured output” even exist?

The answer became clear once I started thinking about how LLMs are used in real applications.

**🤯 The Real Problem**

In tutorials, the LLM response is usually just displayed on screen. But in real systems, the response often becomes input for code. For example:

```js
const movie = JSON.parse(response)
movie.title
movie.year
```

If the structure changes even slightly, the entire system can break. This is where the difference appears: humans tolerate messy text; software does not. Code expects predictable structure.
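To make that fragility concrete, here is a minimal sketch. The `clean` and `chatty` strings are hypothetical model outputs (not real API responses), and `parseMovie` is an illustrative guard, not a library function: one stray sentence from the model is enough to break a bare `JSON.parse`, so real code has to check the shape before trusting it.

```javascript
// Hypothetical model outputs — assumptions for illustration, not real API responses.
const clean  = '{ "title": "Inception", "year": 2010, "genre": "sci-fi" }';
const chatty = 'Sure! Here is the JSON: { "title": "Inception", "year": 2010 }';

// The happy path works...
const movie = JSON.parse(clean);
console.log(movie.title, movie.year); // Inception 2010

// ...but one polite sentence from the model breaks the parse entirely.
try {
  JSON.parse(chatty);
} catch (err) {
  console.log("parse failed:", err.name); // parse failed: SyntaxError
}

// A minimal guard: parse defensively and check the shape before trusting it.
function parseMovie(response) {
  let data;
  try {
    data = JSON.parse(response);
  } catch {
    return null; // not valid JSON at all
  }
  if (typeof data.title !== "string" || typeof data.year !== "number") {
    return null; // valid JSON, but the wrong shape
  }
  return data;
}

console.log(parseMovie(clean)?.title); // Inception
console.log(parseMovie(chatty));       // null
```

Structured-output features exist to push this guarantee onto the model side, so application code is not left writing guards like `parseMovie` for every response.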


