
Structured Output for AI Coding Agents: Why I Built Pare
If you've spent any time watching Claude Code, Cursor, or Copilot work through a coding task, you've seen it: the agent runs git log and gets back 200 lines of formatted terminal output. It runs npm outdated and parses an ASCII table. It runs docker ps and tries to extract container IDs from column-aligned text that was designed for a human glancing at a terminal.

Most of the time it works. Sometimes it doesn't — the agent misreads a column boundary, hallucinates a field that wasn't there, or burns through context window space on ANSI color codes and decorative characters that carry zero information. And every time, it's spending your tokens on text it has to re-parse back into the structured data the CLI tool had internally before formatting it for human eyes.

I started tracking this in my own workflows. In a typical 30-minute coding session, an agent might make 40-60 tool calls. Each one returns raw terminal text that the model has to interpret. The token overhead from progress bar
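To make the contrast concrete: many CLIs already have machine-readable modes (docker ps --format '{{json .}}', npm outdated --json, git log --pretty=format:...), and the difference for a parser is stark. Here's a minimal sketch comparing the two parsing paths, using hard-coded illustrative samples of docker ps output (made-up values, not a real capture):

```python
import json

# Column-aligned output as a human sees it (illustrative sample).
human_output = """CONTAINER ID   IMAGE          STATUS          NAMES
3f2a9c1b7d8e   nginx:latest   Up 2 hours      web
a1b2c3d4e5f6   redis:7        Up 45 minutes   cache"""

# The same data as JSON lines, roughly what `docker ps --format '{{json .}}'`
# emits (sample values).
json_output = """{"ID": "3f2a9c1b7d8e", "Image": "nginx:latest", "Status": "Up 2 hours", "Names": "web"}
{"ID": "a1b2c3d4e5f6", "Image": "redis:7", "Status": "Up 45 minutes", "Names": "cache"}"""

def parse_columns(text):
    # Naive whitespace split: breaks as soon as a field ("Up 2 hours")
    # or a header ("CONTAINER ID") itself contains spaces -- exactly the
    # column-boundary guesswork an agent has to do on terminal output.
    return [line.split() for line in text.splitlines()[1:]]

def parse_json_lines(text):
    # Robust: each line is a self-describing record, no alignment guessing.
    return [json.loads(line) for line in text.splitlines()]

fragile = parse_columns(human_output)
# There are 4 logical fields, but the first row splits into 6 tokens
# because "Up 2 hours" contains spaces.
rows = parse_json_lines(json_output)
```

The JSON path gives the model named fields directly; the column path forces it to re-derive structure the tool already had internally.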
Continue reading on Dev.to

