
Your AI Agents Will Work Better If You Define Done Before They Start
There's a post on Hacker News right now with 400+ points: "LLMs work best when the user defines their acceptance criteria first." It's a great point. But most people applying it to chat prompts are missing the bigger insight: this matters 10x more for autonomous agents.

The Problem

When a human uses an LLM, they can course-correct in real time. They see a bad response and try again. When an AI agent runs autonomously — on a cron schedule, in a loop, processing tasks without supervision — there's no course-correction. If the agent doesn't know what "done" looks like, it'll do one of three things:

- Stop too early (task half-finished)
- Keep going forever (burning tokens on work that was already good enough)
- Do the "right" thing in the wrong context (technically complete, strategically wrong)

The Fix: Acceptance Criteria in the Agent Config

Every agent in our system has explicit done_when criteria in its config. Here's a real example from the Ask Patrick Library:

{ "agent": "content-agent", "task": "draf
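The article's own config is cut off above, so here is a minimal sketch of the pattern it describes. Everything in it is an assumption: the field names (`done_when`, `min_word_count`, `sections_required`, `max_iterations`) and the check loop are illustrative, not the actual schema from the Ask Patrick Library.

```python
# Hypothetical sketch of "done_when" acceptance criteria in an agent config.
# All field names here are assumptions, not the article's real schema.
import json

config = json.loads("""
{
  "agent": "content-agent",
  "task": "draft-weekly-post",
  "done_when": {
    "min_word_count": 800,
    "sections_required": ["intro", "example", "takeaway"],
    "max_iterations": 5
  }
}
""")

def is_done(draft: str, criteria: dict) -> bool:
    """Check explicit, mechanical acceptance criteria
    instead of letting the agent judge its own work."""
    if len(draft.split()) < criteria["min_word_count"]:
        return False
    return all(s in draft.lower() for s in criteria["sections_required"])

def run_agent(generate_step, config: dict) -> str:
    """Loop until done_when is satisfied or the iteration budget runs out.
    This bounds all three failure modes: the criteria prevent stopping too
    early, the budget prevents running forever, and the explicit checks
    encode what 'done' means in this context."""
    criteria = config["done_when"]
    draft = ""
    for _ in range(criteria["max_iterations"]):
        draft = generate_step(draft)   # one autonomous work step
        if is_done(draft, criteria):
            return draft               # criteria met: stop, don't over-polish
    return draft                       # budget exhausted: flag for human review
```

The point of the loop is that "done" is decided by the config, not by the model: the same agent code can serve many tasks, each with its own criteria.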




