
The Acceptance Criteria Pattern: How to Define 'Done' for Your AI Agent
There's a thread on Hacker News right now with 164+ points: "LLMs work best when the user defines their acceptance criteria first." It's a great point, but most people applying it to chat prompts are missing the bigger insight: this matters 10x more for autonomous agents.

The Problem

When you're chatting with an LLM, you course-correct in real time: you see a bad response and try again. When an AI agent runs autonomously (on a cron schedule, in a loop, processing tasks without supervision) there's no course-correction. If the agent doesn't know what "done" looks like, it will:

- Stop too early (task half-finished)
- Keep going forever (burning tokens on work that was already good enough)
- Do the "right" thing in the wrong context (technically complete, strategically wrong)

The Fix: Acceptance Criteria in the Agent Config

Every agent in our system has explicit done_when criteria in its config. Here's an example:

{
  "agent": "content-agent",
  "task": "draft_tweet",
  "done_when": [
    "tweet is unde…
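To make the pattern concrete, here is a minimal sketch of how an agent loop might enforce done_when criteria: it keeps iterating until every criterion passes, and a hard iteration cap keeps it from running forever. All names here (run_agent, fake_step, the specific criteria) are illustrative assumptions, not part of any particular framework.

```python
def run_agent(step, done_when, max_iters=10):
    """Run `step` until every predicate in `done_when` passes,
    or give up after `max_iters` rounds (prevents infinite loops)."""
    draft = None
    failures = list(done_when)
    for _ in range(max_iters):
        draft = step(draft)
        # Evaluate every acceptance criterion against the current draft
        failures = [name for name, check in done_when.items() if not check(draft)]
        if not failures:
            return draft, "done"
    return draft, "gave up; failing criteria: " + ", ".join(failures)

# Hypothetical acceptance criteria for a tweet draft
criteria = {
    "under_280_chars": lambda t: len(t) <= 280,
    "at_most_two_hashtags": lambda t: t.count("#") <= 2,
}

def fake_step(prev):
    # Stand-in for an LLM call that drafts or revises the tweet
    return "Shipping the acceptance-criteria pattern today."

result, status = run_agent(fake_step, criteria)
print(status)  # -> done
```

The point of the structure is that "done" is a machine-checkable predicate list, not a vibe: the same criteria that gate the loop can be logged when the agent gives up, so a half-finished or over-polished run is visible instead of silent.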


