
The Working Set Prompt: How to Keep LLM Outputs Consistent Across Multi-Step Work
If you’ve ever used an LLM (or a “chat assistant”) for a task that spans multiple steps, you’ve probably seen the same failure pattern: Step 1 is great. Step 2 forgets a key constraint. Step 3 confidently rewrites something you didn’t want changed. Step 4 contradicts Step 1.

The model isn’t “being lazy”. You’re just asking it to juggle too many moving parts without a stable reference.

My fix for this is a simple prompt pattern I call the Working Set. A Working Set is a small, explicit bundle of context that you keep up to date and re-send (or re-assert) every time you start a new sub-task. Think of it like a mini project README + scratchpad + acceptance criteria.

This post shows you how to build one, why it works, and includes copy/paste templates for:

- writing (blog posts, docs)
- coding (feature work, refactors)
- “messy” tasks (research, planning)

Why multi-step prompting breaks

In a long conversation, the “important” parts of your instructions get diluted by:

- Recency bias: newer messages carry more weight than earlier instructions.
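To make the pattern concrete, here is a minimal sketch of a Working Set as code: a small bundle of goal, constraints, decisions, and acceptance criteria that gets re-rendered at the top of every sub-task prompt. All names here (`WorkingSet`, `render`, `make_prompt`) are hypothetical, not from any library.

```python
# A minimal sketch of the Working Set pattern: keep one small, explicit
# bundle of context and prepend it to every sub-task prompt.
from dataclasses import dataclass, field

@dataclass
class WorkingSet:
    goal: str                                          # one-line project goal
    constraints: list = field(default_factory=list)    # hard rules the model must not break
    decisions: list = field(default_factory=list)      # choices already made (don't revisit)
    acceptance: list = field(default_factory=list)     # what "done" looks like

    def render(self) -> str:
        """Render the Working Set as a stable block of text."""
        def bullets(items):
            return "\n".join(f"- {item}" for item in items) or "- (none yet)"
        return (
            f"GOAL: {self.goal}\n"
            f"CONSTRAINTS:\n{bullets(self.constraints)}\n"
            f"DECISIONS SO FAR:\n{bullets(self.decisions)}\n"
            f"ACCEPTANCE CRITERIA:\n{bullets(self.acceptance)}"
        )

def make_prompt(ws: WorkingSet, subtask: str) -> str:
    # Re-assert the Working Set at the top of every sub-task prompt,
    # so late-conversation messages can't silently dilute it.
    return f"{ws.render()}\n\nCURRENT SUB-TASK:\n{subtask}"

ws = WorkingSet(
    goal="Write a blog post about prompt patterns",
    constraints=["Keep the intro under 100 words", "US English"],
)
ws.decisions.append("Title: 'The Working Set Prompt'")  # update as work progresses
print(make_prompt(ws, "Draft the second section."))
```

The point isn’t the code itself (a plain text template works just as well); it’s that the Working Set lives in one place, is updated as decisions are made, and is re-sent verbatim with each sub-task instead of being left to drift in the conversation history.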
Continue reading on Dev.to.



