
Async compaction: the race conditions nobody talks about
Claude Code blocks the agent while compacting. LangGraph runs compaction in the background and silently drops messages. Aider spawns a background thread and hopes for the best. Async compaction sounds like the obvious optimization — until you try to build it. We surveyed how major frameworks handle context compaction timing — synchronous, asynchronous, or not at all — and catalogued the concurrency hazards that emerge when you move compaction off the critical path. Here's what we found.

Why compaction blocks

Most frameworks run compaction synchronously. The agent stops, the LLM summarizes, the agent continues with a shorter context. It's slow but safe.

| Framework | Approach | Agent blocked | Race risk |
|---|---|---|---|
| Claude Code | Sync at 95% capacity | Yes | None |
| LangChain | Sync after turn | Yes | None |
| AutoGen | Sync between chats | Yes | None |
| Cursor | None (manual reset) | N/A | N/A |
| ChatGPT | None (manual) | N/A | N/A |
| Aider | Background thread | No | Medium |
| Google ADK | Async event-based | No | Medium |
| LangGraph | Async background | No | High |

Six of eig
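To make the "silently drops messages" hazard concrete, here is a toy Python sketch — not any framework's actual code; `History`, `compact_naive`, and `compact_safe` are hypothetical names. The naive version snapshots the history, runs a slow summarization, then overwrites the whole list, clobbering any message appended mid-summarization. The safe version remembers how many messages the summary covers and splices instead of overwriting.

```python
import threading

class History:
    """Toy message history shared between an agent and a compactor."""
    def __init__(self):
        self.lock = threading.Lock()
        self.messages = []

    def append(self, msg):
        with self.lock:
            self.messages.append(msg)

def compact_naive(history, summarize):
    snapshot = list(history.messages)    # 1. snapshot
    summary = summarize(snapshot)        # 2. slow LLM call
    history.messages = [summary]         # 3. clobber: late appends are lost

def compact_safe(history, summarize):
    with history.lock:
        n = len(history.messages)        # remember what the summary covers
        snapshot = history.messages[:n]
    summary = summarize(snapshot)        # slow LLM call, lock released
    with history.lock:
        # splice: keep everything appended after the snapshot point
        history.messages = [summary] + history.messages[n:]

def summarize(msgs):
    # stand-in for the LLM summarization call
    return f"<summary of {len(msgs)} messages>"

# Demo: simulate a message arriving while the summarizer is running.
def racy_summarize(msgs):
    h.append("user: late message")       # concurrent append mid-compaction
    return summarize(msgs)

h = History()
for i in range(3):
    h.append(f"turn {i}")
compact_naive(h, racy_summarize)
naive_result = h.messages                # ['<summary of 3 messages>'] — dropped

h = History()
for i in range(3):
    h.append(f"turn {i}")
compact_safe(h, racy_summarize)
safe_result = h.messages                 # summary plus the late message
```

The splice-by-offset trick is the same idea as a compare-and-swap: the summarizer never assumes the history it snapshotted is still the whole history when it writes back.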



