
Autonomous AI Coding: Ralph Loops with Sub-Agents and Skills (Pt. 1)
Introduction

I've been testing Ralph loops recently for agentic coding. The idea is simple: spin up a new Claude Code session for each task to get a fresh context, repeat until the agent achieves the goal, and persist the necessary context between sessions via .md files. It's simple, but it addresses several long-known LLM issues, such as context rot and the generally small size of the context window. This becomes especially relevant when the agent works with large, complex repositories or infrastructure. In addition, you often want a proper research phase to check current best practices and the available libraries and their interfaces, which also consumes a lot of context. Even frontier models like Sonnet and Opus still have relatively small context windows, and that problem won't disappear anytime soon. But we still need a way to leverage the advantages of agentic coding today.

I didn't like the approach of spinning up Claude Code sessions programmatically (for example, in bash). In that case, you
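The loop described above can be sketched in bash. This is a minimal sketch, not a production harness: in a real loop the session would be something like `claude -p "$(cat PROMPT.md PROGRESS.md)"`, but here the agent invocation is stubbed out so the loop structure is runnable standalone, and the file names (PROMPT.md, PROGRESS.md) and the DONE convention are illustrative assumptions, not a fixed protocol.

```shell
#!/usr/bin/env bash
# Ralph loop sketch: each iteration simulates a fresh agent session with a
# clean context; state persists between sessions only via markdown files.
# The real call would be `claude -p "$(cat PROMPT.md PROGRESS.md)"`; it is
# stubbed here so the loop structure runs without the Claude Code CLI.

echo "Task: demo" > PROMPT.md
: > PROGRESS.md

run_session() {
  # Stand-in for one fresh session that reads PROMPT.md and PROGRESS.md.
  echo "iteration note"
  # Pretend the goal is reached once enough progress has accumulated; the
  # real prompt would instruct the agent to emit DONE when finished.
  [ "$(wc -l < PROGRESS.md)" -ge 3 ] && echo "DONE"
}

for i in $(seq 1 20); do
  # Fresh "session" each time; its output is appended to the shared state.
  run_session >> PROGRESS.md
  # Stop as soon as the agent has declared the goal achieved.
  grep -q "DONE" PROGRESS.md && break
done

echo "finished after $i iterations"
```

The point of the structure is that no state survives an iteration except what was written to the .md files, which is exactly the property that keeps each session's context fresh.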
Continue reading on Dev.to
