
What Happened When My Coding Agent Started Remembering User Mistakes
By: Shreya R Chittaragi, Memory & Adaptation Module
Hindsight Hackathon, Team 1/0 coders

The first time our mentor described a user who guesses at answers as "someone who rushes through problems without reading carefully", using only behavioral signals and no labels, I knew the memory layer was working. No one told the system this user was a rusher. No dropdown, no profile form, no manual tag. The agent watched how fast they submitted, counted their edits, saw the syntax errors, and drew that conclusion on its own. Then it adapted its hint accordingly. That's what behavioral memory looks like when it actually works.

What We Built

Our project is an AI Coding Practice Mentor: a system where users submit Python solutions to coding problems, get evaluated, and receive personalized hints. The personalization isn't based on what users tell us about themselves. It's based on how they actually behave while solving problems.

The stack:
- FastAPI backend handling code execution and routing
- Groq (LLaMA 3.3 70B) for generatin…
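The "rusher" inference described above can be sketched as a simple heuristic over the three signals the article names: submission speed, edit counts, and syntax errors. This is a minimal sketch with hypothetical thresholds and class names; the real memory module may weigh the signals differently or use the LLM itself to classify.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    solve_seconds: float  # time from problem shown to submission
    edit_count: int       # edits made before submitting
    syntax_errors: int    # syntax errors in the submission

def infer_style(attempts: list[Attempt],
                fast_threshold: float = 60.0) -> str:
    """Label a user's solving style from raw behavioral signals.

    Thresholds here are illustrative assumptions, not the project's values.
    """
    if not attempts:
        return "unknown"
    n = len(attempts)
    avg_time = sum(a.solve_seconds for a in attempts) / n
    avg_edits = sum(a.edit_count for a in attempts) / n
    error_rate = sum(a.syntax_errors for a in attempts) / n
    # Fast submissions + few edits + frequent syntax errors => rusher
    if avg_time < fast_threshold and avg_edits <= 2 and error_rate >= 1:
        return "rusher"
    return "deliberate"
```

The point is that the label is derived, never declared: the profile emerges from logged attempts, so it updates automatically as the user's habits change.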
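One way the personalized hints described above could be wired together: once a style label exists, it shapes the prompt sent to the model. A minimal sketch, assuming a hypothetical `build_hint_prompt` helper and illustrative prompt wording (the article only states that Groq with LLaMA 3.3 70B does the generation):

```python
def build_hint_prompt(problem: str, code: str, style: str) -> str:
    """Assemble an LLM prompt whose tone depends on the inferred style.

    Prompt text is a hypothetical example, not the system's actual prompt.
    """
    guidance = {
        "rusher": "This user rushes; nudge them to re-read the problem first.",
        "deliberate": "This user is careful; hint at the next conceptual step.",
    }.get(style, "No profile yet; give a neutral hint.")
    return (
        f"You are a coding practice mentor. {guidance}\n"
        f"Problem: {problem}\n"
        f"Submission:\n{code}\n"
        "Give one short hint, not the full solution."
    )
```

Because the behavioral label lives outside the model, the same prompt-assembly step works regardless of which LLM backend generates the hint.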




