
I Built 3 Tools to Stop My AI from Being a Yes-Man (and forgetting everything)
## The Problem

If you use Claude Code (or any AI coding assistant) seriously, you've hit these:

- **Your AI agrees with everything you say.** You challenge it, it immediately surrenders. No pushback, no analysis — just "you're right, sorry."
- **Your AI asks instead of doing.** "Want me to fix that?" Just fix it. You told it to.
- **Your AI forgets things after context compression.** Important context evaporates. Old memories pollute new decisions. There's no cleanup.
- **Your AI keeps making the same mistakes.** You correct it, it says "I'll do better," then does the exact same thing tomorrow.

These aren't bugs. They're structural problems with how AI assistants work. And prompts in CLAUDE.md don't fix them — they get ignored after the first compact.

## The Fix: Code, Not Prompts

We built three tools that solve these problems with actual enforcement — Python scripts you drop in and forget.

### 1. Self-Guard (PreResponse Hook)

A hook that intercepts your AI's response before it reaches you and detects bad behavior patterns.
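As a rough illustration of the idea, here is a minimal sketch of such a hook in Python. The payload shape (a JSON object with a `response` field on stdin) and the specific regex patterns are assumptions for demonstration, not the actual Self-Guard implementation:

```python
import json
import re
import sys

# Hypothetical behavior patterns the hook flags (illustrative, not the real list).
PATTERNS = [
    (re.compile(r"\byou'?re (absolutely )?right\b", re.I), "instant surrender"),
    (re.compile(r"\bwant me to\b|\bshould i\b", re.I), "asking instead of doing"),
    (re.compile(r"\bi'?ll do better\b", re.I), "empty promise"),
]

def check_response(text: str) -> list[str]:
    """Return labels for every bad-behavior pattern found in a draft response."""
    return [label for regex, label in PATTERNS if regex.search(text)]

if __name__ == "__main__":
    # Assumed interface: the runner pipes the draft response in as JSON.
    payload = json.load(sys.stdin)
    flags = check_response(payload.get("response", ""))
    if flags:
        # A non-zero exit would signal the runner to reject and regenerate.
        print("blocked: " + ", ".join(flags), file=sys.stderr)
        sys.exit(2)
```

The key point is the enforcement mechanism: the check runs as code on every response, so it cannot be forgotten after a context compact the way a prompt instruction can.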


