
I Found Out My AI Was Working 2x Longer Than Me — So I Built a Tool to Show It
I was looking at my cc-session-stats output — 116 hours in 50 days — and something didn't add up. My gut said I was working hard. The numbers said I averaged 2.3h/day. Those felt inconsistent. So I dug into the raw data:

```
# Main (interactive) sessions: 86
# Total hours from main sessions: 41.3h
# Subagent sessions (Task tool): 3,420
# Total hours from subagents: 79.0h
```

The AI ran 1.9x longer than I did. 66% of my Claude Code "usage" was actually Claude running autonomously — spawning subagents via the Task tool, processing, returning results — while I was doing something else (or sleeping). This changes the story completely.

## The tool: cc-agent-load

```
npx cc-agent-load
```

Browser version: yurukusa.github.io/cc-agent-load

Output:

```
cc-agent-load
═════════════════════════════════════════════
▸ Your Time vs AI Time
  You ████████░░░░░░░░░░░░░░░░ 41h (34%)   86 sessions
  AI  ████████████████░░░░░░░░ 80h (66%)   3423 sessions
▸ AI Autonomy Ratio
  ████████ 1.9x — AI ran 1.9x longer than you
  Your AI matches
```
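The core of the numbers above is a simple split-and-ratio over session records. Here is a minimal sketch of that aggregation in Python, assuming a simplified record shape (the real cc-agent-load parses Claude Code's session logs; the `Session` fields and `autonomy_stats` helper here are hypothetical, not the tool's actual code):

```python
# Hypothetical sketch of the main-vs-subagent time split.
# Field names are assumptions, not cc-agent-load's real data model.
from dataclasses import dataclass

@dataclass
class Session:
    hours: float
    is_subagent: bool  # True if the session was spawned via the Task tool

def autonomy_stats(sessions):
    """Split total hours into human (main) vs AI (subagent) time."""
    human = sum(s.hours for s in sessions if not s.is_subagent)
    ai = sum(s.hours for s in sessions if s.is_subagent)
    total = human + ai
    return {
        "human_hours": human,
        "ai_hours": ai,
        "ai_share": ai / total if total else 0.0,
        "autonomy_ratio": ai / human if human else float("inf"),
    }

# With the article's totals: 41.3h of main sessions, 79.0h of subagents.
stats = autonomy_stats([Session(41.3, False), Session(79.0, True)])
print(f"AI share: {stats['ai_share']:.0%}, ratio: {stats['autonomy_ratio']:.1f}x")
# → AI share: 66%, ratio: 1.9x
```

The ratio is just subagent hours divided by interactive hours, which is why 79.0h over 41.3h lands at 1.9x.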


