
I built an open-source system to track how engineers actually adapt to AI
The problem

Everyone has opinions about what engineers should do in response to AI. Almost no one has data about what they actually do. I wanted data.

What I built

HumanExodus is a longitudinal observation system. It captures how engineers respond to AI pressure at the moment it's happening, then follows up 30 days later to find out what actually happened. The gap between intention and reality is the dataset.

How it works

1. Engineer enters role, experience, and tech stack
2. Rule-based engine estimates AI exposure level (HIGH / MEDIUM / LOW)
3. Claude API generates personalised adjacent moves based on their profile
4. Engineer selects their intended next step
5. Session saved to Supabase
6. 30 days later: automated email via Resend asks what actually happened
7. Outcome saved as a follow-up record

What we're seeing so far

With just 11 sessions, a pattern is already emerging:

- HIGH exposure engineers: high uncertainty (40% "not sure")
- MEDIUM exposure engineers: mostly staying put (80% "stay same")
- LOW expos
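To make steps 2 and 6 concrete, here is a minimal TypeScript sketch of what a rule-based exposure engine and the 30-day follow-up scheduling could look like. The role lists, thresholds, and function names are my own assumptions for illustration, not the actual HumanExodus implementation.

```typescript
type ExposureLevel = "HIGH" | "MEDIUM" | "LOW";

interface Profile {
  role: string;
  yearsExperience: number;
  stack: string[];
}

// Assumed example lists: roles whose day-to-day output overlaps
// heavily (or barely) with current AI tooling.
const HIGH_RISK_ROLES = ["frontend developer", "qa engineer", "data analyst"];
const LOW_RISK_ROLES = ["sre", "embedded engineer", "security engineer"];

// Step 2: a simple rule-based estimate of AI exposure.
function estimateExposure(p: Profile): ExposureLevel {
  const role = p.role.toLowerCase();
  if (HIGH_RISK_ROLES.some((r) => role.includes(r))) return "HIGH";
  if (LOW_RISK_ROLES.some((r) => role.includes(r))) return "LOW";
  // Assumption: seniority dampens exposure — more judgment-heavy,
  // less routine work.
  return p.yearsExperience >= 8 ? "LOW" : "MEDIUM";
}

// Step 6: compute when the automated follow-up email should fire,
// exactly 30 days after the session (UTC, to avoid timezone drift).
function followUpDate(sessionDate: Date): Date {
  const d = new Date(sessionDate);
  d.setUTCDate(d.getUTCDate() + 30);
  return d;
}
```

In the real system the Claude call, the Supabase write, and the Resend email would sit between these two pure functions; keeping the classification and scheduling logic side-effect-free like this makes them trivial to test.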
Continue reading on Dev.to




