
I Built an AI That Detects Pet Stress From Photos — Here's the Stack
Everyone's building AI for humans. I thought: what about dogs?

My dog Biscuit has "resting panic face." He looks catastrophically stressed at all times, even when he's asleep. Vets kept telling me he was fine. I didn't believe them. So I did what any reasonable developer does: I over-engineered a solution.

This is the story of how I built a pet stress detection API using computer vision, and what I learned shipping it.

The Problem (In Dog Terms)

Animals communicate stress through body language: ear position, tail carriage, muscle tension, eye shape. According to veterinary behaviour research, humans miss roughly 70% of these signals. We're wired to anthropomorphise: a dog "smiling" is often a stress pant.

The question: can a model learn to read these signals reliably from a standard smartphone photo?

Short answer: yes, surprisingly well.

The Stack

- Python 3.12
- FastAPI (inference endpoint)
- Hugging Face Transformers (ViT base)
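The post cuts off before any code, but here's a minimal sketch of the scoring logic an endpoint like this might sit on top of: collapsing raw classifier logits from a fine-tuned ViT head into a single 0-1 stress score. The label names and weights are my assumptions, not from the article, and the real service would produce its logits with Transformers rather than hard-code them.

```python
import math

# Hypothetical labels a fine-tuned ViT classification head might emit.
# These names are illustrative assumptions, not from the article.
LABELS = ["relaxed", "alert", "stressed"]

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def stress_score(logits):
    """Collapse per-label probabilities into one 0-1 stress score.

    The weighting (alert counts half, stressed counts fully) is an
    assumed design choice for illustration.
    """
    weights = {"relaxed": 0.0, "alert": 0.5, "stressed": 1.0}
    probs = softmax(logits)
    return sum(weights[label] * p for label, p in zip(LABELS, probs))
```

In a FastAPI app, a handler would run the uploaded photo through the ViT model, then pass the resulting logits to `stress_score` and return the number as JSON.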




