Why my AI crash reconstruction MVP isn't ready for production (and why I'm rebuilding it)
We all love the demo phase. You hook up an API, the UI updates, and for a second the software feels like absolute magic. I recently hit that phase with a project called Incident Lens AI, a forensic video analysis suite I have been building to automate crash reconstruction for insurance and legal teams. The goal is to take raw dashcam or CCTV footage and turn it into a defensible liability report.

To validate the idea quickly, I built a frontend-first proof of concept using React, Vite, and the Gemini 3 Pro SDK. I piped the video frames and audio directly from the browser to the LLM and asked it to act as a forensic expert. And honestly, it makes for an incredible demo: you drop a video in, and the system instantly starts reasoning about the crash. It generates liability timelines, cites traffic laws, and outputs structured JSON that drives interactive charts on the dashboard. Building it this way let me iterate on the UI and prove the multimodal concept without writing a single line of backend code.
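To make the "pipe frames from the browser to the LLM" step concrete, here is a minimal sketch of how such a multimodal request might be assembled client-side. The helper name, the prompt wording, and the JSON schema are all hypothetical illustrations (the article does not show the actual Incident Lens AI code); it assumes frames have already been captured from a `<video>` element as base64-encoded JPEGs, and the resulting parts array would then be handed to the Gemini SDK's generate call.

```typescript
// Hypothetical sketch of browser-side request assembly for a
// frontend-first multimodal prototype. Not the article's real code.

interface Part {
  text?: string;
  inlineData?: { mimeType: string; data: string };
}

// The "act as a forensic expert" instruction, asking for structured
// JSON so the response can drive dashboard charts directly.
const FORENSIC_PROMPT = `You are a forensic crash-reconstruction expert.
Analyze the attached dashcam frames and respond ONLY with JSON:
{ "timeline": [{ "t": "<mm:ss>", "event": "<description>" }],
  "liability": { "partyA": "<percent>", "partyB": "<percent>" },
  "citedLaws": ["<statute>"] }`;

function buildForensicRequest(frameJpegsBase64: string[]): Part[] {
  // One text part carrying the expert prompt, followed by one
  // inline-image part per sampled video frame.
  const parts: Part[] = [{ text: FORENSIC_PROMPT }];
  for (const data of frameJpegsBase64) {
    parts.push({ inlineData: { mimeType: "image/jpeg", data } });
  }
  return parts;
}
```

The appeal (and, as the rest of the post argues, the risk) is that this is the entire pipeline: sample frames, attach a prompt, and let the model do the reasoning, with no server in between.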