
I Added AI Video Clips to My SaaS and It Broke Everything — 5 Times
Static images were working fine. Users were happy. Revenue was growing. So naturally, I decided to break everything.

I run RepoClip, a SaaS that turns GitHub repos into promotional videos. The pipeline analyzes code with Gemini, generates scene images, adds AI narration, and renders the final video with Remotion. The previous article covered switching the image model from FLUX.2 to Nano Banana 2, a 6.7x cost increase that turned out to be noise.

This time, I went bigger. Instead of still images, I wanted each scene to be an AI-generated video clip: five 5-second clips stitched together with narration. The kind of output that makes people say "wait, AI made this?"

The model: Kling 3.0 Pro via Fal.ai's queue API. The result: it works, beautifully. But getting there nearly broke me.

Why Video Clips?

The still-image pipeline was solid. Nano Banana 2 produces gorgeous frames. But promo videos with static images and Ken Burns zoom feel like... slideshows. Because they are. AI video gener
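For context, queue-style generation APIs like Fal.ai's follow a submit-then-poll pattern: you enqueue a job, get back a request id, and poll for status until the clip is ready. Here is a minimal sketch of that flow; the `FakeQueue` class and all names in it are illustrative stand-ins, not the real `fal_client` interface.

```python
import time
import uuid

class FakeQueue:
    """Stands in for a remote generation queue: submit() returns a
    request id, and status() reports COMPLETED after a few polls."""
    def __init__(self, polls_until_done=3):
        self._remaining = {}
        self.polls_until_done = polls_until_done

    def submit(self, model, arguments):
        request_id = str(uuid.uuid4())
        self._remaining[request_id] = self.polls_until_done
        return request_id

    def status(self, request_id):
        self._remaining[request_id] -= 1
        return "COMPLETED" if self._remaining[request_id] <= 0 else "IN_PROGRESS"

def generate_clip(queue, prompt, duration_s=5, poll_interval_s=0.0):
    """Submit one scene prompt and block until the queue reports done."""
    request_id = queue.submit("video-model", {"prompt": prompt, "duration": duration_s})
    while queue.status(request_id) != "COMPLETED":
        time.sleep(poll_interval_s)  # a real client would back off between polls
    return request_id

# Five scenes -> five 5-second clips, matching the pipeline described above.
clips = [generate_clip(FakeQueue(), f"scene {i}") for i in range(5)]
print(len(clips))
```

In the real pipeline each completed request would also return a clip URL to download before stitching, but the submit/poll skeleton is the part that shapes the rest of the code.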
Continue reading on Dev.to Webdev



