
We Just Launched Virtual Try-On in Our API — Here's How It Actually Works (With Real Results)
Virtual try-on has been a "coming soon" feature for most AI APIs. The models that actually work well were either licensed non-commercial, needed 48GB+ VRAM, or required DensePose infrastructure that nobody explains how to set up. We shipped it this week on PixelAPI. Here's an honest breakdown of what we built, what we learned, and what the results actually look like.

## What We Built

- **Endpoint:** `POST /v1/virtual-tryon`
- **Pricing:** 50 credits ($0.05 per try-on)
- **Categories:** `upperbody`, `lowerbody`, `dress`

The pipeline: you send a person image + a garment image → you get back the person wearing the garment. That's the promise. Here's the reality of what makes it actually work.

## The Tech Stack

We evaluated several models before picking one:

| Model | License | VRAM | Notes |
| --- | --- | --- | --- |
| CatVTON | CC BY-NC-SA ❌ | 8GB | Non-commercial only |
| OOTDiffusion | Apache 2.0 ✅ | 12GB | Decent quality |
| Leffa | MIT ✅ | 20-24GB | CVPR 2025, best quality |

We went with Leffa (CVPR 2025). MIT licensed, state-of-the-art quality, and the paper's attention flow
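As a rough sketch of what a call to the endpoint might look like: the snippet below assumes a multipart upload with a Bearer token, and the base URL and field names (`person_image`, `garment_image`, `category`) are hypothetical, not the documented contract — check the PixelAPI docs for the real parameter names.

```python
import requests

# Hypothetical base URL -- substitute the real PixelAPI host.
TRYON_URL = "https://api.pixelapi.example/v1/virtual-tryon"

# The three categories the endpoint supports.
VALID_CATEGORIES = {"upperbody", "lowerbody", "dress"}


def try_on(person_path, garment_path, category, api_key):
    """Send a person image + garment image, get back the rendered try-on."""
    if category not in VALID_CATEGORIES:
        raise ValueError(f"category must be one of {sorted(VALID_CATEGORIES)}")
    with open(person_path, "rb") as person, open(garment_path, "rb") as garment:
        resp = requests.post(
            TRYON_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"person_image": person, "garment_image": garment},
            data={"category": category},
        )
    resp.raise_for_status()
    return resp.content  # image bytes of the person wearing the garment
```

At 50 credits ($0.05) per call, validating the category client-side before uploading saves a round trip on obviously bad requests.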




