
How I Built a Design API for AI Agents as a Solo Dev
AI agents are getting powerful at text, but they still can't make images: no social posts, no banners, no infographics. I built RendrKit to fix that.

What It Does

One API call with a text prompt returns a production-ready PNG:

```bash
curl -X POST https://api.rendrkit.dev/api/v1/generate \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Launch announcement for RendrKit - Design API for AI agents"}'
```

Behind the scenes:

1. GPT-4o-mini analyzes the prompt and picks the best template
2. It fills in the template slots (headline, subtitle, colors, photo query)
3. Playwright renders the HTML/CSS to PNG
4. The image goes to a CDN, and you get a URL back

Two Modes

Prompt Mode - send text, get an image. GPT does the thinking. $0.005/image.

Direct Render - you pick the templateId and fill the slots yourself. No GPT, no latency. $0.001/image.

The Stack

• Next.js on Vercel for API routes
• Playwright for HTML-to-PNG rendering
• GPT-4o-mini for template selection + slot filling
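Step 2 of the pipeline, filling template slots, can be sketched with nothing but the standard library. The template markup and slot names below are hypothetical, purely to illustrate the "fill slots, then render HTML" idea; the real service's template format is not documented in the article.

```python
from string import Template

# Hypothetical banner template. Slot names ($headline, $subtitle, $accent)
# are illustrative, not RendrKit's actual schema.
BANNER = Template("""
<div class="banner" style="background: $accent">
  <h1>$headline</h1>
  <p>$subtitle</p>
</div>
""")

def fill_slots(slots: dict) -> str:
    # Slot values (chosen by GPT-4o-mini in Prompt Mode, or supplied by the
    # caller in Direct Render) are merged into the HTML template. The result
    # is what a renderer like Playwright would turn into a PNG.
    return BANNER.substitute(slots)

html = fill_slots({
    "headline": "RendrKit is live",
    "subtitle": "Design API for AI agents",
    "accent": "#4f46e5",
})
```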
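The two request shapes can also be sketched as a minimal Python client. The endpoint and the `prompt` field come from the curl example above; the Direct Render fields (`templateId`, `slots`) follow the article's description, but their exact names and the slot schema are assumptions.

```python
import json
import urllib.request

API_URL = "https://api.rendrkit.dev/api/v1/generate"

def prompt_mode_payload(prompt: str) -> dict:
    # Prompt Mode: send text only; GPT-4o-mini picks the template and
    # fills the slots server-side.
    return {"prompt": prompt}

def direct_render_payload(template_id: str, slots: dict) -> dict:
    # Direct Render: caller names the template and supplies slot values,
    # so no GPT call is made. Field names here are assumed, not documented.
    return {"templateId": template_id, "slots": slots}

def generate(payload: dict, api_key: str) -> bytes:
    # POST the payload; per the article, the response carries a CDN URL
    # for the rendered PNG.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Building the two payload shapes (no network call made here):
p1 = prompt_mode_payload("Launch announcement for RendrKit")
p2 = direct_render_payload("launch-banner", {"headline": "RendrKit is live"})
```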
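At the quoted per-image rates, the cost gap between the two modes is easy to work out; this small sketch just applies the prices from the article to an assumed monthly volume.

```python
PROMPT_MODE_RATE = 0.005    # $/image: GPT picks the template and fills slots
DIRECT_RENDER_RATE = 0.001  # $/image: caller supplies templateId and slots

def monthly_cost(images: int, rate: float) -> float:
    # Total spend for a month at a flat per-image rate.
    return images * rate

# At 10,000 images/month (an assumed volume), Prompt Mode costs about $50
# while Direct Render costs about $10 - a 5x difference.
prompt_cost = monthly_cost(10_000, PROMPT_MODE_RATE)
direct_cost = monthly_cost(10_000, DIRECT_RENDER_RATE)
```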


