
Your framework choice is now your biggest AI cost lever
The Wasp team published something worth reading today: they gave Claude Code the exact same feature prompt for two identical apps, one in Next.js and one in Wasp, and measured everything. The numbers:

| Metric | Wasp | Next.js |
| --- | --- | --- |
| Total cost | $2.87 | $5.17 |
| Total tokens | 2.5M | 4.0M |
| API calls | 66 | 96 |
| Output tokens (code written) | 5,416 | 5,395 |

The last row is the interesting one. The AI wrote almost exactly the same amount of code, yet it cost 80% more to do it in Next.js.

The reason: cache creation and cache reads. Every LLM call re-reads the codebase context from scratch, so a bigger codebase means every single turn costs more: not just for reading, but for loading into cache in the first place. Next.js cache creation was 113% more expensive. Not because the AI did more, but because it had more boilerplate to read before it could start.

What this actually means

We've been evaluating frameworks on DX, performance, and ecosystem. Add a new one: context efficiency. How much of an AI's context window goes to signal…
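The cost dynamic described above can be sketched numerically. This is an illustrative model, not the Wasp team's methodology: the per-million-token rates, token counts, and the `session_cost` helper are all hypothetical, chosen only to show how context size, not output volume, dominates the bill.

```python
# Hypothetical per-million-token rates, for illustration only.
PRICE_PER_M = {
    "cache_write": 3.75,  # assumed cache-creation rate ($/M tokens)
    "cache_read": 0.30,   # assumed cache-read rate ($/M tokens)
    "output": 15.00,      # assumed output-token rate ($/M tokens)
}

def session_cost(context_tokens, turns, output_tokens_per_turn):
    """Toy cost model for an agent session: the codebase context is
    written to cache once, then re-read on every subsequent turn,
    while output tokens are billed per turn."""
    write = context_tokens * PRICE_PER_M["cache_write"] / 1e6
    reads = context_tokens * (turns - 1) * PRICE_PER_M["cache_read"] / 1e6
    output = output_tokens_per_turn * turns * PRICE_PER_M["output"] / 1e6
    return write + reads + output

# Two sessions writing roughly the same amount of code (~5.4k output
# tokens total), but one drags twice the context through every turn.
lean = session_cost(context_tokens=40_000, turns=66, output_tokens_per_turn=82)
bloated = session_cost(context_tokens=80_000, turns=96, output_tokens_per_turn=56)
print(f"lean: ${lean:.2f}  bloated: ${bloated:.2f}")
```

Under these made-up rates, the output tokens contribute only a few cents to either session; nearly the entire gap comes from caching and re-reading the larger context on every turn, which mirrors the pattern in the table above.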
Continue reading on Dev.to Webdev


