AI Model Pricing Is a Mess — Here Is How We Track It

via Dev.to, by Simon Sharp

There are over 100 LLM models available through commercial APIs today. Their pricing changes constantly, sometimes multiple times per week. New models launch, old ones get deprecated, and providers quietly adjust rates. If you are building with LLMs, you have probably experienced this: you pick a model, hardcode it, ship it, and three months later discover you are paying 10x what a newer model would cost for the same quality. We built WhichModel to fix this.

The Scale of the Problem

Here is what tracking LLM pricing actually looks like:

- 10+ providers with different pricing pages, formats, and update cadences
- 100+ models with different input/output/cached token rates
- Capability matrices that change with each model update (vision support, tool calling, JSON mode, context windows)
- Quality tiers that do not map cleanly to price: a $0.60/M-token model can outperform a $15/M-token model on specific tasks

Most teams handle this by... not
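To make the per-token arithmetic concrete, here is a minimal sketch of how per-request cost falls out of per-million-token rates. The model names and prices below are hypothetical placeholders, not real provider rates, and the structure is an illustration rather than WhichModel's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ModelPrice:
    """Hypothetical pricing entry: USD per 1M input/output tokens."""
    name: str
    input_per_m: float
    output_per_m: float

    def request_cost(self, input_tokens: int, output_tokens: int) -> float:
        # Rates are quoted per million tokens, so divide at the end.
        return (input_tokens * self.input_per_m
                + output_tokens * self.output_per_m) / 1_000_000

# Placeholder catalog entries, for illustration only.
catalog = [
    ModelPrice("cheap-model", input_per_m=0.60, output_per_m=2.40),
    ModelPrice("premium-model", input_per_m=15.00, output_per_m=75.00),
]

# Cost of a typical request: 2,000 input tokens, 500 output tokens.
for m in sorted(catalog, key=lambda m: m.request_cost(2000, 500)):
    print(f"{m.name}: ${m.request_cost(2000, 500):.4f}")
```

Even this toy comparison shows the spread: the same request costs $0.0024 on the cheap entry and $0.0675 on the premium one, roughly a 28x difference, before cached-token discounts or quality differences enter the picture.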

Continue reading on Dev.to
