I Built a Token Counter That Works Offline — 19 Models, File Drop, Cost Estimator

via Dev.to WebdevClavis

---
title: "I Built a Token Counter That Works Offline — 19 Models, File Drop, Cost Estimator"
published: true
description: "A free, 100% client-side token counter for GPT-4.1, Claude 3.7, Gemini 2.5 Pro, DeepSeek R1, and 15 more models. No API key. No server. Drop a file and see your token count instantly."
tags: showdev, webdev, ai, llm
canonical_url: https://citriac.github.io/token-counter.html
---

Every time I start writing a prompt, I hit the same friction: how many tokens is this? I'm building LLM-powered stuff on a 2014 MacBook (yes, really), and I don't want to:

- Send my prompt text to some third-party server
- Open the OpenAI playground just to count tokens
- Write Python every time I want a quick estimate

So I built Token Counter 2026 — a fully offline, zero-dependency token counter that runs in your browser.

What it does

19 models with real pricing (March 2026):

- OpenAI: GPT-4o, GPT-4.1, GPT-4.1 mini, o3, o3-mini, o4-mini
- Anthropic: Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3.5 Haiku G
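To give a sense of how a client-side token counter and cost estimator can fit together, here is a minimal sketch. It is not the tool's actual implementation (which presumably ships real per-model tokenizers); it uses the common rough heuristic of about 4 characters per token for English text, and the model names and prices in `PRICING_PER_MTOK` are illustrative placeholders, not real rates.

```javascript
// Hypothetical input prices in USD per 1M tokens -- placeholder values only.
const PRICING_PER_MTOK = {
  "gpt-4o": 2.50,
  "claude-3.7-sonnet": 3.00,
};

// Crude offline estimate: ~4 characters per token for typical English prose.
// A real tokenizer (BPE) would be needed for exact counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Combine the token estimate with a per-model price to get a cost estimate.
function estimateCost(text, model) {
  const price = PRICING_PER_MTOK[model];
  if (price === undefined) throw new Error(`unknown model: ${model}`);
  const tokens = estimateTokens(text);
  return { tokens, costUsd: (tokens / 1_000_000) * price };
}
```

Since everything runs in the browser, the prompt text never leaves the machine; only the pricing table needs occasional updates.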

Continue reading on Dev.to Webdev
