
Why is tracking LLM token usage still so annoying?
If you build anything with the OpenAI or Claude APIs, you've probably run into this at some point. You're testing prompts, running scripts, tweaking things quickly… and suddenly you realize you have no real sense of how many tokens you're burning in real time. You can check dashboards later, sure. But while you're actually developing, it's basically invisible. You run something, it works, and only later do you discover the cost.

I kept running into the same problem:

• running prompt experiments
• testing agents or scripts
• debugging API calls

and having no immediate visibility into token usage while coding.

Most tools that exist are either:

• dashboards after the fact
• logging solutions
• full analytics platforms

But I just wanted something extremely simple: a tiny indicator that shows token usage while I'm working. So I ended up building a small macOS menu bar tool that shows token usage in real time while you're developing. No dashboards. No analytics platform. Just a token counter sitting in your menu bar.
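The core idea is simpler than it sounds: OpenAI-style chat responses already carry a `usage` block with prompt and completion token counts, so a live counter only needs to accumulate those as calls happen. Here's a minimal sketch of that accumulation logic — the response dicts are illustrative stand-ins rather than real API calls, and `TokenCounter` is just a hypothetical name for the idea:

```python
# Sketch: accumulate the `usage` block that OpenAI-style API responses
# include, so a running total can be surfaced while you develop.
class TokenCounter:
    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, response: dict) -> None:
        # OpenAI chat completions report usage like:
        # {"usage": {"prompt_tokens": ..., "completion_tokens": ...}}
        usage = response.get("usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total(self) -> int:
        return self.prompt_tokens + self.completion_tokens


counter = TokenCounter()
# Pretend these came back from two API calls during a dev session:
counter.record({"usage": {"prompt_tokens": 120, "completion_tokens": 45}})
counter.record({"usage": {"prompt_tokens": 80, "completion_tokens": 30}})
print(f"{counter.total} tokens this session")  # → 275 tokens this session
```

The menu bar part is then just rendering `counter.total` somewhere visible; the counting itself is nothing more than summing fields the API already returns.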
