Claude Code has quickly become one of the most heavily used AI development tools in everyday engineering workflows, but one part of the experience has remained surprisingly opaque: how many tokens are actually being consumed, which models are used across sessions, how the prompt cache affects usage, and what that work would cost at API pricing. That is the gap claude-usage aims to fill. It is an open-source GitHub project that turns Claude Code’s local session logs into a browser-based dashboard with charts, history, and rough cost estimates.

What makes the tool interesting is that it does not invent a new telemetry layer. It reads data Claude Code already stores locally. Anthropic’s own Claude Code documentation says the ~/.claude directory contains plaintext data written during sessions, including full conversation transcripts under projects/<project>/<session>.jsonl, along with tool results, debug logs, and file history. The claude-usage scanner parses those JSONL files and writes the extracted data into a local SQLite database at ~/.claude/usage.db.
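The scan step described above can be sketched in stdlib-only Python. To be clear about assumptions: the JSONL field names used here ("message", "usage", "input_tokens", "output_tokens", "model") follow the shape of Anthropic's API responses but are guesses about the transcript format, and the table schema is invented for illustration. The project's actual parser and schema may differ.

```python
# Illustrative sketch of scanning Claude Code session transcripts into
# SQLite. Field names and the table layout are assumptions, not the
# project's real implementation.
import json
import sqlite3
from pathlib import Path


def scan_transcripts(claude_dir: Path, db_path: Path) -> int:
    """Parse projects/*/*.jsonl under claude_dir; return rows inserted."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS usage (
               session TEXT, model TEXT,
               input_tokens INTEGER, output_tokens INTEGER)"""
    )
    rows = 0
    for jsonl in claude_dir.glob("projects/*/*.jsonl"):
        for line in jsonl.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than abort the scan
            if not isinstance(event, dict):
                continue
            msg = event.get("message")
            if not isinstance(msg, dict):
                continue
            usage = msg.get("usage")
            if not isinstance(usage, dict):
                continue
            con.execute(
                "INSERT INTO usage VALUES (?, ?, ?, ?)",
                (jsonl.stem,                       # session id from filename
                 msg.get("model", "unknown"),
                 usage.get("input_tokens", 0),
                 usage.get("output_tokens", 0)),
            )
            rows += 1
    con.commit()
    con.close()
    return rows
```

Skipping unparsable lines instead of raising matters here: local transcripts can be partially written while a session is still running.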

According to the repository, the project tracks usage from the Claude Code CLI, the VS Code extension, and sessions dispatched through Claude Code. It explicitly says it does not capture Cowork sessions, because those run server-side and do not write local JSONL transcripts. That distinction matters: the dashboard gives a broad view of activity without necessarily reflecting every Claude interaction a user may have.

The project is also intentionally lightweight. Its README says it requires Python 3.8 or newer and no third-party packages, relying only on the standard library, including sqlite3, http.server, json, and pathlib. The workflow is simple: scan indexes session files, today shows a terminal summary, stats surfaces all-time usage, and dashboard scans and opens a local dashboard on localhost:8080. The scanner is incremental, so repeated runs only process new or modified files.

For developers who spend a lot of time inside Claude Code, the practical appeal is obvious. Anthropic does offer some visibility through Claude Code itself, including the /context command and a customizable status line. The official docs show that users can build a status line script to surface model and context information in the interface, and they also note that community projects already exist around that idea. But that is still different from having a dedicated historical dashboard that aggregates sessions, token usage, and model mix over time.

That said, the dashboard should be read as a useful approximation, not a billing instrument. The claude-usage README estimates costs using Anthropic API pricing and warns that Pro and Max users are on a different, subscription-based cost structure. That alone means the “cost” it shows is better understood as an API-equivalent value than as what a user actually pays. There is another wrinkle: Anthropic’s current public pricing page lists Sonnet 4.5 at $3 per million input tokens and $15 per million output tokens, and Opus 4.5 at $5 and $25 respectively, while the repository displays higher numbers in its own April 2026 assumptions.
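The arithmetic behind an API-equivalent estimate is straightforward. A worked example using the public rates quoted above ($3 / $15 per million input / output tokens for Sonnet 4.5, $5 / $25 for Opus 4.5); the model-name keys are illustrative labels, and the rate table deliberately ignores prompt-cache pricing tiers, which a real estimate would also need to account for.

```python
# API-equivalent cost from token counts, at the published per-model
# rates quoted in the article. Cache read/write tiers are omitted.
RATES_PER_MTOK = {  # model label -> (input, output) USD per million tokens
    "sonnet-4.5": (3.0, 15.0),
    "opus-4.5": (5.0, 25.0),
}


def api_equivalent_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    rate_in, rate_out = RATES_PER_MTOK[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000
```

For example, a session with 2,000,000 input tokens and 100,000 output tokens on Sonnet 4.5 works out to $6.00 + $1.50 = $7.50 at these rates.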

There is a second limitation that matters even more for power users and anyone trying to reconcile local tracking with actual billing. A public GitHub issue in Anthropic’s Claude Code repository reports that JSONL session transcripts may miss the final message_stop event and undercount output_tokens, with the reporter warning that tools parsing these logs can underestimate spend by a large margin. The issue specifically says this affects cost-tracking tools that rely on local JSONL data and makes it difficult to reconcile session-level estimates with Anthropic’s billing dashboard.
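One mitigation a local tool could apply is flagging transcripts whose last parsed event is not a terminal "message_stop", since those sessions may undercount output tokens per the issue above. This is a hedged sketch: the "type" field and the "message_stop" value mirror Anthropic's streaming event names, but whether they appear in local transcripts in exactly this form is an assumption.

```python
# Heuristic check for possibly-truncated session transcripts: does the
# last well-formed event carry the terminal "message_stop" type?
import json
from pathlib import Path


def looks_truncated(jsonl_path: Path) -> bool:
    last_type = None
    for line in jsonl_path.read_text().splitlines():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # ignore partial or malformed lines
        if isinstance(event, dict):
            last_type = event.get("type")
    return last_type != "message_stop"
```

A dashboard could mark such sessions as lower-bound estimates rather than silently folding them into totals.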

That does not make claude-usage irrelevant. In practice, it still solves a real observability problem for people working heavily with Claude Code. Even imperfect local telemetry can be useful for spotting trends: which projects are consuming the most context, whether one model is being used far more often than expected, how much cache read activity is happening, or how long sessions are growing over time. For engineering teams trying to operationalize AI coding tools, that kind of visibility is often more valuable than a perfectly exact cost figure.

There is also a broader signal here about the maturity of the Claude Code ecosystem. Once a platform starts generating local dashboards, session explorers, usage monitors, and status line extensions from third-party developers, it usually means the tool has moved beyond novelty and into daily operational use. Anthropic’s own documentation already treats local transcripts, hooks, skills, subagents, and session management as normal parts of the Claude Code environment. claude-usage fits naturally into that evolution: it is not changing Claude Code, but it is helping users see what Claude Code is actually doing over time.

In that sense, the project matters less as a polished end product and more as a sign of where AI coding infrastructure is heading. As developer agents become part of everyday work, teams will want more than strong outputs. They will want auditability, usage visibility, budget awareness, and tooling that makes local session data easier to understand. claude-usage is a small but telling example of that next layer taking shape around Claude Code.

FAQ

What is claude-usage?
It is an open-source local dashboard that reads Claude Code session logs and turns them into charts, session history, token summaries, and API-style cost estimates. It stores processed data in a local SQLite database and serves a dashboard on localhost:8080.

Does it send Claude Code data to an external service?
The repository describes it as a local-only tool. It reads JSONL transcripts from ~/.claude/projects/, writes to ~/.claude/usage.db, and serves the dashboard from a local HTTP server.

Are the cost estimates the same as what Pro or Max users actually pay?
No. The project itself notes that its numbers are based on API pricing, while Pro and Max are subscription plans. Anthropic’s pricing page also shows official API rates that may differ from the assumptions used in the repository.

Can the dashboard miss or miscount some usage?
Yes. The repository says Cowork sessions are not captured because they do not write local JSONL transcripts, and a public Claude Code GitHub issue reports that JSONL logs can undercount output tokens if final events are missing.
