# token-learn

Track your personal Claude Code token costs. Know what you've spent, what's left, and when your window resets.
```sh
npm install -g token-learn
```

## For Agents
All commands support `--json` for machine-readable output.
### Check remaining budget before starting work
```sh
token-learn budget --json
```

```json
{
  "remaining": 464787,
  "remaining_pct": 72.2,
  "used_pct": 27.8,
  "window_start": "2026-03-22T01:12:01.143Z",
  "window_end": "2026-03-22T06:12:01.143Z",
  "time_left_minutes": 30,
  "used": { "billable": 178569, "input_tokens": 25805, "output_tokens": 152764 },
  "window_limit": 643356,
  "window_hours": 5
}
```

Key fields for agent decisions:
- `remaining` — tokens you can still use
- `remaining_pct` — percentage of budget left
- `time_left_minutes` — minutes until the window resets (0 = fresh window on next message)
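A budget check like the one above can be scripted. This is a minimal sketch, not part of token-learn itself; the `classify_budget` helper and its labels are illustrative, and the thresholds come from the CLAUDE.md guidance later in this README:

```python
import json

# Sample `token-learn budget --json` output (values from the example above).
budget_json = '''{
  "remaining": 464787,
  "remaining_pct": 72.2,
  "time_left_minutes": 30,
  "window_limit": 643356
}'''

def classify_budget(budget: dict) -> str:
    """Map the key fields onto a coarse go/no-go decision."""
    if budget["time_left_minutes"] == 0:
        return "fresh"         # next message starts a new window
    if budget["remaining_pct"] < 10:
        return "trivial_only"  # nearly out of budget
    if budget["remaining_pct"] < 30:
        return "small_tasks"   # prefer small, predictable work
    return "normal"

budget = json.loads(budget_json)
print(classify_budget(budget))  # -> normal (72.2% of the budget remains)
```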
### Estimate cost before starting a task
```sh
token-learn estimate -c code -s bug_fix --json
```

```json
{
  "estimated_tokens": 18000,
  "recommended_budget": 28800,
  "range": { "p25": 8000, "median": 18000, "p75": 32000, "p95": 55000 },
  "variance": "high",
  "confidence": 0.3
}
```

Decision rule: only proceed if `remaining > recommended_budget`.
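Applied to the sample outputs above, the rule is a one-line comparison. A sketch; in real use `remaining` would come from `token-learn budget --json`:

```python
import json

estimate = json.loads('''{
  "estimated_tokens": 18000,
  "recommended_budget": 28800,
  "variance": "high",
  "confidence": 0.3
}''')
remaining = 464787  # `remaining` from the budget example above

# Decision rule: only proceed if remaining > recommended_budget.
proceed = remaining > estimate["recommended_budget"]
print(proceed)  # -> True
```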
### Plan what fits in remaining budget
```sh
token-learn plan -b 464787 --json
token-learn plan -b 464787 -t '[{"category":"code","subcategory":"bug_fix"},{"category":"email","subcategory":"reply"}]' --json
```

### Log a completed task

```sh
token-learn log -c code -s bug_fix -t 22400 --json
token-learn log -c email -s reply -t 1500 --json
```

### Agent decision loop
1. `token-learn budget --json` → check remaining tokens & time
2. If `time_left_minutes == 0`, window is fresh — go big
3. If `remaining_pct < 10`, only trivial tasks or wait
4. `token-learn plan -b <remaining>` → get recommended tasks
5. Do the top task
6. `token-learn log -c <cat> -s <sub> -t <actual_tokens> --json`
7. Go to step 1
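The loop can be sketched with the CLI calls injected as functions, so the control flow can be dry-run without spending tokens. The `cli` helper and the shape of the `plan` output are assumptions for illustration, not token-learn's documented API:

```python
import json
import subprocess

def cli(*args):
    """Shell out to token-learn and parse its --json output (assumes the CLI is installed)."""
    out = subprocess.run(["token-learn", *args, "--json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

def decision_step(get_budget, get_plan, do_task, log_task):
    """One pass of the agent loop; the CLI calls are injected so the logic is testable."""
    budget = get_budget()                 # 1. check remaining tokens & time
    if budget["time_left_minutes"] == 0:
        pass                              # 2. fresh window: go big
    elif budget["remaining_pct"] < 10:
        return None                       # 3. only trivial tasks or wait
    plan = get_plan(budget["remaining"])  # 4. get recommended tasks
    if not plan["tasks"]:
        return None
    task = plan["tasks"][0]               # 5. do the top task
    tokens_used = do_task(task)
    log_task(task, tokens_used)           # 6. log actual tokens, then repeat
    return task

# Dry run with stubbed data:
top = decision_step(
    get_budget=lambda: {"remaining": 464787, "remaining_pct": 72.2,
                        "time_left_minutes": 30},
    get_plan=lambda budget: {"tasks": [{"category": "code", "subcategory": "bug_fix"}]},
    do_task=lambda task: 22400,
    log_task=lambda task, used: None,
)
print(top)  # -> {'category': 'code', 'subcategory': 'bug_fix'}
```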
## Setup

### Install
```sh
npm install -g token-learn
```

Or from source:

```sh
cd token-learn-node && npm install && npm link
```

### Calibrate (required once, repeat when limits change)
Look at your Claude usage bar in the UI, note the percentage, then:
```sh
token-learn calibrate -p 25   # if Claude shows 25% used
```

This reads your actual JSONL session files, measures the tokens consumed, and computes your window limit. Re-calibrate anytime to adjust for Anthropic limit changes.
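The arithmetic behind calibration follows by proportion: if the measured tokens correspond to the percentage shown in the UI, scaling up gives the full window limit. This sketch is an inference from the description above, not the tool's exact implementation:

```python
def window_limit(measured_tokens: int, ui_pct: float) -> int:
    """If measured_tokens is ui_pct percent of the window, scale up to 100%."""
    return round(measured_tokens / (ui_pct / 100))

# e.g. 160839 billable tokens found in the JSONL files while Claude's UI shows 25% used
print(window_limit(160839, 25))  # -> 643356
```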
### Import existing sessions (optional)
```sh
token-learn scan --hours 24   # import last 24 hours
token-learn scan --all        # import all available sessions
```

## Commands
| Command | What it does | Key flags |
|---|---|---|
| `budget` | Remaining tokens + window timing | `--json` |
| `calibrate` | Set limit from Claude UI percentage | `-p <pct>` |
| `estimate` | Predict cost of a task | `-c <category> -s <sub>` |
| `plan` | Recommend tasks for a budget | `-b <tokens>` |
| `log` | Record a completed task | `-c <category> -s <sub> -t <tokens>` |
| `scan` | Auto-import from Claude Code sessions | `--hours <n>`, `--all` |
| `stats` | Personal usage dashboard | `--json` |
| `config` | View/set settings | `--window-hours <n>` |
## Categories
- `code`: bug_fix, refactor, feature, test, edit, review
- `email`: reply, compose
- `writing`: post, doc, message
- `social`: post
- `research`: summarize, analysis
- `devops`: ci_fix, deploy
- `other`: (anything else)

## How it works
### Window detection
Claude's 5-hour window starts at your first message and runs for 5 hours. The clock resets when you send a message after the window expires.
token-learn detects the actual window start by scanning all JSONL message timestamps and walking backwards to find the first gap longer than 5 hours. Everything after that gap is the current window. This means:
- It finds the real start time, not just "5 hours ago from now"
- It shows you exactly when the window ends
- It tells you how many minutes are left
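A minimal sketch of that gap scan, simplified from the description above (the real tool reads timestamps out of the JSONL session files):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=5)

def window_start(timestamps):
    """The current window begins after the most recent gap longer than
    the window length; with no such gap, it begins at the first message."""
    ts = sorted(timestamps)
    start = ts[0]
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev > WINDOW:
            start = cur  # gap found: everything after it is the current window
    return start

msgs = [
    datetime(2026, 3, 21, 18, 0),   # older session
    datetime(2026, 3, 21, 18, 40),
    datetime(2026, 3, 22, 1, 12),   # > 5 h gap, so the current window starts here
    datetime(2026, 3, 22, 2, 5),
]
print(window_start(msgs))  # -> 2026-03-22 01:12:00
```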
### Token counting
- Billable tokens = `input_tokens` + `output_tokens` (counted toward the rate limit)
- Cache read/create tokens are tracked but NOT counted (heavily discounted by Anthropic)
- Thinking tokens are included in `output_tokens` (Claude doesn't report them separately)
- Messages are deduplicated by ID (no double-counting from streaming or subagents)
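These rules amount to a dedup-and-sum pass over the session messages. A sketch; the field names are illustrative and the real JSONL schema differs:

```python
def billable_total(messages):
    """Sum input + output tokens, deduplicating by message ID; cache tokens
    are excluded from the billable total."""
    seen = set()
    total = 0
    for m in messages:
        if m["id"] in seen:
            continue  # streaming/subagent duplicate, already counted
        seen.add(m["id"])
        total += m["input_tokens"] + m["output_tokens"]
    return total

msgs = [
    {"id": "a", "input_tokens": 100, "output_tokens": 900, "cache_read": 5000},
    {"id": "a", "input_tokens": 100, "output_tokens": 900, "cache_read": 5000},  # dup
    {"id": "b", "input_tokens": 50, "output_tokens": 450, "cache_read": 0},
]
print(billable_total(msgs))  # -> 1500
```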
### Estimation
With fewer than 5 logged tasks of a given type, estimates fall back to hardcoded defaults (confidence 0.3). With 5-30 tasks, personal data is blended with the defaults (confidence 0.4-0.8). With 30+ tasks, estimates are based almost entirely on your personal data (confidence 0.8-0.95).
Variance buffers added to estimates:
- Low variance (email replies): +15%
- Medium variance (docs, posts): +35%
- High variance (debugging, refactoring): +60%
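Consistent with the estimate example earlier (median 18000, recommended budget 28800, high variance), the buffer appears to sit on top of the median estimate. A sketch under that assumption:

```python
# Variance buffers from the list above.
BUFFER = {"low": 0.15, "medium": 0.35, "high": 0.60}

def recommended_budget(median_estimate: int, variance: str) -> int:
    """Apply the variance buffer on top of the median estimate."""
    return round(median_estimate * (1 + BUFFER[variance]))

# Matches the estimate example: median 18000, high variance
print(recommended_budget(18000, "high"))  # -> 28800
```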
### Storage
- Config: `~/.token-learn/config.json`
- History: `~/.token-learn/history.json`
- Source data: reads `~/.claude/projects/**/*.jsonl` (Claude Code's session files)
- No network calls. Everything is local.
- 1 npm dependency (commander). No native modules.
## CLAUDE.md integration
Add this to your project or global `~/.claude/CLAUDE.md`:

```markdown
## Token Budget Awareness

Before starting significant work, check your token budget:

    token-learn budget --json

Read the response:

- remaining: tokens left in this window
- remaining_pct: percentage of budget left
- time_left_minutes: when window resets (0 = next message starts fresh window)

If remaining_pct < 30, prefer small predictable tasks (email, short edits).
If remaining_pct < 10, only do trivial tasks or wait for window reset.
If time_left_minutes == 0, window is expired — next message starts fresh.

Before a task, estimate the cost:

    token-learn estimate -c <category> -s <subcategory> --json

Compare estimated_tokens against remaining. Only proceed if remaining > recommended_budget.

After completing work, log the result:

    token-learn log -c <category> -s <subcategory> -t <tokens_used> --json

Categories: code, email, writing, research, social, devops, other

Subcategories:

- code: bug_fix, refactor, feature, test, edit, review
- email: reply, compose
- writing: post, doc, message
- research: summarize, analysis
- devops: ci_fix, deploy
```

## License

MIT