# token-learn

> Know what your Claude Code tokens actually cost.
Claude Max shows you a percentage bar. token-learn shows you real numbers — how many tokens you've spent, what's left, when your window resets, and what tasks fit in your remaining budget.
```sh
npm install -g token-learn
```

```
╭─ token-learn budget ───────────────────────────────────────────────╮
│ 367,979 remaining (83%)  |  75,497 used (17%)                      │
╰──── window 02:00 PM → 07:00 PM · 3h 56m left ──────────────────────╯
```

## The Problem
On Claude Max ($100-200/month), you get a 5-hour rolling token window. But:
- Claude only shows a percentage bar — no actual token counts
- You don't know when your window started or when it resets
- You can't estimate how much a task will cost before starting
- Agents have zero awareness of their own resource consumption
- Unused tokens in each window are wasted forever
## The Solution
token-learn reads Claude Code's actual session files, counts real tokens, and gives you and your agents the numbers they need to make smart decisions.
```sh
# What's my budget?
token-learn budget --json
# → { "remaining": 367979, "remaining_pct": 83, "time_left_minutes": 236 }

# How much will this task cost?
token-learn estimate -c code -s bug_fix --json
# → { "estimated_tokens": 18000, "recommended_budget": 28800 }

# What fits in my remaining budget?
token-learn plan -b 367979 --json
# → { "recommended_tasks": [...], "buffer_remaining": 52107 }
```

## Quick Start
### 1. Install

```sh
npm install -g token-learn
```

### 2. Calibrate

Open Claude, check your usage bar (e.g. "17% used, resets 7pm"), then:

```sh
token-learn calibrate -p 17 -r 7pm
```

This reads your actual JSONL session files, measures the tokens consumed, and computes your real window limit. One command, done.
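As a rough illustration (not the tool's exact implementation), the limit can be recovered from those two inputs: if the measured usage corresponds to the UI's percentage, the window limit is roughly usage divided by that fraction.

```sh
# Hypothetical back-of-the-envelope, using the example figures above:
# if 75,497 measured tokens equal the UI's "17% used", the full window
# limit is approximately used / 0.17.
used=75497                     # tokens counted from the session files
pct=17                         # the percentage shown in Claude's usage bar
limit=$(( used * 100 / pct ))  # integer shell arithmetic
echo "$limit"                  # 444100
```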
### 3. Use

```sh
token-learn budget                        # what's left?
token-learn estimate -c code -s bug_fix   # how much will this cost?
token-learn plan -b 300000                # what tasks fit?
token-learn stats                         # my patterns over time
```

## For AI Agents
Every command supports `--json`. Add this to your `CLAUDE.md` and agents automatically check their budget before working:
```markdown
## Token Budget Awareness

Before starting significant work:

    token-learn budget --json

If remaining_pct < 30, prefer small predictable tasks.
If remaining_pct < 10, stop and wait for window reset.

Before a task:

    token-learn estimate -c <category> -s <subcategory> --json

Only proceed if remaining > recommended_budget.

After completing work:

    token-learn log -c <category> -s <subcategory> -t <tokens_used> --json
```

## Agent Decision Loop
1. `token-learn budget --json` → check remaining tokens & time
2. `token-learn plan -b <remaining>` → get recommended tasks
3. Do the top task
4. `token-learn log -c <cat> -s <sub> -t <actual> --json`
5. Repeat

## Auto-inject with Hooks
Add to `~/.claude/settings.json` to automatically show the budget on every prompt:
```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "~/.claude/hooks/budget-check.sh",
        "timeout": 15
      }]
    }]
  }
}
```

The agent sees `[Token Budget] 83% remaining (367,979 tokens). 236m left.` before every response, with no need to remember to check.
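The hook script itself isn't shown above. A minimal sketch of what `~/.claude/hooks/budget-check.sh` could look like, assuming `token-learn budget --json` emits the fields shown earlier (`remaining`, `remaining_pct`, `time_left_minutes`):

```sh
#!/bin/sh
# Hypothetical budget-check.sh: format token-learn's JSON budget as the
# one-line banner injected into each prompt. Field extraction uses sed so
# the script needs nothing beyond POSIX tools.

format_budget() {
  # $1: JSON like {"remaining":367979,"remaining_pct":83,"time_left_minutes":236}
  tok=$(printf '%s' "$1"  | sed -n 's/.*"remaining":\([0-9]*\).*/\1/p')
  pct=$(printf '%s' "$1"  | sed -n 's/.*"remaining_pct":\([0-9]*\).*/\1/p')
  mins=$(printf '%s' "$1" | sed -n 's/.*"time_left_minutes":\([0-9]*\).*/\1/p')
  printf '[Token Budget] %s%% remaining (%s tokens). %sm left.\n' "$pct" "$tok" "$mins"
}

# Stay silent if token-learn isn't installed, so the hook never blocks.
if command -v token-learn >/dev/null 2>&1; then
  format_budget "$(token-learn budget --json)"
fi
```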
## Commands
| Command | What it does |
|---|---|
| `token-learn budget` | Remaining tokens, window timing, breakdown |
| `token-learn calibrate -p 17 -r 7pm` | Set limit from Claude UI % and reset time |
| `token-learn estimate -c code -s bug_fix` | Predict token cost of a task |
| `token-learn plan -b 300000` | Recommend tasks that fit in budget |
| `token-learn log -c code -s edit -t 8500` | Record a completed task |
| `token-learn scan --hours 24` | Auto-import from Claude Code sessions |
| `token-learn stats` | Personal usage dashboard |
| `token-learn config` | View/set settings |
All commands support `--json` for machine-readable output.
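Because every command emits JSON, the thresholds from the CLAUDE.md guidance above (< 30, < 10) are easy to turn into a scriptable gate. A sketch assuming `jq` is installed (this helper is illustrative, not part of token-learn):

```sh
# decide BUDGET_JSON → wait | small-task | plan, mirroring the CLAUDE.md
# thresholds above.
decide() {
  pct=$(printf '%s' "$1" | jq '.remaining_pct')
  if   [ "$pct" -lt 10 ]; then echo "wait"        # window nearly exhausted
  elif [ "$pct" -lt 30 ]; then echo "small-task"  # prefer predictable work
  else                         echo "plan"        # full planning headroom
  fi
}

# decide "$(token-learn budget --json)"
```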
## How It Works
### Window Detection
Claude's 5-hour window starts with your first message and resets when it expires. When you calibrate with `-r <reset time>`, token-learn knows the exact window boundaries, not just "5 hours ago from now."
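The boundary arithmetic is straightforward. A rough sketch (assumes GNU `date`; the `7pm` reset and 5-hour length come from the calibration example above):

```sh
# Given `-r 7pm`, the window opened 5 hours before it resets.
reset=$(date -d '19:00' +%s)                  # today's reset as a Unix timestamp
start=$(( reset - 5 * 3600 ))                 # window start: 2pm
mins_left=$(( (reset - $(date +%s)) / 60 ))   # minutes until the reset
```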
### Token Counting
- Reads `~/.claude/projects/**/*.jsonl` (including subagent files)
- Billable = `input_tokens + output_tokens` (counted toward the rate limit)
- Cache tokens are tracked but not counted (heavily discounted by Anthropic)
- Thinking tokens are included in output (Claude doesn't report them separately)
- Deduplicates by message ID (handles streaming + subagents)
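As an illustration of the dedup-and-sum step, a hypothetical `jq` pipeline over a session directory. The field names here are assumptions based on the description above, not a documented schema:

```sh
# count_billable DIR: sum input + output tokens across DIR's JSONL files,
# keeping one copy of each message id (field names are assumed, see above).
count_billable() {
  cat "$1"/*.jsonl 2>/dev/null | jq -s '
    [ .[] | select(.message.usage) ]
    | unique_by(.message.id)
    | map(.message.usage.input_tokens + .message.usage.output_tokens)
    | add // 0'
}

count_billable ~/.claude/projects/my-project   # "my-project" is a placeholder
```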
### Personal Learning
As you log tasks, estimates get personalized:
- < 5 tasks: uses hardcoded defaults (confidence 0.3)
- 5-30 tasks: blends your data with defaults (confidence 0.4-0.8)
- 30+ tasks: almost entirely your personal patterns (confidence 0.9+)
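The blend step above amounts to a confidence-weighted average. A hypothetical sketch — the real weighting is internal to the tool:

```sh
# blend PERSONAL DEFAULT CONFIDENCE: hypothetical weighted average,
# estimate = c * personal + (1 - c) * default, rounded to an integer.
blend() {
  awk -v p="$1" -v d="$2" -v c="$3" 'BEGIN { printf "%.0f\n", c * p + (1 - c) * d }'
}

blend 15000 18000 0.4   # → 16800: low confidence leans toward the default
```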
### Categories

- `code`: bug_fix, refactor, feature, test, edit, review
- `email`: reply, compose
- `writing`: post, doc, message
- `social`: post
- `research`: summarize, analysis
- `devops`: ci_fix, deploy

## Storage
Everything is local. No network calls. No tracking.
| What | Where |
|---|---|
| Config | `~/.token-learn/config.json` |
| History | `~/.token-learn/history.json` |
| Source data | `~/.claude/projects/**/*.jsonl` (read-only) |
14.5 KB package size. 1 dependency (commander). Zero native modules.
## License
MIT