Stop burning tokens on tool chatter.
A surgical hook + analysis toolkit for Claude Code, Codex CLI, Cursor, Windsurf, Cline, and Gemini that transparently trims bloated MCP responses, clamps oversized file reads, bounds verbose shell commands, dedupes repeat calls, compresses agent memory files, and benchmarks waste.
🩻 Why
Long agent sessions burn 100k+ tokens on tool chatter before the agent does any real work. Compaction kicks in too late and is lossy.
Tokenomy plugs the holes the hook contracts let you close — zero proxy, no monkey-patching:
- Live trimming — MCP/Bash/Read/Write hooks shrink responses the moment they fire
- Code-graph MCP — focused context retrieval instead of brute-force Read sweeps
- Agent nudges — redirect waste before it happens (OSS alternatives, prompt classifier)
- Golem — terse-output mode (the one surface other features leave alone — assistant output is billed 5× input on Sonnet)
- Raven — Claude/Codex cross-agent handoff + review bridge
- Observability — savings.jsonl, tokenomy report, tokenomy analyze, statusline, doctor
Each live trim appends a row to ~/.tokenomy/savings.jsonl with measured bytes-in / bytes-out. Run tokenomy report for a digest, or tokenomy analyze to benchmark historical waste from session transcripts.
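The savings log can be post-processed with any JSONL-aware tool. A minimal sketch in Python — note the field names (`tool`, `bytes_in`, `bytes_out`) are assumptions inferred from the "measured bytes-in / bytes-out" description above; check a real row of your `~/.tokenomy/savings.jsonl` for the actual schema before relying on this:

```python
import json
from collections import Counter
from pathlib import Path

# Field names here (tool, bytes_in, bytes_out) are illustrative,
# inferred from the README — inspect a real row for the true schema.
def summarize(path: Path) -> dict:
    saved = 0
    by_tool = Counter()
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        row = json.loads(line)
        delta = row["bytes_in"] - row["bytes_out"]
        saved += delta
        by_tool[row["tool"]] += delta
    return {"bytes_saved": saved, "by_tool": dict(by_tool)}
```

For anything beyond a quick sanity check, prefer the built-in `tokenomy report` digest.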
⚡ Quickstart
npm install -g tokenomy
tokenomy init --graph-path "$PWD" # patches Claude/Codex/Cursor/Windsurf/Cline/Gemini configs + builds the graph
tokenomy doctor # all checks passing
# restart Claude Code — done
Codex CLI, Cursor, Windsurf, Cline, and Gemini are auto-detected and registered when each is on PATH. Force one target with --agent <name>; inspect first with tokenomy init --list-agents.
Pre-1.0. Breaking changes may land before 1.0.0 (see CHANGELOG). Pin: npm install -g tokenomy@0.1.2. Upgrade: tokenomy update.
Zero-touch interactive install via Claude Code
Paste this into Claude Code. The agent installs Tokenomy with the graph MCP first, then walks you through each opt-in feature one at a time so you can pick what fits your workflow.
Install tokenomy for me, then walk me through the optional features one by one.
Step 1 — Install the core:
1. Run: npm install -g tokenomy
2. Run: tokenomy init --graph-path "$PWD"
(patches Claude Code config, registers the tokenomy-graph MCP server
for this repo, and builds the code graph)
3. Run: tokenomy doctor
Step 2 — Then ask me about each optional feature, one at a time.
For each feature: give me a 2-3 sentence explanation of what it does,
what it costs (tokens / latency / friction), and a yes/no question.
Wait for my answer before moving on. Apply the install command for
features I say yes to. Skip features I say no to.
Walk through these features in order:
a) Golem — terse-output mode plugin. Five modes from `lite`
(drop hedging) up to `recon` (zero banter, info-density only).
Cuts assistant output tokens 20-65 % depending on mode. Ask me
which mode I want, or whether to skip entirely.
Install: tokenomy golem enable --mode=<chosen>
b) Raven bridge — Claude Code + Codex CLI cross-agent handoff +
review packets. Useful only if I have Codex CLI on PATH and I
actually do code review with both agents. Skip if I only use
Claude.
Install: tokenomy raven enable
c) OSS-alternatives nudge (Write intercept) — when I'm about to
create a new utility-ish file > 500 B, Tokenomy reminds me to
check `find_oss_alternatives` first. Catches reinventing-the-wheel
work. Default ON; ask if I want it off.
Disable: tokenomy config set nudge.write_intercept.enabled false
d) Prompt-classifier nudge — fires once per turn when I prompt
"any existing library for X" / "alternative to Y" / "instead of
building Z". Conservative since 0.1.2 (was overly broad before).
Default ON; ask if I want it off.
Disable: tokenomy config set nudge.prompt_classifier.enabled false
e) Pre-call redact — extends secret redaction to PreToolUse on
Bash/Write/Edit. Catches API keys, tokens, JWTs, PEM blocks
before they hit the assistant. Default OFF; ask if I want it on
(recommended if I work with credentials).
Enable: tokenomy config set redact.pre_tool_use true
e.5) Kratos — security shield. Continuously scans my prompts for
prompt-injection ("ignore previous instructions"), data-exfil
requests, secrets I accidentally pasted, hidden zero-width chars.
Also runs `tokenomy kratos scan` to audit installed MCP servers
for read-source ↔ write-sink combos that form an exfil route.
Default OFF. Strongly recommended if I have any third-party MCP
servers (Slack, Gmail, Atlassian, etc.) installed. Ask me yes/no.
Enable: tokenomy kratos enable
One-shot audit: tokenomy kratos scan
f) Budget pre-flight — advisory PreToolUse rule that warns when a
call would push my session past a token cap. Never blocks.
Default OFF; ask if I want it on with what cap.
Enable: tokenomy config set budget.session_cap_tokens 200000
g) Statusline — one-line live savings counter. Shows version,
Golem mode, graph freshness, today's trims, and a `↑` after the
version when an update is available. Already wired by `init`;
just confirm it's working and move on.
Step 3 — Final check:
1. Run: tokenomy doctor
2. Tell me to fully quit Claude Code (Cmd+Q) and reopen so the new
hooks + MCP server + SessionStart preamble load cleanly.
🧭 Features
Each feature has its own README. Main README stays small.
| Area | What | Docs |
|---|---|---|
| Live token trimming | MCP / Bash / Read / Write / Edit hooks — multi-stage pipeline, dedup, redact, stacktrace collapse, schema profiles, shape-trim, byte-trim | docs/features/live-trimming.md |
| Code-graph MCP | tokenomy-graph stdio server — get_minimal_context, get_impact_radius, get_review_context, find_usages, find_oss_alternatives | docs/features/code-graph.md |
| Agent nudges | OSS-alternatives Write nudge, prompt-classifier, repo-search relevance gate | docs/features/agent-nudges.md |
| Golem | Terse output mode — lite / full / ultra / grunt / recon / auto. Safety-gated for code, commands, warnings, numbers | docs/features/golem.md |
| Raven bridge | Claude-primary + Codex-reviewer handoff packets, deterministic finding compare, PR-readiness verdict | docs/features/raven.md |
| Kratos (0.1.2+) | Security shield — prompt-injection / data-exfil / secret-in-prompt detector + cross-MCP exfil-pair scanner | docs/features/kratos.md |
| Feedback (0.1.2+) | tokenomy feedback "..." files a GitHub issue (via gh or browser fallback). No backend service. | docs/features/feedback.md |
| Observability | savings.jsonl, report, analyze, diff, learn, budget, bench, status-line, doctor, update | docs/features/observability.md |
| Compress | tokenomy compress — agent rule file cleanup (CLAUDE.md, AGENTS.md, .cursor/rules) | docs/features/compress.md |
| Cross-agent install | Per-agent adapters for Claude / Codex / Cursor / Windsurf / Cline / Gemini | docs/features/cross-agent.md |
| Configure | ~/.tokenomy/config.json — full reference + common tweaks | docs/features/configure.md |
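The config toggles mentioned above map onto `~/.tokenomy/config.json`. A hypothetical sketch of what the relevant keys might look like — the nesting is an assumption inferred from the dotted `tokenomy config set` paths used in this README, so treat docs/features/configure.md as the authoritative reference:

```json
{
  "nudge": {
    "write_intercept": { "enabled": true },
    "prompt_classifier": { "enabled": true }
  },
  "redact": { "pre_tool_use": false },
  "budget": { "session_cap_tokens": 200000 }
}
```

Editing this file by hand and running `tokenomy config set` should be equivalent; the CLI form is safer because `tokenomy doctor` verifies the file still parses.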
💸 Real savings from one dogfood session
Tokenomy run on its own repo in a fresh Claude Code session (Bash verbose commands, Read on a large file, five graph MCP queries):
Window: 2026-04-20T01:32:56Z → 2026-04-20T01:55:10Z (~22 minutes)
Total events: 14
Bytes trimmed: 1,374,113 → 260,000 (−1,114,113)
Tokens saved (est): 285,000
~USD saved: $0.8550
Top tools by tokens saved
Bash 11 calls 247,500 tok (bash-bound:git-log / :find / :ls-recursive / :ps)
Read 3 calls 37,500 tok (read-clamp on 89 KB package-lock.json + 2 others)
~20k tokens saved per event. A dev spending 4 agent-hours a day reclaims ~$10/week. Reproduce via CONTRIBUTING.md's dogfood playbook.
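As a sanity check, the figures above hang together under two common heuristics — roughly 4 bytes per token and Sonnet-class input pricing of $3 per million tokens. Both are assumptions for this back-of-envelope, not Tokenomy's actual estimator:

```python
# Back-of-envelope check of the dogfood-session numbers.
# Assumptions (not Tokenomy's estimator): ~4 bytes/token,
# $3 per million input tokens.
bytes_saved = 1_374_113 - 260_000       # 1,114,113 bytes trimmed
tokens_rough = bytes_saved / 4          # ≈ 278k, near the reported 285k
usd_saved = 285_000 * 3 / 1_000_000     # $0.855, the reported figure
per_event = 285_000 / 14                # ≈ 20.4k tokens per event
```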
🩺 tokenomy doctor
✓ Node ≥ 20
✓ ~/.claude/settings.json parses
✓ Hook entries present (PostToolUse + PreToolUse)
✓ PreToolUse matcher covers Read + Bash + Write — Read|Bash|Write
✓ Hook binary exists + executable
✓ Smoke spawn hook (empty mcp call) — exit=0 elapsed=74ms
✓ ~/.tokenomy/config.json parses
✓ Log directory writable
✓ Manifest drift — clean
✓ Graph MCP registration — tokenomy-graph configured in ~/.claude.json
✓ Statusline registered — tokenomy status-line
✓ Agent detection — claude-code, codex, cursor
✓ Hook perf budget — p50=5ms p95=12ms max=14ms (n=30, budget 50ms)
16/16 checks passed
Every check has an actionable remediation hint on failure. Run tokenomy doctor --fix for routine repair.
🛑 Uninstall
tokenomy uninstall --purge
Removes both hook entries from ~/.claude/settings.json (matched by absolute command path) and deletes ~/.tokenomy/. The original settings.json remains backed up at ~/.claude/settings.json.tokenomy-bak-<timestamp>.
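Matching by absolute command path (rather than by marker comments) can be sketched as below. The settings shape is hypothetical and simplified — the real `~/.claude/settings.json` hook schema is richer than this:

```python
# Hypothetical sketch of path-matched hook removal. The real
# settings.json hook schema is more complex than this flat shape.
def remove_hooks(settings: dict, hook_path: str) -> dict:
    """Drop hook entries whose command is exactly the given absolute path."""
    hooks = settings.get("hooks", {})
    for event in ("PreToolUse", "PostToolUse"):
        hooks[event] = [
            entry for entry in hooks.get(event, [])
            if entry.get("command") != hook_path
        ]
    settings["hooks"] = hooks
    return settings
```

Keying on the absolute path means uninstall never depends on markers surviving a user's manual edits to the file.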
🧭 Roadmap
- Phase 1. PostToolUse MCP trim + PreToolUse Read clamp + CLI + 12-check doctor + savings log (Claude Code).
- Phase 2. tokenomy analyze — walks Claude Code + Codex CLI transcripts, replays rules with a real tokenizer.
- Phase 3. Local code-graph MCP server: tokenomy-graph stdio server. Works with both Claude Code and Codex CLI.
- Phase 3.5. Multi-stage PostToolUse pipeline: dedup, secret redact, stacktrace collapse, schema-aware profiles.
- Phase 4. PreToolUse Bash input-bounder.
- Phase 4.5. OSS-alternatives-first nudge — find_oss_alternatives MCP tool + Write context nudge.
- Phase 5. Polish — Golem output mode, statusline, prompt-classifier, compress, bench, cross-agent installers.
- Phase 5.5. Codex hook foothold — user-scoped SessionStart + UserPromptSubmit hooks for Golem and prompt-classifier nudges.
- Phase 6 (0.1.x). Raven bridge, Kratos security shield, statusline update marker, Raven in report/analyze, Golem recon mode, tokenomy feedback command.
- Phase 7. Language breadth — Python parser plugin, richer benchmark fixtures, npm publish at 1.0.
- Phase 8. Agent operating layer — rule-pack generator, compaction-time memory hygiene, workflow MCP tools, session ledgers, team-ready reports. See docs/NEXT_FEATURES.md.
🤝 Contribute
Dependency-light (zero runtime deps in the hot hook path; @modelcontextprotocol/sdk loaded dynamically; js-tiktoken optional peer for accurate analyze). Test-first.
git clone https://github.com/RahulDhiman93/Tokenomy && cd Tokenomy
npm install && npm run build
npm test # all green
npm run coverage # c8 → coverage/lcov.info + HTML
npm run typecheck # tsc --noEmit
npm link # point `tokenomy` at your local build
tokenomy doctor
Guiding principles. (1) Fail-open always — a broken hook is worse than no hook; never exit 2. (2) Schema invariants over trust — outputs never fabricate keys, flip types, or shrink arrays. (3) Path-match over markers — uninstall identifies entries by absolute command path. (4) Measure before bragging — no "X % savings" claims without Phase 2 benchmark data. (5) Small + legible — a dozen saved lines aren't worth a 200 KB install.
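Guiding principle (1), fail-open, reduces to a simple shape: any error in the trim path returns the payload untouched and the process exits 0. A minimal sketch — this is not Tokenomy's actual hook protocol, just an illustration of the invariant:

```python
import json

# Illustrative only — not Tokenomy's real hook protocol. The invariant:
# a failure anywhere in the trim path must pass the payload through
# unchanged, never break the agent's tool call, never exit non-zero.
def run_hook(raw: str, trim) -> str:
    try:
        payload = json.loads(raw)
        return json.dumps(trim(payload))
    except Exception:
        return raw  # fail-open: emit input verbatim
```

The same invariant explains the doctor's "smoke spawn" check above: an empty call through the hook must round-trip with exit 0.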
Before opening a PR. npm test green. Touched a rule → add passthrough and trim tests. Touched init/uninstall → extend the round-trip integration test. Added a config field → document it in docs/features/configure.md.
📜 License
MIT — see LICENSE.
Save tokens. Save money.