Package Exports
- promptreports-cli
- promptreports-cli/dist/tools/cli/src/cli.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (promptreports-cli) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
promptreports-cli
The local CLI for PromptReports.ai — AI Token Operations for engineering teams.
One terminal command finds the AI spend your proxy can't see, ships the PR to fix it, and signs the audit bundle. Scans Claude Code session JSONLs, CLAUDE.md, skills, stop_reason, and 22+ provider APIs (Cursor, Copilot, Codex, OpenAI, Anthropic, OpenRouter, Stripe, Vercel, Sentry, GitHub, …) — the editor- and surface-layer signals that proxy tools (Helicone, Datadog) structurally cannot read — and pushes per-turn telemetry to your PromptReports.ai dashboards where 27 calibrated detectors find waste and draft fix PRs.
npx promptreports-cli
Zero external dependencies. Runs on your machine. Nothing leaves your computer unless you explicitly push.
- Platform: https://www.promptreports.ai/
- Marketplace (VS Code companion): https://marketplace.visualstudio.com/items?itemName=promptreports.promptreports
Get started in 60 seconds
The first three commands need no signup. They run fully on your machine.
# 1. See what the CLI is and what to type next (welcome screen)
npx promptreports-cli
# 2. Inventory your AI stack (sessions, providers, costs) — no auth, no upload
npx promptreports-cli summary
# 3. Run a local Token Forensics scan — 5 portable detectors, no auth
npx promptreports-cli forensics
When you want findings + fix-PRs in your dashboard, sign up:
# 4. Sign up / sign in via device-code flow
npx promptreports-cli login
login opens your browser to https://www.promptreports.ai/cli-auth with an 8-character code pre-filled. If you don't have an account yet, the page lets you sign up with Google or email first, then approve the CLI session. Credentials land in ~/.promptreports/config.json (chmod 600) — no copy-paste of API keys.
# 5. Push your local telemetry to your dashboards
npx promptreports-cli push
# 6. Open the dashboards
# https://www.promptreports.ai/swarm/token-forensics
That's it. From here every other command in the reference below is opt-in.
Why this exists
Mid-to-large engineering teams running AI at scale ($50K–$2M/mo across multiple vendors — Claude Code, Cursor, Copilot, Codex, MCP servers, npm packages, customer APIs) have:
- No way to attribute waste. Helicone proxies one vendor; Datadog ships generic LLM telemetry; nobody reads the editor layer (CLAUDE.md, skills, sessions, stop_reason, MCP tool invocations) where 60–80% of the waste and risk lives.
- No auditor-grade evidence trail. Dashboards are not evidence. Auditors need hash-chained per-item provenance + signed bundles they can verify offline.
- No closed-loop fix path. Findings sit in Slack; nobody owns the PR.
PromptReports.ai is the AI Token Operations hub: secure, optimize, forensic, operations. The CLI is the local read layer; the dashboards are the act layer. Cost is the wedge — security, governance, and per-team unit economics grow on top of the same data plane.
How the CLI feeds the PromptReports.ai dashboards
| CLI command | Dashboard surface | What the dashboard does |
|---|---|---|
| `summary` + `push` | Token Forensics (`/swarm/token-forensics`) | Per-turn token attribution, session replay, model-cost breakdown, 27 calibrated detector findings |
| `forensics` (with `--auth`) | Token Forensics — full detector pass | 5 detectors locally; `--auth` adds 22 server-side detectors against your full ingest history |
| `push --editor cursor\|copilot\|codex\|aider\|cline` | Token Forensics — multi-editor view | Same dashboard, scoped per editor; portability gate on the detector matrix |
| `providers` + `push` | Provider Cost Center (`/admin?tab=providers`) | Multi-provider burn-rate over time, anomaly detection, monthly projections |
| `costs --push-to-app` | Cost Attribution (inside Token Forensics) | Cost per feature / per model / per git commit |
| `cost-per-quality` | Routing Recommendations | Per-prompt judge-delta scoring; surfaces routes where a cheaper model performs as well as the expensive one |
| `recommend` | Routing Recommendations queue | Top open RoutingRecommendation rows ≥$1/mo with confidence tiers |
| `rollup` | PR Cost Decoration | Generates the same per-PR cost rollup the GitHub App uses, locally |
| `context --ghosts` (with `--json` push) | Ghost-Token Scanner | Trends silent waste over time, alerts when ghosts spike |
| `audit` + `audit claude-md` | Token Forensics Audit | CLAUDE.md slim-down, skill review, env hygiene |
| `audit-stack` | Codebase Audit | Cross-system audit (env + secrets + dependencies + schema drift) in one pass |
| `health` | SRE Department | Post-deploy health checks, infra reliability scoring |
| `git-intel` | Engineering Velocity | Hotspots, churn, ownership, auto-changelogs |
| `dead-code` / `deps` / `schema` | Codebase Audit | Drift detection, security findings, dependency hygiene |
| `prompts audit` | Prompt Drift | Catches prompt regressions across releases |
| `campaign launch` | Demand Gen Department (`/campaigns`) | Multi-channel campaign generation with brand-playbook gating |
| `sessions` / `logs` | Unified Session Replay | Full session history with cross-tool log correlation |
| `generate` / `reports list` | Research & Reports (`/reports`) | Full SOAR-V verified-research pipeline from terminal |
| `publish` / `share` | Public Viewer / Signed Share | `/r/<id>` public + `/s/<token>` private with referral attribution |
| `status` / `telemetry` | Account & CLI Metrics (`/admin/cli-metrics`) | Quota + plan; DAU/WAU/MAU, activation funnel |
| `export-skill-pack` / `import-skill-pack` | Skill Pack Marketplace | Bundle and share `.claude/skills/` directories across teams |
Free tier covers all local scans. Pushing to dashboards is free up to a soft limit; team dashboards, alerts, and auto-PR generation are pro features.
What's new in 1.8
Push reliability — 1.8.1 and 1.8.2
The push command got hardened against transient timeouts that hit teams pushing thousands of new turns:
- Per-batch timeout: 15s → 60s. Server-side per-turn enrichment (fingerprint + sensitive-data regex scan × 500 turns) can legitimately exceed the old budget on content-heavy batches. Override with `PROMPTREPORTS_TIMEOUT_MS=120000` for very slow networks.
- Retry with backoff. Each batch retries up to 3 times (0s, 2s, 6s) on `AbortError` and 5xx responses before giving up.
- Batch-level isolation. A single batch's exhausted retries no longer kill the whole push — the loop continues with the next batch.
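The retry behavior above can be sketched as a small helper. This is an illustrative reconstruction from the documented schedule (0s, 2s, 6s; retry on AbortError and 5xx), not the CLI's actual source; the function name and signature are hypothetical.

```typescript
// Hypothetical sketch, not the CLI's source. Only the 0s/2s/6s schedule and
// the AbortError / 5xx retry conditions come from the changelog above.
const BACKOFF_MS = [0, 2000, 6000];

async function pushBatchWithRetry<T extends { status: number }>(
  sendBatch: () => Promise<T>,
  delaysMs: number[] = BACKOFF_MS,
): Promise<T> {
  let lastError: unknown;
  for (const delay of delaysMs) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay));
    try {
      const res = await sendBatch();
      if (res.status < 500) return res; // 2xx-4xx: done (4xx won't improve on retry)
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      // AbortError means the per-batch timeout fired; anything else is fatal.
      if ((err as Error).name !== "AbortError") throw err;
      lastError = err;
    }
  }
  throw lastError; // exhausted all attempts for this batch
}
```

Batch-level isolation then amounts to the push loop catching this final throw per batch and moving on to the next one instead of aborting the whole run.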
Multi-editor push — 1.8.0
push now ingests Cursor, Copilot, Codex, Aider, and Cline turns alongside Claude Code:
# Claude Code (default, scans ~/.claude/projects/)
npx promptreports-cli push
# Other editors — export the session JSON, pipe it in
npx promptreports-cli push --editor cursor --from-file ./cursor-export.json
npx promptreports-cli push --editor copilot --from-file ./copilot-export.json
Each turn gets tagged with its editor so the dashboard's portability matrix gates detectors correctly (some detectors only fire on Claude Code; others are editor-portable).
Full prompt + response capture — 1.8.0
push now ships full prompt and response text by default (capped at 100KB per turn). The dashboard's Submissions tab uses this for the Trace modal and ±2-turn session-history panel. Opt out per push:
npx promptreports-cli push --no-text
With --no-text, the dashboard shows the 120-char preview and a "text not captured" badge. Server-side text storage is provenance-logged behind a reveal route — same SOC2 posture either way.
Token Forensics slash command — 1.8.0
Run the same 5 portable detectors that ship inside Claude Code's /forensics skill, but from any terminal:
# Local-only — 5 detectors, no auth
npx promptreports-cli forensics
npx promptreports-cli forensics --session ~/.claude/projects/foo/sessions/abc.jsonl
# Auth mode — 22 additional server-side detectors against your full ingest history
npx promptreports-cli forensics --auth
npx promptreports-cli forensics --auth --json
Useful in IDEs that don't run Claude Code (Cursor, Codex, plain terminal).
Full command reference
Every command in 1.8.2, what it does, and when you'd reach for it.
Core (no auth required)
summary — token usage across your AI sessions (default command)
Inventory all ~/.claude/projects/*/sessions/*.jsonl files. Reports total tokens, cost (priced via the bundled model-pricing.json), and the top sessions by burn.
npx promptreports-cli summary
npx promptreports-cli summary --days 30 # lookback window
npx promptreports-cli summary --json       # machine-readable
Why: zero-config sanity check on your weekly Claude Code spend. If the number surprises you, run forensics next.
forensics — Token Forensics scan (new in 1.8)
Five portable detectors against your most-recent CC session (or --session <path>). Mirrors the /forensics slash command shipped as a Claude Code skill.
npx promptreports-cli forensics
npx promptreports-cli forensics --session ~/.claude/projects/foo/sessions/abc.jsonl
npx promptreports-cli forensics --auth # adds 22 server-side detectors
npx promptreports-cli forensics --json
Why: instant local audit without leaving the terminal. --auth upgrades to the full 27-detector pass against your ingest history.
providers — multi-provider cost scan
Hits 22+ provider APIs in parallel: OpenAI, Anthropic, OpenRouter, Cursor, Copilot, Vercel, Stripe, Railway, Sentry, GitHub, PostHog, Supabase, Pinecone, Tavily, and more. Reports current spend, monthly projection, and anomalies.
npx promptreports-cli providers
npx promptreports-cli providers --days 30
npx promptreports-cli providers --json | jq '.providers[] | select(.monthly_projection > 50)'
Why: catches anomalies your provider's own dashboard buries. Set keys via env vars (the CLI reads OPENAI_API_KEY, STRIPE_SECRET_KEY, etc. — only what you've set; nothing required).
doctor — environment health check
Node version, env-var presence, file permissions, network reachability to PromptReports.ai.
npx promptreports-cli doctor
Why: first thing to run if anything else is misbehaving.
audit — Claude Code expert audit
Audits .claude/, skills, env hygiene, and CLAUDE.md size against best-practice checklists.
npx promptreports-cli audit
npx promptreports-cli audit claude-md # deep slim-down analysis
npx promptreports-cli audit claude-md --json
audit claude-md detects:
- Duplicated phrases (3+ occurrences)
- Long code examples (>30 lines that could be signatures)
- Dead file references
- Redundant sections (already covered in LESSONS.md, TECH_STACK.md, etc.)
- Filler words ("please", "basically", "it is important to note")
Output includes projected token count, estimated savings per session, prioritized fix list.
Why: every line in CLAUDE.md taxes every turn. Operators report 20–40% prompt-token reduction after one slim-down pass.
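As a rough illustration of the kind of check audit claude-md performs, a filler-word counter could look like the sketch below. The phrase list comes from the bullets above; the function name and everything else about the implementation are assumptions, not the CLI's real code.

```typescript
// Illustrative only — not the CLI's implementation. Phrases are from the
// documented filler-word list; name and return shape are hypothetical.
const FILLER = ["please", "basically", "it is important to note"];

function findFiller(claudeMd: string): Record<string, number> {
  const hits: Record<string, number> = {};
  const lower = claudeMd.toLowerCase();
  for (const phrase of FILLER) {
    // Count non-overlapping occurrences of each filler phrase.
    const count = lower.split(phrase).length - 1;
    if (count > 0) hits[phrase] = count;
  }
  return hits;
}
```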
audit-stack — full-stack audit (new in 1.8)
Cross-system audit in one pass: secrets in .env*, dependency CVEs, Prisma schema drift vs. live DB, dead routes, stale skills, MCP server health.
npx promptreports-cli audit-stack
npx promptreports-cli audit-stack --json
Why: before a deploy, before an external audit, or any time you want one terminal command that covers what deps + dead-code + schema + audit would each say individually.
filter -- <cmd> — compress subprocess output
Wraps any command. Stdout/stderr gets deduplicated, stack-frame-collapsed, and tail-preserved before your agent reads it. Errors and warnings always pass through. Exit code preserved.
npx promptreports-cli filter -- npm test
npx promptreports-cli filter -- playwright test --reporter=list
npx promptreports-cli filter --keep-last 50 -- npm run ci
What it actually does:
- Strips ANSI color codes
- Collapses N identical lines into [repeated N-1 more times]
- Preserves first 2 + last frame of long stack traces
- Truncates uniform-prefix blocks (hundreds of passing-test lines) to head + [... N similar lines elided ...]
- Always keeps lines matching error/fail/warn/exception/traceback
- Always keeps the last 20 lines (configurable via --keep-last)
- Prints a reduction summary
Flags: --keep-last N (default 20), --max-repeat N (default 2), --truncate-after N (default 50).
Why: giving an AI agent the raw output of npm test burns 4–10× more tokens than it needs. filter keeps signal, drops slop.
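The repeated-line collapse, for instance, can be approximated as below. This is a sketch only: it assumes the real pass works line by line and uses the documented --max-repeat default of 2; the function name is hypothetical and ANSI stripping, stack-trace handling, and the other passes are omitted.

```typescript
// Sketch of one filter pass (repeated-line collapse), not the real code.
// Runs longer than maxRepeat collapse to one copy + a summary marker.
function collapseRepeats(lines: string[], maxRepeat = 2): string[] {
  const out: string[] = [];
  let i = 0;
  while (i < lines.length) {
    let j = i;
    while (j < lines.length && lines[j] === lines[i]) j++; // measure the run
    const run = j - i;
    if (run > maxRepeat) {
      out.push(lines[i], `[repeated ${run - 1} more times]`);
    } else {
      for (let k = 0; k < run; k++) out.push(lines[i]);
    }
    i = j;
  }
  return out;
}
```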
context — context-window optimizer
Finds bloat patterns in your sessions.
npx promptreports-cli context
npx promptreports-cli context --ghosts # silent waste scanner
npx promptreports-cli context --ghosts --days 30
--ghosts detects:
- Duplicate tool results (same large file read 3+ times)
- Oversized tool schemas with inlined payloads
- Post-compaction residue
- Skills loaded but never invoked (~300 tokens each)
- Bloated CLAUDE.md (>2000 tokens flags audit claude-md)
Why: ghosts are the single biggest source of wasted tokens that aggregate dashboards miss.
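The first ghost pattern (the same large tool result appearing repeatedly) could be detected along these lines. A hypothetical sketch: the real detector's session format, thresholds, and name are not documented here, so the 3-repeat and 1 KB cutoffs below are assumptions.

```typescript
// Hypothetical sketch of the duplicate-tool-result check. minRepeats and
// minBytes are assumed thresholds, not the detector's real calibration.
function duplicateToolResults(
  results: string[],
  minRepeats = 3,
  minBytes = 1024,
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of results) counts.set(r, (counts.get(r) ?? 0) + 1);
  const ghosts = new Map<string, number>();
  for (const [content, n] of counts) {
    // Flag only large payloads read repeatedly — small repeats are cheap.
    if (n >= minRepeats && content.length >= minBytes) ghosts.set(content, n);
  }
  return ghosts;
}
```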
costs — per-feature / per-model / per-commit cost attribution
npx promptreports-cli costs
npx promptreports-cli costs --days 30 --push-to-app
Why: answers "which feature is burning the most tokens?" before someone asks in standup.
cost-per-quality — routing-recommendation analysis (new in 1.8)
Pairs each prompt with judge-delta quality scoring, then surfaces routes where a cheaper model performs as well as the expensive one.
npx promptreports-cli cost-per-quality
npx promptreports-cli cost-per-quality --json
Why: the input to the dashboard's RoutingRecommendation queue. If you want to see what could be re-routed before shipping the change to production, run this first.
recommend — open routing recommendations
Lists open RoutingRecommendation rows with savings, confidence tier, and rationale.
npx promptreports-cli recommend
npx promptreports-cli recommend --json
Why: review queue from the terminal. The accept-and-swap action lives in the dashboard; this command is read-only.
models — model router suggestions
Suggest cheaper models per workload using the same heuristics as the dashboard's model-overkill detector.
npx promptreports-cli models
prompts audit — prompt drift detector
Catches prompt regressions across releases.
npx promptreports-cli prompts audit
Auth & Account (no signup required for the CLI itself; you sign up at promptreports.ai via login)
login / logout / whoami
Device-code auth. login opens /cli-auth with an 8-char code pre-filled. After approve, credentials land in ~/.promptreports/config.json (chmod 600). If you don't have an account yet, the page lets you sign up with Google or email first.
npx promptreports-cli login
npx promptreports-cli whoami
npx promptreports-cli logout
Token resolver precedence: --token flag → PROMPTREPORTS_TOKEN env → PROMPTREPORTS_BRAIN_TOKEN env → config file. Existing env-var workflows keep working.
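That precedence chain is simple enough to state as code. A minimal sketch assuming nullish-coalescing semantics (first non-empty source wins); resolveToken and the config parameter are hypothetical names, not the CLI's API.

```typescript
// Sketch of the documented precedence: --token flag → PROMPTREPORTS_TOKEN →
// PROMPTREPORTS_BRAIN_TOKEN → config file. Not the CLI's actual code.
function resolveToken(
  flagToken: string | undefined,
  env: Record<string, string | undefined>,
  configToken: string | undefined,
): string | undefined {
  return (
    flagToken ??
    env.PROMPTREPORTS_TOKEN ??
    env.PROMPTREPORTS_BRAIN_TOKEN ??
    configToken
  );
}
```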
status — account dashboard
Plan, monthly quota with progress bar, remaining reports, upgrade URL when at limit.
npx promptreports-cli status
Why: one-screen sanity check before generating a report or running a heavy push.
telemetry on|off|status — opt-in usage reporting
Prompted once on first command in an interactive TTY. Non-TTY / CI defaults to OFF. Opting out hard-deletes every past event for your install ID server-side. Fire-and-forget 2-second timeout; never blocks command exit.
npx promptreports-cli telemetry on
npx promptreports-cli telemetry status
npx promptreports-cli telemetry off     # hard-deletes server-side history
Push & Ingest
push — ship telemetry to PromptReports.ai
# Default — Claude Code, all sessions in window
npx promptreports-cli push
# Lookback override
npx promptreports-cli push --days 30
# Backfill — bypass delta checkpoint, force re-push everything
npx promptreports-cli push --backfill
# Other editors
npx promptreports-cli push --editor cursor --from-file ./cursor-export.json
npx promptreports-cli push --editor copilot --from-file ./copilot-export.json
npx promptreports-cli push --editor codex --from-file ./codex-export.json
# Suppress full prompt + response text (privacy mode)
npx promptreports-cli push --no-text
Why: push is what turns your local data into dashboard findings. Without push, the platform has nothing to detect against. Use --backfill once after upgrading to refresh older pre-enrichment data.
push-non-coding — push non-coding agent telemetry (new in 1.8)
Subtype of push for non-coding agents (research, content, ops bots). Routes to a separate dashboard scope so coding-agent detectors don't fire on them.
npx promptreports-cli push-non-coding --from-file ./agent-export.json
rollup — local PR cost rollup (new in 1.8)
Generates the same per-PR cost rollup the GitHub App posts as a comment, but locally — useful for testing changes to the rollup logic without round-tripping through GitHub.
npx promptreports-cli rollup --pr 412
npx promptreports-cli rollup --pr 412 --json
Reports
generate "<topic>" — run a report from the terminal
npx promptreports-cli generate "Q2 AI ops competitive landscape"
Hits /api/reports/generate with PAT auth, respects quota, streams progress to stdout, prints report ID + workspace URL on completion.
Why: verify a research topic without opening the browser. Same SOAR-V verification pipeline as the web UI.
reports list — your recent reports
npx promptreports-cli reports list
npx promptreports-cli reports list --json
publish <reportId> / unpublish <reportId> — flip visibility
Toggles Report.visibility between PRIVATE and PUBLIC. publish prints the /r/<id> short URL. PRIVATE reports 404 for unauthenticated viewers (no existence leak). Free-tier viewers see an upgrade watermark; Pro+ viewers and report owners get the Export PDF button.
npx promptreports-cli publish rpt_01HVabc...
npx promptreports-cli unpublish rpt_01HVabc...
share <reportId> — signed expiring share URL
Hand a PRIVATE report to a specific person without making it public. Returns /s/<token> URL that expires, caps view count, and can be revoked. Plaintext shown exactly once.
npx promptreports-cli share rpt_01HV... --expires-in-hours 72 --max-views 10 --label "Q2 board review"
Signed links also attribute signups back to you — a pr_ref cookie is set when someone opens your link and recorded as referral attribution if they sign up.
Brain (shared intelligence layer)
Terminal entry-point into the shared-intelligence layer that every virtual department reads.
npx promptreports-cli brain init # scaffold ~/business/ folder
npx promptreports-cli brain sync # push business context to PromptReports.ai
npx promptreports-cli brain who # inspect account + connected sources
npx promptreports-cli brain context # what the brain knows about your company
npx promptreports-cli brain run "<question>"  # one-shot query
Codebase analysis
npx promptreports-cli dead-code # unused routes, components, deps
npx promptreports-cli deps # CVEs, outdated packages, license compliance
npx promptreports-cli schema # Prisma schema stats + drift vs. live DB
npx promptreports-cli git-intel   # hotspots, churn, ownership, auto-changelog
Sessions & logs
npx promptreports-cli sessions # session history, replay, search
npx promptreports-cli logs        # unified Sentry / Vercel / PostHog log stream
Environment & setup
npx promptreports-cli env sync # diff/sync .env.local with Vercel/Railway
npx promptreports-cli setup export --all --encrypt mySecret123 --output team.json
npx promptreports-cli setup import team.json --decrypt mySecret123
npx promptreports-cli health      # post-deploy health check
Skill packs (new in 1.8)
Bundle a .claude/skills/ directory into a portable archive other teams can import.
npx promptreports-cli export-skill-pack ./.claude/skills --out ./my-skills.tgz
npx promptreports-cli import-skill-pack ./my-skills.tgz --target ./.claude/skills
Demand gen
npx promptreports-cli campaign launch "V2 export flow"
npx promptreports-cli campaign launch npm:my-package
npx promptreports-cli campaign list
npx promptreports-cli campaign show cmp_01HV9Z...
Brand-playbook review happens server-side before anything sends.
Global flags
| Flag | Effect |
|---|---|
| `--days N` | Lookback period (default: 7) |
| `--today` | Shorthand for `--days 1` |
| `--json` | Machine-readable output |
| `--quiet` | Suppress non-essential output |
| `--dry-run` | Preview changes without writing |
| `--backfill` | Bypass delta checkpoint (push only); defaults `--days` to 90 |
| `--editor <kind>` | Push adapter: `claude-code` (default), `cursor`, `aider`, `cline`, `codex` |
| `--from-file <path>` | Pre-exported turns file (required for non-CC editors) |
| `--no-text` | Suppress full prompt/response text capture on push |
| `--token <pat>` | Override token resolver precedence |
Common workflows
Daily token-burn check (zero-config, no signup):
npx promptreports-cli
Local Token Forensics scan:
npx promptreports-cli forensics
npx promptreports-cli audit claude-md   # the usual top finding
Monthly burn-rate review:
npx promptreports-cli providers --days 30 --json | jq '.providers[] | select(.monthly_projection > 50)'
Pre-deploy verification:
npx promptreports-cli doctor && npx promptreports-cli audit-stack && npx promptreports-cli health
Compress noisy CI output for an agent:
npx promptreports-cli filter -- npm run test:e2e
Onboard a new teammate:
# On your machine
npx promptreports-cli setup export --all --encrypt mySecret123 --output team.json
# On their machine
npx promptreports-cli setup import team.json --diff --decrypt mySecret123
npx promptreports-cli setup import team.json --decrypt mySecret123
Generate and share a report:
npx promptreports-cli login
npx promptreports-cli generate "Q2 AI ops competitive landscape"
npx promptreports-cli reports list
npx promptreports-cli share rpt_01HV... --expires-in-hours 72 --max-views 10
Cut a release + announce it:
npm publish && npx promptreports-cli campaign launch npm:my-package
Backfill historical text after upgrading from <1.8:
Older TokenPushTurn rows ingested before 1.8 keep promptText / responseText null and render the "text not captured" badge. The recoverable subset (rows whose metadata stashed text under prompt/promptText/response/completion synonyms) can be backfilled server-side:
# Operator-only — runs against the platform DB, not via the CLI
pnpm tsx scripts/tf-backfill-prompt-fingerprint.ts <userId> --dry-run
pnpm tsx scripts/tf-backfill-prompt-fingerprint.ts <userId>
pnpm tsx scripts/tf-backfill-prompt-fingerprint.ts --all
Rows whose metadata is empty cannot be recovered — the text is gone.
Privacy: what gets sent on push
Two commands send data to PromptReports.ai. They run only when you opt in — both require an API key, and audit --push additionally asks for one-time interactive consent the first time you run it.
push — Token Forensics ingest
Sends the per-turn telemetry the dashboard's detectors need to work:
- Per-turn token counts, model name, timestamps, session IDs
- Prompt and response text (capped at 100 KB per turn) — omit with --no-text
- Editor name (claude-code, cursor, copilot, codex, aider, cline)
Server-side text storage is provenance-logged behind a reveal route — only you (the row owner) can re-read it.
audit --push — local environment digest
Sends a small structured digest so the dashboard's audit-source view can show what providers you're configured for:
| Field | What ships | What does NOT ship |
|---|---|---|
| `envKeys` | `["OPENAI_API_KEY", "DATABASE_URL", …]` — names only | Values, comments, file path |
| `packageDeps` | `dependencies` / `devDependencies` / `peerDependencies` maps | `scripts`, `author`, `repository`, `publishConfig`, custom fields |
| `packageMeta` | `{ name, version }` | Anything else from `package.json` |
| `vscodeExtensions` | `["github.copilot", "anthropic.claude", …]` — opt-in via `--include-extensions` | Nothing by default; the `code` subprocess is not spawned unless you pass the flag |
| `findings` / `score` / `recommendations` | The audit results you saw on stdout | Source code, file contents, env values |
This shape is payloadVersion: 2 in the request body. Source: tools/cli/src/commands/audit.ts → buildCliPayload(). The 1.8.3 hardening pass replaced an earlier shape that shipped raw .env.local and full package.json blobs (that's what Socket.dev's AI scanner flagged on 1.8.2 — the hardening closes both the over-share and the redirect vector).
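Reconstructed as a TypeScript shape, the payload might look like the sketch below. The field names mirror the table above, but the value types are assumptions: only buildCliPayload() in the CLI source is authoritative.

```typescript
// Assumed shape of payloadVersion 2 — field names from the table above,
// value types guessed. Not copied from the CLI source.
interface AuditPushPayloadV2 {
  payloadVersion: 2;
  envKeys: string[];                                   // names only, never values
  packageDeps: Record<string, Record<string, string>>; // dep maps keyed by section
  packageMeta: { name: string; version: string };
  vscodeExtensions?: string[];                         // only with --include-extensions
  findings: unknown[];                                 // the stdout audit results
  score: number;
  recommendations: string[];
}
```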
Consent and override
The first time you run audit --push interactively, the CLI prints exactly what's about to be sent and asks you to confirm. Your answer (y/n) is stored in ~/.promptreports/config.json under auditPushConsent so subsequent runs don't re-prompt. Non-TTY shells / CI default to false (no push, no prompt) — flip the bit explicitly in the config file if you want to script it.
URL hardening
PROMPTREPORTS_URL (and the legacy PROMPTREPORTS_API_URL) is allowlisted to *.promptreports.ai and localhost. Any other host is rejected with a stderr warning and the CLI falls back to https://www.promptreports.ai. Self-hosted operators set PROMPTREPORTS_ALLOW_CUSTOM_HOST=1 to opt out of the allowlist — the CLI never silently honors an arbitrary host. This closes the redirect vector flagged by Socket.dev on 1.8.2.
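The allowlist check described above can be sketched as follows; a minimal reconstruction assuming hostname-based matching, with resolveBaseUrl as a hypothetical name. The fallback URL is the documented default, but the real validation code may differ.

```typescript
// Sketch of the documented allowlist (*.promptreports.ai + localhost) with
// the PROMPTREPORTS_ALLOW_CUSTOM_HOST escape hatch. Not the CLI's real code.
const DEFAULT_URL = "https://www.promptreports.ai";

function resolveBaseUrl(raw: string | undefined, allowCustomHost = false): string {
  if (!raw) return DEFAULT_URL;
  try {
    const host = new URL(raw).hostname;
    const allowed =
      host === "promptreports.ai" ||
      host.endsWith(".promptreports.ai") ||
      host === "localhost";
    if (allowed || allowCustomHost) return raw;
  } catch {
    // Unparseable URL: fall through to the default.
  }
  console.error(`Host not allowlisted, falling back to ${DEFAULT_URL}`);
  return DEFAULT_URL;
}
```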
Telemetry (separate from push)
The opt-in telemetry command sends only command name, duration, success flag, CLI version, Node version, OS — never prompt content, file paths, env vars, or anything you typed. telemetry off hard-deletes every past event for your install ID server-side.
Configuration
Auth (only needed for push, generate, publish, share, campaign, brain sync)
Fastest path is device-code login (see Get started). Prefer env vars? Generate a PAT at https://www.promptreports.ai/settings/api-keys and set one of:
export PROMPTREPORTS_TOKEN=pat_...
# or the legacy names (still supported)
export PROMPTREPORTS_API_KEY=pk_...
export PROMPTREPORTS_BRAIN_TOKEN=pat_...
# optional — override the API base for self-hosted / staging
export PROMPTREPORTS_URL=https://www.promptreports.ai
Push tuning
# Per-batch HTTP timeout, default 60000 (60s). Bump for slow networks.
export PROMPTREPORTS_TIMEOUT_MS=120000
Telemetry
telemetry off hard-deletes server-side history. Non-TTY / CI defaults to OFF.
Installation
No install needed — use npx:
npx promptreports-cli
Or install globally:
npm install -g promptreports-cli
promptreports-cli
Node ≥18 required.
License
MIT. See LICENSE.