
Package Exports

  • cto-ai-cli
  • cto-ai-cli/engine
  • cto-ai-cli/govern
  • cto-ai-cli/interact

CTO — Stop sending your entire codebase to AI

CTO analyzes your project and selects the minimum set of files your AI needs — saving tokens, reducing cost, and producing code that actually compiles.

npx cto-ai-cli

Runs in <1 second. No API keys. No data leaves your machine.


The Problem

When you ask an AI to help with code, it needs context. Most approaches:

  • Send everything — expensive, noisy, AI gets confused
  • Send open files — misses types, dependencies, config
  • Let the AI pick — it doesn't know your dependency graph

The result: AI generates code that doesn't compile because it never saw your type definitions.

The Fix

$ npx cto-ai-cli ./my-project
  ⚡ cto-score — analyzing your project...

  ╔══════════════════════════════════════════════════╗
  ║                                                  ║
  ║   🟢 Context Score™  88 / 100   Grade: A-       ║
  ║                                                  ║
  ║   Efficiency     ████████████████░░░░  80%       ║
  ║   Coverage       ████████████████████ 100%       ║
  ║   Risk Control   ████████████████████ 100%       ║
  ║   Structure      █░░░░░░░░░░░░░░░░░░░   5%       ║
  ║   Governance     ██████████████████░░  90%       ║
  ║                                                  ║
  ║   💰 vs. Sending Everything:                     ║
  ║   Tokens saved: 392K (88%)                       ║
  ║   Monthly savings: ~$943                         ║
  ║                                                  ║
  ╚══════════════════════════════════════════════════╝

  Scanned in 0.6s · 199 files · 443K tokens

What each number means

| Metric | What it measures | Why it matters |
|--------|------------------|----------------|
| Context Score (88/100) | Overall AI-readiness of your project | Higher = AI tools produce better output with your code |
| Efficiency (80%) | How much CTO can compress without losing value | 80% means we send 20% of tokens for the same quality |
| Coverage (100%) | % of important files included in the selection | 100% = every dependency and type file is captured |
| Risk Control (100%) | Are high-risk files (hubs, complex code) prioritized? | Ensures AI sees the files most likely to cause bugs |
| Structure (5%) | How well-organized your codebase is for AI | Low = too many large files, poor modularity |
| Governance (90%) | Audit logging, policy enforcement, secret scanning | Enterprise readiness |
| Tokens saved (88%) | Reduction vs. sending every file | Directly reduces your API costs |
| Monthly savings ($943) | Estimated cost reduction at 800 interactions/month | Based on average GPT-4o pricing |

Quick Start

Score your project

npx cto-ai-cli                     # Analyze current directory
npx cto-ai-cli ./my-project         # Analyze a specific project
npx cto-ai-cli --json               # Machine-readable JSON output

Generate optimized context for AI

npx cto-ai-cli --fix

Creates .cto/context.md — paste this into any AI chat for optimal context. Also generates .cto/config.json and .cto/.cteignore.

npx cto-ai-cli --context "refactor the auth middleware"

Generates task-specific context — only files relevant to auth, including types, dependencies, and related tests.

Example output:

  📋 Context for: "refactor the auth middleware"

  Selected 12 files (8.2K tokens):

  ┌─ Core (3 files) ─────────────────────────────
  │  src/middleware/auth.ts          2,100 tokens
  │  src/types/auth.ts                 450 tokens
  │  src/config/jwt.ts                 320 tokens
  │
  ├─ Dependencies (5 files) ─────────────────────
  │  src/models/user.ts              1,200 tokens
  │  src/services/token.ts             890 tokens
  │  ...
  │
  └─ Tests (2 files) ────────────────────────────
     tests/auth.test.ts              1,800 tokens
     tests/middleware.test.ts          940 tokens

  Saved to .cto/context.md (8.2K tokens — 97% smaller than full project)

Security audit

npx cto-ai-cli --audit

Scans for API keys, tokens, passwords, and PII before they end up in an AI prompt. 45+ patterns (AWS, Stripe, GitHub, OpenAI, etc.) plus Shannon entropy analysis for unknown formats.

  🔴 CRITICAL src/config/stripe.ts:8
             api-key: sk_l********************yZ
  🔴 CRITICAL src/config/database.ts:14
             connection-string: post********************db
  🟠 HIGH     src/utils/email.ts:22
             pii: admi**********om

  🚨 3 critical findings. Rotate credentials immediately.

Run in CI to block PRs with secrets:

CI=true npx cto-ai-cli --audit
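For intuition, the entropy heuristic used for unknown secret formats can be sketched as follows. This is an illustrative sketch, not CTO's actual implementation; the function names and the length/entropy thresholds are hypothetical:

```typescript
// Illustrative sketch of Shannon-entropy-based secret detection.
// Random API keys use many distinct characters, so their per-character
// entropy is high; English identifiers and prose score much lower.
function shannonEntropy(s: string): number {
  const freq = new Map<string, number>();
  for (const ch of s) freq.set(ch, (freq.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const count of freq.values()) {
    const p = count / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy; // bits per character
}

// Hypothetical thresholds: long strings with > ~4 bits/char look random.
function looksLikeSecret(token: string, threshold = 4.0): boolean {
  return token.length >= 20 && shannonEntropy(token) > threshold;
}
```

A pattern list catches known key shapes (`sk_...`, `ghp_...`, etc.); the entropy check is the fallback for formats the patterns don't know about.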

Code review intelligence

npx cto-ai-cli --review

Analyzes your git diff and generates a structured review:

  📊 Review Quality: 82/100 (B+)

  Breaking Changes:
    🔴 Removed export: UserService.findById (used by 4 files)
    🟡 Changed signature: authenticate(token) → authenticate(token, opts)

  Missing Files:
    ⚠️  No test file for src/services/auth.ts
    ⚠️  src/types/user.ts changed but barrel index not updated

  Impact Radius:
    Direct: 4 files  |  Transitive: 12 files  |  Tests: 3 files

  Saved review prompt to .cto/review-prompt.md

| What it detects | Example |
|-----------------|---------|
| Breaking changes | Removed exports, changed function signatures, deleted files |
| Missing files | Tests, type files, barrel exports, importers of changed code |
| Impact radius | How many files are affected (direct + transitive via BFS) |
| Review quality | Score based on PR size, focus, breaking changes, completeness |

Learning mode

npx cto-ai-cli --learn               # View feedback model & stats
npx cto-ai-cli --predict              # Predict relevant files for a task
npx cto-ai-cli --learn --json         # Export learning data for team sharing

CTO learns from your usage patterns over time. It applies EWMA temporal decay, so recent feedback weighs more, and a Wilson score confidence bound, which avoids over-trusting sparse data.
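The two statistics mentioned above can be sketched as follows, under my reading of the description (the smoothing factor and function names are illustrative, not CTO's internals):

```typescript
// EWMA: each newer observation pulls the estimate toward itself by
// a factor alpha, so old feedback decays exponentially.
function ewma(values: number[], alpha = 0.3): number {
  return values.reduce((acc, v) => alpha * v + (1 - alpha) * acc);
}

// Wilson score lower bound: a conservative estimate of a success rate.
// With few trials the bound shrinks toward 0, so 1-for-1 is trusted far
// less than 90-for-100 even though the raw rate is higher.
function wilsonLowerBound(successes: number, trials: number, z = 1.96): number {
  if (trials === 0) return 0;
  const p = successes / trials;
  const denom = 1 + (z * z) / trials;
  const center = p + (z * z) / (2 * trials);
  const margin = z * Math.sqrt((p * (1 - p) + (z * z) / (4 * trials)) / trials);
  return (center - margin) / denom;
}
```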

Quality gate for CI/CD

npx cto-ai-cli --ci                   # Run quality gate (exits 1 on failure)
npx cto-ai-cli --ci --threshold 80    # Custom minimum score
npx cto-ai-cli --ci --json            # JSON for pipeline parsing

Block merges when context quality drops below your threshold. Tracks baselines and detects regressions.

Monorepo support

npx cto-ai-cli --monorepo             # Analyze all packages
npx cto-ai-cli --monorepo --package api  # Focus on one package

Detects npm/yarn/pnpm workspaces, Turborepo, Nx, and Lerna. Shows cross-package dependencies, isolation scores, and shared package analysis.


All CLI Flags

# Analysis
npx cto-ai-cli [path]                 # Score a project
npx cto-ai-cli --json                 # JSON output
npx cto-ai-cli --benchmark            # CTO vs naive vs random comparison
npx cto-ai-cli --compare              # Compare vs popular OSS projects
npx cto-ai-cli --report               # Markdown report + badge

# Context generation
npx cto-ai-cli --fix                  # Auto-generate .cto/context.md
npx cto-ai-cli --context "task"       # Task-specific context

# Security
npx cto-ai-cli --audit                # Secret & PII detection
npx cto-ai-cli --audit --full-scan    # Scan all files (ignore cache)
npx cto-ai-cli --audit --init-hook    # Install pre-commit hook

# Code review
npx cto-ai-cli --review               # PR review analysis
npx cto-ai-cli --review --json        # Review data as JSON

# Learning
npx cto-ai-cli --learn                # Feedback model dashboard
npx cto-ai-cli --predict              # File predictions for a task
npx cto-ai-cli --learn --json         # Export learning data

# CI/CD
npx cto-ai-cli --ci                   # Quality gate
npx cto-ai-cli --ci --threshold 80    # Custom threshold

# Monorepo
npx cto-ai-cli --monorepo             # Full monorepo analysis
npx cto-ai-cli --monorepo --package X # Single package

# Gateway (AI proxy)
npx cto-gateway                       # Start proxy server
npx cto-gateway --budget-daily 10     # With budget enforcement

MCP Server (for AI Editors)

CTO works as an MCP server — plug it into Claude, Windsurf, or Cursor.

Windsurf — add to ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "cto": { "command": "cto-mcp" }
  }
}

Claude Desktop:

{
  "mcpServers": {
    "cto": { "command": "npx", "args": ["-y", "cto-ai-cli", "--mcp"] }
  }
}

Tools available: cto_analyze, cto_select_context, cto_score, cto_benchmark, cto_risk, and more.


Programmatic API

import { analyzeProject, computeContextScore, selectContext } from 'cto-ai-cli';

// Analyze a project
const analysis = await analyzeProject('./my-project');

// Get the Context Score
const score = await computeContextScore(analysis);
console.log(`Score: ${score.overall}/100 (${score.grade})`);
console.log(`Tokens saved: ${score.comparison.savedPercent}%`);

// Select optimal files for a task
const selection = await selectContext({
  task: 'refactor the auth middleware',
  analysis,
  budget: 50_000,  // 50K token budget
});

console.log(`Selected ${selection.files.length} files`);
console.log(`Coverage: ${selection.coverage.score}%`);
for (const file of selection.files) {
  console.log(`  ${file.relativePath} (${file.tokens} tokens, risk: ${file.riskScore})`);
}

How It Works

  1. Scan — walks your project, parses imports, builds a dependency graph
  2. Score — computes risk for each file (complexity, hub score, centrality, recency)
  3. Select — deterministic greedy algorithm: picks highest-risk files first within token budget
  4. Prove — measures coverage (% of important files included), compares vs naive strategies

No AI is used for selection. Same input always produces the same output. Fully reproducible.
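Step 3 above can be sketched as a deterministic greedy pick (an illustrative sketch; the field names and tie-breaking rule are assumptions, not CTO's actual code):

```typescript
interface FileInfo {
  path: string;
  tokens: number;
  risk: number; // from step 2: complexity, hub score, centrality, recency
}

// Greedy selection: take files in descending risk order, skipping any
// file that would blow the token budget. Ties break on path so the
// same input always yields the same output.
function greedySelect(files: FileInfo[], budget: number): FileInfo[] {
  const sorted = [...files].sort(
    (a, b) => b.risk - a.risk || a.path.localeCompare(b.path)
  );
  const selected: FileInfo[] = [];
  let used = 0;
  for (const f of sorted) {
    if (used + f.tokens <= budget) {
      selected.push(f);
      used += f.tokens;
    }
  }
  return selected;
}
```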


Honest Limitations

  • TypeScript/JavaScript gets the deepest analysis. Other languages (Python, Go, Rust, Java) get basic file + import analysis.
  • Benchmarks use simple baselines (alphabetical, random). We haven't compared against Cursor's or Copilot's internal context selection.
  • Savings are estimates based on average API pricing. Actual savings depend on your model and usage.
  • Risk scoring uses a complexity proxy instead of real git churn data (planned improvement).

Contributing

git clone https://github.com/cto-ai/cto-ai-cli.git
cd cto-ai-cli
npm install
npm run build
npm test              # 376 tests
npm run typecheck     # strict TypeScript, zero errors

Full API docs, MCP server reference, and architecture are in DOCS.md.

License

MIT