Package Exports
- cto-ai-cli
- cto-ai-cli/cli
- cto-ai-cli/engine
- cto-ai-cli/govern
- cto-ai-cli/interact
- cto-ai-cli/mcp
CTO — Your AI is reading too much code. We fix that.
Early access — This is a test version. We'd love your feedback.
Try it now (zero install)
npx cto-ai-cli

That's it. Run it on any project. You'll see something like this:
⚡ cto-score — analyzing your project...
╔══════════════════════════════════════════════════╗
║ ║
║ 🟢 Context Score™ 87 / 100 Grade: A- ║
║ ║
║ Efficiency ███████████████░░░░░ 74% ║
║ Coverage ████████████████████ 100% ║
║ Risk Control ████████████████████ 100% ║
║ ║
║ 💰 vs. Sending Everything: ║
║ Tokens saved: 289K (85%) ║
║ Monthly savings: ~$695 ║
║ ║
╚══════════════════════════════════════════════════╝
Scanned in 11.7s · 177 files · 340K tokens

Run npx cto-ai-cli --benchmark to see how CTO compares to naive (alphabetical) and random file selection.
No data leaves your machine. No API keys. MIT licensed.
What problem does CTO solve?
When you ask an AI assistant to help with code, it needs context — your files. The question is: which files?
Most tools today either send everything (expensive, noisy) or pick files based on what's open (misses dependencies). Neither approach is great.
CTO analyzes your project — dependencies, file importance, risk of excluding each file — and picks the best subset that fits your token budget. It's like a smart assistant that knows which files matter for each task.
A simple example
You ask the AI: "refactor the auth middleware"
| Approach | What gets sent | Result |
|---|---|---|
| Send everything | 340K tokens (all 177 files) | Expensive. AI drowns in irrelevant code. |
| Send open files | Whatever you have open | Might miss types, dependencies, config. |
| CTO | 50K tokens (93 relevant files) | 85% cheaper. Includes types, deps, related files. |
Why does it matter?
We tested something specific: when the AI generates code, does it have the type definitions it needs?
| | CTO | Without CTO |
|---|---|---|
| Type files included | 5 out of 6 | 0 out of 6 |
| TypeScript compiler | ✅ Compiles | ❌ 4 errors |
We ran this on 5 different tasks. Same result every time. CTO context compiles. Naive context doesn't.
Without type definitions, the AI invents interfaces — wrong property names, wrong shapes. The code doesn't compile. (Details)
Getting started
Option 1: Quick score (no install)
npx cto-ai-cli # Score your project
npx cto-ai-cli ./my-project # Score a specific project
npx cto-ai-cli --fix # Auto-generate optimized context files
npx cto-ai-cli --context "your task" # Task-specific context for AI prompts
npx cto-ai-cli --audit # Security audit: detect secrets & PII
npx cto-ai-cli --report # Shareable report + README badge
npx cto-ai-cli --compare # Compare your score vs popular projects
npx cto-ai-cli --benchmark # CTO vs naive vs random comparison
npx cto-ai-cli --json # Machine-readable output (for CI)

Option 2: Full install
npm install -g cto-ai-cli
cto2 init # Set up for your project
cto2 analyze # See structure + risk profile
cto2 interact "refactor the auth middleware" # Get optimized context for a task

Option 3: Use with your AI editor (MCP)
CTO works as an MCP server — plug it into Claude, Windsurf, or Cursor.
Windsurf — add to ~/.codeium/windsurf/mcp_config.json:
{
"mcpServers": {
"cto": { "command": "cto2-mcp" }
}
}

Claude Desktop — add to your MCP config:
{
"mcpServers": {
"cto": { "command": "node", "args": ["/path/to/dist/mcp/v2.js"] }
}
}

Once connected, your AI editor can use tools like cto_analyze, cto_select_context, cto_score, and cto_benchmark automatically.
How it works (the short version)
- Scans your project — files, imports, dependencies, structure
- Scores each file — how important is it? What breaks if we exclude it?
- Selects the best files for your task — within your token budget
- Proves the result — coverage score, benchmark comparison, cost savings
CTO doesn't use AI for selection. It uses dependency analysis, risk modeling, and optimization algorithms. Same input always produces the same output.
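The selection step can be pictured as budget-constrained optimization. Here is a minimal, hypothetical sketch; the `ScoredFile` shape, the weights, and the `selectContext` helper are our illustration of the idea, not CTO's real internals:

```typescript
// Hypothetical sketch of budget-constrained file selection.
// Field names and weights are illustrative, not CTO's actual scoring.
interface ScoredFile {
  path: string;
  tokens: number;        // estimated token cost of including the file
  importance: number;    // e.g. dependency centrality, 0..1
  exclusionRisk: number; // what breaks if we leave it out, 0..1
}

function selectContext(files: ScoredFile[], budget: number): ScoredFile[] {
  // Greedy knapsack heuristic: rank by value density (score per token)
  // so small, high-impact files (type definitions, hub modules) win.
  const density = (f: ScoredFile) =>
    (0.6 * f.importance + 0.4 * f.exclusionRisk) / f.tokens;
  const ranked = [...files].sort((a, b) => density(b) - density(a));

  const selected: ScoredFile[] = [];
  let used = 0;
  for (const f of ranked) {
    if (used + f.tokens <= budget) {
      selected.push(f);
      used += f.tokens;
    }
  }
  return selected; // pure sorting + accumulation: same input, same output
}
```

Because the heuristic is deterministic, repeated runs on the same project produce the same selection.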
Real numbers
We ran CTO on three open-source projects. No cherry-picking — you can reproduce these with npx cto-ai-cli --benchmark.
| Project | Files | Score | What CTO does |
|---|---|---|---|
| Zod | 441 files, 804K tokens | 92/100 (A) | Selects 64 files, 100% coverage, $1,809/mo savings |
| This project | 177 files, 340K tokens | 87/100 (A-) | Selects 93 files, 100% coverage, $695/mo savings |
| Express.js | 158 files, 171K tokens | 74/100 (B-) | Needs only 895 tokens for full coverage |
"Coverage" means: all the files that are important for your task are included. "Savings" is estimated based on 800 AI interactions per month.
Detailed comparison: CTO vs Naive vs Random
Budget: 50K tokens · Task: "refactor the core module"
| Project | Strategy | Files | Tokens | Coverage | High-Risk Included |
|---|---|---|---|---|---|
| Zod | CTO | 64 | 50.0K | 100% | 6/6 |
| | Naive (alphabetical) | 71 | 50.0K | 16% | 2/6 |
| | Random | 45 | 50.0K | 10% | 1/6 |
| This project | CTO | 163 | 47.4K | 100% | 11/11 |
| | Naive | 25 | 50.0K | 15% | 0/11 |
| | Random | 38 | 50.0K | 23% | 6/11 |
| Express | CTO | 158 | 0.9K | 100% | n/a |
| | Naive | 64 | 50.0K | 41% | n/a |
| | Random | 61 | 50.0K | 39% | n/a |
Note: "Naive" means alphabetical file order (a common default). "Random" is random selection. These are simple baselines — real-world tools like Cursor use smarter heuristics, so we don't claim CTO beats them. We just show the difference between informed and uninformed selection.
Compile Proof: real TypeScript compiler output
We ran the actual tsc compiler to verify this isn't just theory.
How it works:
- Copy only the selected files (CTO or naive) to a temp directory
- Generate TypeScript code that imports and uses the project's types
- Run tsc --noEmit
- Count real compiler errors
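The check above can be scripted in a few lines. This is a rough sketch of such a harness; `countTsErrors` and `compileCheck` are illustrative names, not the project's actual test code:

```typescript
import { spawnSync } from "node:child_process";

// Count "error TSxxxx" lines in raw tsc output.
function countTsErrors(tscOutput: string): number {
  return (tscOutput.match(/error TS\d+/g) ?? []).length;
}

// Run the real compiler against a directory that contains only the
// selected files; 0 errors means the selection was self-sufficient.
function compileCheck(dir: string): number {
  const res = spawnSync("npx", ["tsc", "--noEmit"], {
    cwd: dir,
    encoding: "utf8",
  });
  return countTsErrors((res.stdout ?? "") + (res.stderr ?? ""));
}
```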
| Task | CTO | Naive | Naive missing |
|---|---|---|---|
| Refactor selector | ✅ 0 errors | ❌ 4 errors | All type files |
| Optimize risk scoring | ✅ 0 errors | ❌ 4 errors | All type files |
| MCP error handling | ✅ 0 errors | ❌ 4 errors | All type files |
| Cache invalidation | ✅ 0 errors | ❌ 4 errors | All type files |
| Add semantic tool | ✅ 0 errors | ❌ 4 errors | All type files |
The naive selection (alphabetical) consistently misses all type definition files. The compiler output:
error TS2307: Cannot find module './src/types/engine.js'
error TS2307: Cannot find module './src/types/config.js'
error TS2307: Cannot find module './src/types/govern.js'
error TS2307: Cannot find module './src/types/interact.js'

Without these files, the AI has to guess the shape of AnalyzedFile, ContextSelection, TaskType, etc. It will get them wrong.
🔒 Security Audit — detect secrets before AI sees them
Every time you send code to an AI, there's a risk: API keys, tokens, passwords, and PII hiding in your codebase.
CTO now scans your entire project for secrets — before they end up in an AI prompt.
npx cto-ai-cli --audit

🔍 Running security audit...
╔══════════════════════════════════════════════════╗
║ ║
║ 🔴 Security Audit: CRITICAL ISSUES FOUND ║
║ ║
║ Files scanned: 179 ║
║ Files affected: 12 ║
║ Total findings: 51 ║
║ ║
╠══════════════════════════════════════════════════╣
║ ║
║ 🔴 Critical: 34 ║
║ 🟠 High: 5 ║
║ 🟡 Medium: 12 ║
║ ║
╚══════════════════════════════════════════════════╝
Findings:
🔴 CRITICAL src/config/stripe.ts:8
api-key: sk_l********************yZ
🔴 CRITICAL src/config/database.ts:14
connection-string: post********************db
🟠 HIGH src/utils/email.ts:22
pii: admi**********om
Recommendations:
🚨 CRITICAL: Rotate all detected credentials immediately.
💡 Use environment variables for API keys.
💡 Add a .gitignore entry for .env files.
📁 Audit artifacts:
📋 .cto/audit/2026-02-24.jsonl Audit log (append-only)
📊 .cto/audit/report.md Full report
📝 .cto/.env.example Template for environment variables

What it detects
| Category | Examples | Severity |
|---|---|---|
| API Keys | OpenAI, Anthropic, Stripe, Google, SendGrid, Azure | 🔴 Critical |
| Cloud credentials | AWS Access Keys, AWS Secrets | 🔴 Critical |
| Tokens | GitHub, GitLab, Slack, npm, JWT | 🔴 Critical |
| Private keys | RSA, SSH, EC private keys | 🔴 Critical |
| Database | Connection strings (Postgres, MongoDB, Redis, MySQL) | 🔴 Critical |
| Passwords | Hardcoded passwords, DB passwords | 🟠 High |
| PII | Email addresses, possible SSNs | 🟡 Medium |
| High-entropy strings | Random strings that look like secrets (Shannon entropy analysis) | 🟡 Medium |
How it works
- 30+ regex patterns — battle-tested patterns for known secret formats (AWS, Stripe, Slack, GitHub, etc.)
- Shannon entropy analysis — detects random-looking strings that may be secrets, even if they don't match a known pattern
- Smart filtering — skips placeholders (${API_KEY}), test files, comments, and common false positives
- Auto-redaction — secrets are NEVER shown in full. All output uses redacted values (sk_l**********yZ)
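The entropy and redaction steps can be sketched as follows. This is an illustrative approximation, not CTO's implementation; the 3.5-bits-per-character threshold is a common heuristic in secret scanners, not a documented CTO setting, and the key below is fake:

```typescript
// Shannon entropy in bits per character: high values suggest
// random-looking strings such as API keys.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let h = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Keep a short prefix and suffix, mask the rest, matching the
// redacted style shown in the audit output (sk_l**********yZ).
function redact(secret: string): string {
  if (secret.length <= 8) return "*".repeat(secret.length);
  return secret.slice(0, 4) + "*".repeat(secret.length - 6) + secret.slice(-2);
}

const candidate = "sk_live_4eC39HqLyjWDarjtT1zdp7dc"; // fake example key
if (shannonEntropy(candidate) > 3.5) {
  console.log(`possible secret: ${redact(candidate)}`);
}
```

Constant strings like `aaaaaaaa` score 0 bits, while a 32-character key-like string scores well above 4, which is why entropy catches secrets that no regex pattern knows about.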
What it generates
| File | Purpose |
|---|---|
| .cto/audit/YYYY-MM-DD.jsonl | Append-only audit log (run it daily, keep history) |
| .cto/audit/report.md | Full markdown report — share with your team or compliance |
| .cto/.env.example | Auto-generated template with all detected env variable names |
CI/CD integration
Set CI=true and the audit will exit with code 1 if critical or high-severity secrets are found:
CI=true npx cto-ai-cli --audit

Perfect for pre-commit hooks or CI pipelines — block PRs that contain secrets before they reach production or an AI prompt.
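A gate can also be scripted around the --json output. The sketch below shows only the decision logic; the `AuditSummary` shape is hypothetical, so check the actual --json schema before relying on it:

```typescript
// Hypothetical severity summary parsed from `npx cto-ai-cli --json`.
// The real JSON schema may differ; treat this shape as illustrative.
interface AuditSummary {
  critical: number;
  high: number;
  medium: number;
}

// Mirror the documented CI behavior: non-zero exit on critical
// or high-severity findings, pass otherwise.
function ciExitCode(summary: AuditSummary): number {
  return summary.critical > 0 || summary.high > 0 ? 1 : 0;
}
```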
Why this matters
Every day, developers accidentally send secrets to AI tools:
- Copilot autocompletes with your .env values in context
- You paste a file into ChatGPT that has a hardcoded API key
- Cursor reads your database config with connection strings
CTO catches these before they leave your machine. Zero external calls. Everything runs locally.
🌐 Context Gateway — AI proxy for your entire team
Every AI API call from your team passes through the Gateway. It sits between your app and any LLM provider, automatically optimizing context, redacting secrets, and tracking costs.
npx cto-gateway

⚡ CTO Context Gateway v4.0.0
🌐 Proxy: http://127.0.0.1:8787
📊 Dashboard: http://127.0.0.1:8787/__cto
📁 Project: /your/project
✅ Context optimization
✅ Secret redaction
✅ Cost tracking
⬜ Daily budget (unlimited)
How to connect:
OPENAI_BASE_URL=http://127.0.0.1:8787
+ set header: x-cto-target: https://api.openai.com/v1/chat/completions
Waiting for requests...
18:52:34 openai/gpt-4o 1200 tokens $0.0075 (saved 5.2K tokens, $0.0130) [2 secrets redacted] 152ms

What it does
| Feature | Description |
|---|---|
| Secret redaction | Scans every message for API keys, tokens, passwords → auto-redacts before sending to the LLM |
| Secret blocking | Optional hard block — reject requests that contain critical secrets |
| Context optimization | Injects CTO-selected files, type definitions, and hub modules into system prompts |
| Cost tracking | Tracks per-request cost by model and provider. Persistent JSONL logs. |
| Budget enforcement | Set daily/monthly limits. Gateway returns 429 when exceeded. |
| Live dashboard | Dark-theme web UI at /__cto — today's stats, monthly breakdown, model costs |
| SSE streaming | Full passthrough of streaming responses with zero-copy. No added latency. |
| Multi-provider | OpenAI, Anthropic, Google AI, Azure OpenAI, and any OpenAI-compatible API |
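Connecting a client amounts to pointing requests at the Gateway and adding the x-cto-target header. A minimal sketch with plain fetch, assuming the Gateway mirrors the standard OpenAI chat-completions path under its base URL:

```typescript
// Build a chat-completions request routed through the local Gateway.
// The Gateway forwards it to the upstream named in x-cto-target,
// optimizing context and redacting secrets along the way.
const GATEWAY = "http://127.0.0.1:8787";

function gatewayRequest(model: string, prompt: string) {
  return {
    url: `${GATEWAY}/chat/completions`, // assumed to mirror the OpenAI path
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "authorization": `Bearer ${process.env.OPENAI_API_KEY ?? ""}`,
        // Tells the Gateway which upstream endpoint to proxy to:
        "x-cto-target": "https://api.openai.com/v1/chat/completions",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage (with the Gateway running):
// const { url, init } = gatewayRequest("gpt-4o", "Summarize src/engine");
// const res = await fetch(url, init);
```

SDKs that accept a custom base URL (for example via OPENAI_BASE_URL) can be pointed at the Gateway the same way, as shown in the startup banner above.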
Supported providers & models
| Provider | Models | Pricing tracked |
|---|---|---|
| OpenAI | GPT-4o, GPT-4o Mini, o1, o1-mini, o3-mini | ✅ |
| Anthropic | Claude Sonnet 4, Claude 3.5 Haiku, Claude 3 Opus | ✅ |
| Google | Gemini 2.5 Pro, Gemini 2.0 Flash, Gemini 1.5 Pro | ✅ |
| Azure OpenAI | Same as OpenAI (different hosting) | ✅ |
| Custom | Any OpenAI-compatible API (Ollama, LiteLLM, etc.) | Manual |
Configuration
cto-gateway --port 9000 # Custom port
cto-gateway --block-secrets # Hard block on critical secrets
cto-gateway --budget-daily 10 # Max $10/day
cto-gateway --budget-monthly 200 # Max $200/month
cto-gateway --project ./my-app # Analyze a specific project
cto-gateway --no-optimize # Disable context injection
cto-gateway --no-redact # Disable secret redaction

What you can do with CTO
| Use case | How |
|---|---|
| Score your project | npx cto-ai-cli |
| Auto-optimize context | npx cto-ai-cli --fix → generates .cto/context.md to paste into AI |
| Task-specific context | npx cto-ai-cli --context "refactor auth" → optimized for your task |
| Security audit | npx cto-ai-cli --audit → detect secrets & PII before AI sees them |
| AI proxy (Gateway) | npx cto-gateway → proxy with secret redaction + cost tracking |
| Shareable report | npx cto-ai-cli --report → markdown report + README badge |
| Compare vs open source | npx cto-ai-cli --compare → your score vs Zod, Next.js, Express |
| Compare strategies | npx cto-ai-cli --benchmark → CTO vs naive vs random |
| Get context for a task | cto2 interact "your task" |
| Use in your AI editor | Add MCP server (see setup above) |
| Block secrets in CI | CI=true npx cto-ai-cli --audit |
| Budget control | cto-gateway --budget-daily 10 --budget-monthly 200 |
| JSON output (scripting) | npx cto-ai-cli --json |
Honest limitations
This is an early test version. Here's what we know:
- TypeScript/JavaScript projects work best. We support other languages (Python, Go, Rust, Java) for basic analysis, but TypeScript gets the deepest understanding.
- Our benchmarks use simple baselines (alphabetical, random). We haven't compared against Cursor's or Copilot's internal context selection.
- The savings numbers are estimates based on average API pricing. Your actual savings depend on your model, pricing tier, and usage patterns.
- We need more projects to test on. If you try it and share your score, that helps us a lot.
What's next
We're working on:
- Context Gateway — proxy between your team and any AI, with automatic context optimization and cost tracking
- Monorepo intelligence — package-aware selection for large monorepos (60-80% more token savings)
- CI Quality Gate — GitHub Action that posts context score and secret audit on every PR
- VS Code extension — live score, risk indicators, and context suggestions inline
- Learning mode — CTO improves based on which AI suggestions you accept/reject
- More language support — deeper analysis for Python, Go, and Rust
- Your feedback — open an issue or reach out
For contributors
git clone <repo-url>
cd cto
npm install
npm run build
npm test # 573 tests
npm run typecheck # strict TypeScript

Full CLI docs, MCP server setup, API server, and programmatic API are documented in DOCS.md.