
Code Council

Your AI Code Review Council - Get diverse perspectives from multiple AI models in parallel.

An MCP (Model Context Protocol) server that provides AI-powered code review using multiple models from OpenRouter. Think of it as assembling a council of AI experts to review your code, each bringing their unique perspective.

Features

  • 🔍 Multi-Model Code Review - Get diverse perspectives by running reviews across multiple AI models simultaneously
  • 🎨 Frontend Review - Specialized reviews for accessibility, performance, and UX
  • 🔒 Backend Review - Security, architecture, and performance analysis
  • 📋 Plan Review - Review implementation plans before writing code
  • 📝 Git Changes Review - Review staged, unstaged, branch diffs, or specific commits
  • 💬 Council Discussions - Multi-turn conversations with the AI council for deeper exploration
  • 🏭 TPS Audit - Toyota Production System analysis for flow, waste, bottlenecks, and quality
  • Parallel Execution - All models run concurrently for fast results

Quick Start

The easiest way to use this MCP server is via npx. Configure your MCP client with an environment variable for the API key:

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "code-council": {
      "command": "npx",
      "args": ["-y", "@klitchevo/code-council"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here"
      }
    }
  }
}

With custom models:

{
  "mcpServers": {
    "code-council": {
      "command": "npx",
      "args": ["-y", "@klitchevo/code-council"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here",
        "CODE_REVIEW_MODELS": "[\"anthropic/claude-sonnet-4.5\", \"openai/gpt-4o\"]",
        "FRONTEND_REVIEW_MODELS": "[\"anthropic/claude-sonnet-4.5\"]",
        "BACKEND_REVIEW_MODELS": "[\"openai/gpt-4o\", \"google/gemini-2.0-flash-exp\"]"
      }
    }
  }
}

Cursor

Add to your Cursor MCP settings (the global ~/.cursor/mcp.json; avoid project-local config files, per Security Best Practices below):

{
  "mcpServers": {
    "code-council": {
      "command": "npx",
      "args": ["-y", "@klitchevo/code-council"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here"
      }
    }
  }
}

Other MCP Clients

For any MCP client that supports environment variables:

{
  "command": "npx",
  "args": ["-y", "@klitchevo/code-council"],
  "env": {
    "OPENROUTER_API_KEY": "your-openrouter-api-key"
  }
}

Installation (Alternative)

If you prefer to install globally:

npm install -g @klitchevo/code-council

Then configure without npx:

{
  "mcpServers": {
    "code-council": {
      "command": "code-council",
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here"
      }
    }
  }
}

Getting an API Key

  1. Sign up at OpenRouter
  2. Go to Keys in your dashboard
  3. Create a new API key
  4. Add credits to your account at Credits

Security Best Practices

⚠️ CRITICAL SECURITY WARNING: Never commit your OpenRouter API key to git!

MCP Config File Locations (Safe - Not in Git)

MCP client configurations are stored outside your project directory and won't be committed:

  • Claude Desktop:
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
    • Linux: ~/.config/Claude/claude_desktop_config.json
  • Cursor: Global settings (not in project)
  • Other MCP Clients: Typically in user config directories

These files are safe to put your API key in because they're not in your git repository.

✅ SAFE:

  • Putting the API key in MCP client config files (they're outside git)
  • Using system environment variables and referencing them
  • Keeping configs in user directories (~/.config/, ~/Library/, etc.)

❌ NEVER DO:

  • Don't create .mcp.json or config files inside your project directory
  • Don't commit any file containing your API key to git
  • Don't share config files containing API keys
  • Don't hardcode API keys in code

Using Environment Variables (Extra Security)

For added security, store the key in your shell environment:

# Add to ~/.zshrc or ~/.bashrc
export OPENROUTER_API_KEY="sk-or-v1-..."

Then reference it in your MCP config:

{
  "env": {
    "OPENROUTER_API_KEY": "${OPENROUTER_API_KEY}"
  }
}

Available Tools

review_code

Review code for quality, bugs, performance, and security issues.

Parameters:

  • code (required): The code to review
  • language (optional): Programming language
  • context (optional): Additional context about the code

Example usage in Claude:

Use review_code to check this TypeScript function:
[paste your code]

review_frontend

Review frontend code with focus on accessibility, performance, and UX.

Parameters:

  • code (required): The frontend code to review
  • framework (optional): Framework name (e.g., react, vue, svelte)
  • review_type (optional): accessibility, performance, ux, or full (default)
  • context (optional): Additional context

Example usage in Claude:

Use review_frontend with review_type=accessibility to check this React component:
[paste your component]

review_backend

Review backend code for security, performance, and architecture.

Parameters:

  • code (required): The backend code to review
  • language (optional): Language/framework (e.g., node, python, go, rust)
  • review_type (optional): security, performance, architecture, or full (default)
  • context (optional): Additional context

Example usage in Claude:

Use review_backend with review_type=security to analyze this API endpoint:
[paste your code]

review_plan

Review implementation plans BEFORE coding to catch issues early.

Parameters:

  • plan (required): The implementation plan to review
  • review_type (optional): feasibility, completeness, risks, timeline, or full (default)
  • context (optional): Project constraints or context

Example usage in Claude:

Use review_plan to analyze this implementation plan:
[paste your plan]

review_git_changes

Review git changes directly from your repository.

Parameters:

  • review_type (optional): staged, unstaged, diff, or commit (default: staged)
    • staged - Review staged changes (git diff --cached)
    • unstaged - Review unstaged changes (git diff)
    • diff - Review branch diff (git diff main..HEAD)
    • commit - Review a specific commit (requires commit_hash)
  • commit_hash (optional): Commit hash to review (required when review_type is commit)
  • context (optional): Additional context about the changes

Example usage in Claude:

Use review_git_changes to review my staged changes
Use review_git_changes with review_type=commit and commit_hash=abc123 to review that commit
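The mapping from review_type to a git invocation can be sketched as follows. This is an illustration, not the package's actual code: gitCommandFor is a hypothetical helper, and `git show <hash>` for review_type=commit is an assumption (the README names the other three commands but not this one).

```javascript
// Hypothetical helper showing which git command each review_type maps to.
// The first three commands are stated in the README above; `git show` for
// review_type=commit is an assumption.
function gitCommandFor(reviewType, commitHash) {
  switch (reviewType) {
    case "staged":
      return "git diff --cached";
    case "unstaged":
      return "git diff";
    case "diff":
      return "git diff main..HEAD";
    case "commit":
      if (!commitHash) {
        throw new Error("commit_hash is required when review_type is commit");
      }
      return `git show ${commitHash}`;
    default:
      throw new Error(`unknown review_type: ${reviewType}`);
  }
}
```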

discuss_with_council

Have multi-turn conversations with the AI council. Start a discussion, get feedback from all models, then ask follow-up questions while maintaining context.

Parameters:

  • message (required): Your message or question for the council
  • session_id (optional): Session ID to continue an existing discussion (omit to start new)
  • discussion_type (optional): code_review, plan_review, or general (default: general)
  • context (optional): Additional context (code snippets, plan details, etc.)

Example usage in Claude:

Use discuss_with_council to ask: What's the best way to implement error handling in a Node.js API?

Continuing a discussion:

Use discuss_with_council with session_id=<id-from-previous-response> to ask: Can you elaborate on the circuit breaker pattern you mentioned?

Key features:

  • Each model maintains its own conversation history for authentic diverse perspectives
  • Sessions persist for 30 minutes of inactivity
  • Rate limited to 10 requests per minute per session
  • Context windowing keeps conversations efficient
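The per-session rate limit can be pictured as a sliding window over recent request timestamps. The sketch below is illustrative only; the class name and its details are assumptions, not the package's implementation.

```javascript
// Illustrative sliding-window rate limiter: at most `limit` requests per
// `windowMs` milliseconds per session (the README states 10 per minute).
// A sketch of the idea, not the package's actual code.
class SlidingWindowLimiter {
  constructor(limit = 10, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = new Map(); // sessionId -> array of request times (ms)
  }

  allow(sessionId, now = Date.now()) {
    // Keep only timestamps still inside the window.
    const recent = (this.timestamps.get(sessionId) ?? []).filter(
      (t) => now - t < this.windowMs
    );
    if (recent.length >= this.limit) {
      this.timestamps.set(sessionId, recent);
      return false; // over the limit: reject
    }
    recent.push(now);
    this.timestamps.set(sessionId, recent);
    return true;
  }
}
```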

tps_audit

Analyze any codebase using Toyota Production System (TPS) principles. Generates beautiful HTML reports with scores for flow, waste, bottlenecks, and quality.

Parameters:

  • path (optional): Path to repository root (auto-detects git root if not provided)
  • focus_areas (optional): Specific areas to focus on (e.g., ["flow", "security", "performance"])
  • max_files (optional): Maximum files to analyze (default: 50, max: 100)
  • file_types (optional): File extensions to include (default: common source files)
  • include_sensitive (optional): Include potentially sensitive files (default: false)
  • output_format (optional): html, markdown, or json (default: html)

Example usage in Claude:

Use tps_audit to analyze this repository
Use tps_audit with output_format=markdown and focus_areas=["security", "performance"]

What it analyzes:

  • Flow: How data and control flow through the system, entry points, pathways
  • Muda (Waste): The 7 wastes - defects, overproduction, waiting, transportation, inventory, motion, extra-processing
  • Bottlenecks: Where flow is constrained, severity and impact
  • Jidoka: Built-in quality, fail-fast patterns, error handling
  • Recommendations: Prioritized improvements with effort/impact ratings

Security features:

  • Automatically skips sensitive files (.env, credentials, keys, tokens)
  • Scans file contents for embedded secrets (AWS keys, GitHub PATs, etc.)
  • Validates paths to prevent directory traversal attacks
  • Enforces size limits to prevent resource exhaustion
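The secret-content scan described above amounts to matching well-known token shapes, such as AWS access key IDs (AKIA followed by 16 uppercase alphanumerics) and classic GitHub personal access tokens (ghp_ prefix). The function and pattern list below are illustrative, not the package's actual scanner.

```javascript
// Illustrative secret scan using two well-known token shapes. This mimics
// the kind of check described above; it is not the package's actual scanner.
const SECRET_PATTERNS = [
  { name: "AWS access key ID", regex: /\bAKIA[0-9A-Z]{16}\b/ },
  { name: "GitHub personal access token", regex: /\bghp_[A-Za-z0-9]{36}\b/ },
];

// Returns the names of all patterns found in the given file content.
function findSecrets(content) {
  return SECRET_PATTERNS.filter((p) => p.regex.test(content)).map(
    (p) => p.name
  );
}
```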

Output: Reports are saved to the .code-council/ directory:

  • tps-audit.html - Interactive styled report with glass-morphism dark theme
  • tps-audit.md - Markdown version
  • tps-audit.json - Raw JSON data

list_review_config

Show which AI models are currently configured for each review type.

Configuration

Customizing Models

You can customize which AI models are used for reviews by setting environment variables in your MCP client configuration. Each review type can use different models.

Available Environment Variables:

  • CODE_REVIEW_MODELS - Models for general code reviews
  • FRONTEND_REVIEW_MODELS - Models for frontend reviews
  • BACKEND_REVIEW_MODELS - Models for backend reviews
  • PLAN_REVIEW_MODELS - Models for plan reviews
  • DISCUSSION_MODELS - Models for council discussions
  • TPS_AUDIT_MODELS - Models for TPS codebase audits
  • TEMPERATURE - Control response randomness (0.0-2.0, default: 0.3)
  • MAX_TOKENS - Maximum response tokens (default: 16384)

Format: Model lists are JSON arrays passed as strings, since environment variable values must be strings

Example:

{
  "mcpServers": {
    "code-council": {
      "command": "npx",
      "args": ["-y", "@klitchevo/code-council"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key",
        "CODE_REVIEW_MODELS": "[\"anthropic/claude-sonnet-4.5\", \"openai/gpt-4o\", \"google/gemini-2.0-flash-exp\"]",
        "FRONTEND_REVIEW_MODELS": "[\"anthropic/claude-sonnet-4.5\"]",
        "BACKEND_REVIEW_MODELS": "[\"openai/gpt-4o\", \"anthropic/claude-sonnet-4.5\"]",
        "TEMPERATURE": "0.5",
        "MAX_TOKENS": "32000"
      }
    }
  }
}
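Since environment variable values arrive as strings, a server like this one would typically JSON.parse the model lists. The sketch below shows that assumed behavior; parseModelList and its fallback handling are hypothetical, not the package's API.

```javascript
// Sketch of parsing a JSON-array-valued environment variable with a
// fallback list. The parsing behavior is an assumption about the server;
// the validation and fallback here are illustrative.
function parseModelList(envValue, fallback) {
  if (!envValue) return fallback;
  const parsed = JSON.parse(envValue); // throws on malformed input
  if (!Array.isArray(parsed) || !parsed.every((m) => typeof m === "string")) {
    throw new Error("expected a JSON array of model id strings");
  }
  return parsed;
}
```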

Default Models: If you don't specify models, the server uses these defaults:

  • minimax/minimax-m2.1 - Fast, cost-effective reasoning
  • z-ai/glm-4.7 - Strong multilingual capabilities
  • moonshotai/kimi-k2-thinking - Advanced reasoning with thinking
  • deepseek/deepseek-v3.2 - State-of-the-art open model

Finding Models: Browse all available models at OpenRouter Models. Popular choices include:

  • anthropic/claude-sonnet-4.5 - Latest Sonnet, excellent for code review
  • anthropic/claude-opus-4.5 - Frontier reasoning model for complex tasks
  • openai/gpt-4o - Latest GPT-4 Omni model
  • google/gemini-2.0-flash-exp - Fast and affordable
  • meta-llama/llama-3.3-70b-instruct - Latest open source option

Local Development

  1. Clone the repository:
git clone <your-repo-url>
cd multi-agent
  2. Install dependencies:
npm install
  3. Create a .env file:
cp .env.example .env
# Edit .env and add your OPENROUTER_API_KEY
  4. Build:
npm run build
  5. Run:
npm start
# or use the convenience script:
./run.sh
  6. For development with auto-rebuild:
npm run dev

How It Works

  1. The MCP server exposes tools that Claude (or other MCP clients) can call
  2. When you ask Claude to review code, it calls the appropriate tool
  3. The server sends your code to multiple AI models via OpenRouter in parallel
  4. Results from all models are aggregated and returned
  5. Claude presents you with diverse perspectives from different AI models
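The fan-out in steps 3 and 4 amounts to launching every model call at once and collecting whatever comes back, success or failure. A minimal sketch using Promise.allSettled, where reviewWithModel is a stand-in stub for the real OpenRouter call (names and shapes here are assumptions, not the package's code):

```javascript
// Stub standing in for a real OpenRouter API call, so the sketch is
// self-contained. Models whose id ends in "unavailable" simulate failure.
async function reviewWithModel(model, code) {
  if (model.endsWith("unavailable")) throw new Error(`${model} failed`);
  return { model, review: `review of ${code.length} chars by ${model}` };
}

// Parallel fan-out and aggregation: all calls start immediately, and
// allSettled waits for every outcome, so one failing model does not
// sink the whole review.
async function councilReview(models, code) {
  const settled = await Promise.allSettled(
    models.map((m) => reviewWithModel(m, code))
  );
  return {
    succeeded: settled
      .filter((r) => r.status === "fulfilled")
      .map((r) => r.value),
    failed: settled
      .filter((r) => r.status === "rejected")
      .map((r) => r.reason.message),
  };
}
```

Because allSettled never rejects, the aggregate can always report which models succeeded and which failed, matching the behavior described under Troubleshooting.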

Cost Considerations

  • Each review runs across multiple models simultaneously
  • Costs vary by model - check OpenRouter pricing
  • You can reduce costs by:
    • Using fewer models in your configuration
    • Choosing cheaper models
    • Using specific review_type options instead of full reviews
    • Lowering MAX_TOKENS (default: 16384) for shorter responses

Troubleshooting

"OPENROUTER_API_KEY environment variable is required"

Make sure you've added the API key to the env section of your MCP client configuration, not just in a separate .env file.

Reviews are slow

  • This is expected when using multiple models in parallel
  • Consider using fewer models or faster models
  • Check OpenRouter status at status.openrouter.ai

Models returning errors

  • Check that you have sufficient credits in your OpenRouter account
  • Some models may have rate limits or temporary availability issues
  • The server will show which models succeeded and which failed

Requirements

  • Node.js >= 18.0.0
  • OpenRouter API key
  • MCP-compatible client (Claude Desktop, Cursor, etc.)

License

MIT

Contributing

Contributions welcome! Please open an issue or PR.