Ultimate MCP Server v2.0
The definitive all-in-one Model Context Protocol (MCP) server for AI-assisted coding across 30+ platforms.
Features
Core Capabilities
- 50+ AI Models: Latest models including GPT-4o, Claude 3 Opus, Gemini 2.5, DeepSeek V3, Grok-4, and more
- Multi-Transport Support: STDIO, SSE, HTTP/REST, WebSocket - works everywhere
- RAG System: Document-based AI with vector search and embeddings
- Cognitive Memory: Knowledge graphs with persistent context
- Code Intelligence: AST parsing, symbol extraction, dependency analysis
- Universal Search: Cross-platform file and code search
- Advanced Analytics: Performance monitoring and cost optimization
- Browser Automation: Playwright and Puppeteer integration
- UI Understanding: Visual analysis and design system extraction
Platform Compatibility
Fully Tested: Claude Desktop, Claude Code, Cursor, VS Code (Continue), Cline, Windsurf, Google AI Studio
In Testing: Smithery, Trae, Visual Studio 2022, Zed, BoltAI, Augment Code, Roo Code, and 20+ more
Installation
Quick Start (Recommended)
```
npx ultimate-mcp-server
```

Add to Claude Code

```
claude mcp add npx ultimate-mcp-server
```

Global Installation

```
npm install -g ultimate-mcp-server
ultimate-mcp-server
```

Local Installation

```
npm install ultimate-mcp-server
```

Configuration
Claude Desktop
Add to your Claude Desktop config file:
```json
{
  "mcpServers": {
    "ultimate-mcp": {
      "command": "npx",
      "args": ["ultimate-mcp-server"]
    }
  }
}
```

Cursor/Windsurf
Add to .cursorrules or .windsurfrules:
```json
{
  "mcpServers": {
    "ultimate-mcp": {
      "command": "npx",
      "args": ["ultimate-mcp-server"]
    }
  }
}
```

Environment Variables
Create a .env file with your API keys:
```
# Required (at least one)
OPENAI_API_KEY=your-key
ANTHROPIC_API_KEY=your-key
GOOGLE_API_KEY=your-key

# Optional (for specific features)
PERPLEXITY_API_KEY=your-key   # For research features
XAI_API_KEY=your-key          # For Grok models
MISTRAL_API_KEY=your-key      # For Mistral models
```

Available Tools
Debugging & Analysis
- analyze_error - Analyze errors with AI-powered suggestions
- explain_code - Get detailed code explanations
- suggest_optimizations - Performance and code quality improvements
- debugging_session - Interactive debugging with context
AI Orchestration
- ask - Query any AI model directly
- orchestrate - Coordinate multiple models for complex tasks
- compare_models - Compare responses across different models
Code Generation
- generate_code - Create code with best practices
- analyze_codebase - Large-scale codebase analysis
- find_in_codebase - Pattern search across files
Advanced Features
- rag_search - Semantic search in documents
- build_knowledge_graph - Create knowledge representations
- analyze_ui_screenshot - Understand UI/UX from images
- capture_webpage_screenshot - Browser automation
- extract_webpage_data - Web scraping
Use Cases
1. Code Review & Debugging
```javascript
// Analyze an error
await mcp.use_tool('analyze_error', {
  error: 'TypeError: Cannot read property of undefined',
  code: problemCode,
  language: 'javascript'
});
```

2. UI/UX Analysis
```javascript
// Analyze a design screenshot
await mcp.use_tool('analyze_ui_screenshot', {
  url: 'https://example.com',
  analysis_type: 'comprehensive',
  extract_design_system: true
});
```

3. Large Codebase Analysis
```javascript
// Analyze entire project
await mcp.use_tool('analyze_large_codebase', {
  rootDir: '/path/to/project',
  pattern: '.*\\.ts$',
  query: 'How is authentication implemented?'
});
```

4. Multi-Model Orchestration
```javascript
// Get consensus from multiple models
await mcp.use_tool('orchestrate', {
  prompt: 'Design a scalable microservices architecture',
  strategy: 'consensus',
  models: ['gpt-4o', 'claude-3-opus', 'gemini-2.5-pro']
});
```

Architecture
Smart Lazy Loading
- Tools are registered immediately but loaded on-demand
- Reduces startup time from 5s to <500ms
- Maintains full functionality without "tools not available" issues
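The lazy-loading pattern described above can be sketched as follows. This is an illustrative sketch only; `LazyToolRegistry` and its method names are assumptions for demonstration, not the package's actual internals.

```javascript
// Sketch of register-now, load-on-demand tool management (hypothetical names).
class LazyToolRegistry {
  constructor() {
    this.factories = new Map(); // name -> factory that builds the tool
    this.loaded = new Map();    // name -> built tool, cached after first use
  }

  register(name, factory) {
    // Cheap at startup: only the factory is stored, nothing is loaded yet.
    this.factories.set(name, factory);
  }

  listTools() {
    // Listing works immediately, so clients never see "tools not available".
    return [...this.factories.keys()];
  }

  get(name) {
    if (!this.loaded.has(name)) {
      const factory = this.factories.get(name);
      if (!factory) throw new Error(`Unknown tool: ${name}`);
      this.loaded.set(name, factory()); // heavy work happens on first call only
    }
    return this.loaded.get(name);
  }
}
```

Because the registry answers `listTools()` from names alone, startup cost stays proportional to the number of registrations rather than the cost of loading every implementation.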
Performance Optimization
- Intelligent model routing based on task complexity
- Automatic cost optimization with quality thresholds
- Built-in caching and rate limiting
- Token usage tracking and optimization
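A minimal sketch of complexity-based model routing in the spirit of the bullets above. The thresholds, the length-based token estimate, and the model choices are illustrative assumptions, not the server's actual routing policy.

```javascript
// Route short prompts to a cheap model, longer ones to larger models
// (all thresholds and model names here are illustrative assumptions).
function routeModel(prompt) {
  const approxTokens = Math.ceil(prompt.length / 4); // rough chars-per-token estimate
  if (approxTokens < 200) return 'gemini-2.5-flash'; // cheap and fast
  if (approxTokens < 2000) return 'gpt-4o';          // mid-size tasks
  return 'claude-3-opus';                            // large or complex tasks
}
```

A real router would also weigh task type, quality thresholds, and per-model pricing rather than prompt length alone.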
Extensibility
- Plugin architecture for custom tools
- Support for custom embedding providers
- Configurable vector stores (Pinecone, Weaviate, ChromaDB)
- Custom model integrations
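To illustrate what a custom-tool plugin might look like, here is a hypothetical shape: the object fields and the `createPlugin` helper are assumptions for demonstration; consult the package's documentation for the real extension API.

```javascript
// Hypothetical custom-tool plugin: a name, a description, and an async run().
function createPlugin() {
  return {
    name: 'word-count',
    description: 'Count words in a string',
    async run({ text }) {
      // Trim, split on whitespace, and drop empty fragments.
      const words = text.trim().split(/\s+/).filter(Boolean).length;
      return { words };
    },
  };
}
```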
Performance Metrics
- Startup Time: <500ms (with lazy loading)
- Response Time: <1s for most operations
- Memory Usage: <512MB under normal load
- Bundle Size: ~45MB (optimized)
- Context Window: Up to 2M tokens (model dependent)
Security
- No data persistence by default
- API keys stored locally only
- Sandboxed code execution
- Rate limiting and abuse prevention
- Audit logging for compliance
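Rate limiting as mentioned above is commonly implemented with a token bucket; the sketch below shows the generic technique, not the package's actual implementation.

```javascript
// Generic token-bucket rate limiter: refill proportionally to elapsed time,
// capped at capacity; each request consumes one token.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.last = Date.now();
  }

  tryRemove(now = Date.now()) {
    const elapsedSeconds = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request allowed
    }
    return false;   // over the limit, reject or queue
  }
}
```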
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
License
MIT License - see LICENSE for details.
Acknowledgments
Ultimate MCP incorporates the best features from:
- agentset-ai - RAG capabilities
- contentful-mcp - Content management
- cognee-mcp - Cognitive memory
- code-context-provider - Code analysis
- code-assistant - Autonomous exploration
- mcp-enhance-prompt - Prompt engineering
- mcp-everything-search - Universal search
- consult7 - Large context analysis
What's New in v2.0
- 50+ Latest AI Models: Including Grok-4, DeepSeek V3, Gemini 2.5 Flash
- Browser Automation: Playwright and Puppeteer integration
- UI/UX Understanding: Analyze designs from screenshots or URLs
- Smart Lazy Loading: 10x faster startup with full functionality
- Cost Optimization: Automatic model selection based on budget
- Performance Monitoring: Real-time metrics and insights
- Large Context Analysis: Handle massive codebases (Consult7-style)
- Multi-Transport: Works with every MCP-compatible platform
Built with ❤️ by the Ultimate MCP Team