Package Exports
- openai-assistants-mcp
- openai-assistants-mcp/dist/worker.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (openai-assistants-mcp) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
Jezweb MCP Core v3.0.1 - Adaptable Multi-Provider Architecture
A production-ready Model Context Protocol (MCP) server featuring an adaptable, provider-agnostic architecture that supports multiple LLM providers through a unified interface. Built with a "Shared Core with Thin Adapters" architecture for maximum flexibility and simplicity.
Universal MCP Server - Three Ways to Connect
Choose the deployment option that best fits your needs:
Option 1: Cloudflare Workers (Production Ready - v3.0 Unified Architecture)
Production URL: https://openai-assistants-mcp.jezweb.ai/mcp/{api-key}
- Adaptable Architecture - Support for multiple LLM providers (OpenAI, Claude, etc.)
- Simple Configuration - Environment-first configuration, no complex setup
- Lightweight & Fast - Sub-100ms response times with global edge distribution
- Zero Dependencies - No local setup required
- LIVE & OPERATIONAL - v3.0 unified architecture deployed and tested
Option 2: NPM Package (Local Stdio - v3.0 Deployment Adapter)
Package: jezweb-mcp-core@3.0.1
- Provider-Agnostic - Unified core with deployment-specific adapter
- Simple Configuration - Environment variables and sensible defaults
- Direct stdio transport - No proxy required
- Local execution - Full control over environment
- 100% Backward Compatible - Seamless upgrade from OpenAI-specific versions
Option 3: Local Development Server
Local Build: Clone and run locally
- Full source code access
- Customizable implementation
- Development and testing
- Private deployment options
Key Features - Jezweb MCP Core v3.0
Adaptable Multi-Provider Architecture
- Provider-Agnostic Design - Support for OpenAI, Anthropic Claude, Google, and more
- Extensible Provider System - Easy to add new LLM providers
- Unified Interface - Same tools and resources across all providers
- Smart Provider Selection - Automatic fallback and load balancing
- Simple Configuration - Environment-first setup with sensible defaults
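The fallback half of this selection strategy can be sketched as follows. This is an illustrative assumption, not the package's actual API: the `Provider` shape and the `isAvailable()` check are invented here to make the idea concrete.

```typescript
// Hypothetical sketch of automatic provider fallback. The Provider shape
// and isAvailable() check are assumptions for illustration only.
interface Provider {
  name: string;
  isAvailable(): boolean;
}

function selectWithFallback(providers: Provider[], preferred: string): Provider {
  // Try the preferred provider first, then fall back in registration order.
  const ordered = [
    ...providers.filter((p) => p.name === preferred),
    ...providers.filter((p) => p.name !== preferred),
  ];
  const chosen = ordered.find((p) => p.isAvailable());
  if (!chosen) throw new Error('No LLM provider is available');
  return chosen;
}
```

With this shape, a request preferring OpenAI silently falls back to Anthropic when the OpenAI key is missing or the provider reports itself unavailable.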
Core Capabilities
- Complete Assistant API Coverage - All 22 tools for full assistant, thread, message, and run management
- Universal Deployment - Three deployment options with identical functionality
- Production Ready - Deployed on Cloudflare Workers with modern architecture
- Lightweight - Minimal dependencies and fast execution
- Type Safe - Full TypeScript implementation with comprehensive type definitions
Enhanced User Experience
- Enhanced Tool Descriptions - Workflow-oriented descriptions with practical examples
- MCP Resources - 9 comprehensive resources including templates, workflows, and documentation
- Improved Validation - Detailed error messages with examples and suggestions
- Tool Annotations - Proper MCP annotations for better client understanding
- Assistant Templates - Pre-configured templates for common use cases
Technical Excellence
- Secure Authentication - URL-based API key authentication (Workers) or environment variables (NPM)
- Advanced Error Handling - Context-aware error messages with actionable guidance
- CORS Support - Ready for web-based MCP clients
- Real-time Operations - Support for streaming and real-time assistant interactions
- Comprehensive Testing - Built-in test suites for both deployment options
Architecture Overview
Provider System
Jezweb MCP Core uses a sophisticated provider registry system that abstracts away provider-specific details:
// Multiple providers supported
const providers = {
  openai: { /* OpenAI configuration */ },
  anthropic: { /* Claude configuration */ },
  google: { /* Gemini configuration */ }
};

// Automatic provider selection
const provider = registry.selectProvider({
  strategy: 'capability-based',
  requiredCapabilities: ['assistants', 'threads']
});
Simple Configuration
Environment-first configuration with sensible defaults:
# Cloudflare Workers - via Wrangler secrets
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
# NPM Package - via environment variables
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
Unified Architecture
shared/                      # Unified shared core (single source of truth)
├── core/                    # Core business logic and handlers
├── services/                # Provider registry and LLM service abstraction
│   ├── llm-service.ts       # Generic LLM provider interface
│   ├── provider-registry.ts # Provider management and selection
│   └── providers/           # Individual provider implementations
└── types/                   # Unified type definitions
src/                         # Cloudflare Workers deployment
├── worker.ts                # Cloudflare Workers entry point
└── mcp-handler.ts           # Worker-specific MCP handler
npm-package/                 # NPM package deployment
├── src/                     # NPM-specific implementation
└── universal-mcp-server.cjs # NPM package entry point
Quick Start - Choose Your Installation Method
Prerequisites
- API key for your chosen LLM provider (OpenAI, Anthropic, etc.)
- Node.js 18+ (for NPM package or local development)
- MCP client (Claude Desktop, Roo, or other MCP-compatible client)
Getting Started with LLM Providers
OpenAI Setup
- Visit the OpenAI API Keys page
- Create a new API key
- Monitor usage at OpenAI Dashboard
Anthropic Claude Setup
- Visit the Anthropic Console
- Create an API key
- Review Claude API documentation
Option 1: NPM Package (Recommended for Most Users)
Installation
# Option A: Use directly with npx (recommended for latest fixes)
npx jezweb-mcp-core@latest
# Option B: Install globally
npm install -g jezweb-mcp-core@latest
# Option C: Install locally in your project
npm install jezweb-mcp-core@latest
Claude Desktop Configuration
Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "jezweb-mcp-core": {
      "command": "npx",
      "args": ["jezweb-mcp-core@latest"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
      }
    }
  }
}
Roo Configuration
Add to your Roo configuration file:
{
  "mcpServers": {
    "jezweb-mcp-core": {
      "command": "npx",
      "args": ["jezweb-mcp-core@latest"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
      },
      "alwaysAllow": [
        "assistant-create",
        "assistant-list",
        "assistant-get",
        "assistant-update",
        "assistant-delete",
        "thread-create",
        "thread-get",
        "thread-update",
        "thread-delete",
        "message-create",
        "message-list",
        "message-get",
        "message-update",
        "message-delete",
        "run-create",
        "run-list",
        "run-get",
        "run-update",
        "run-cancel",
        "run-submit-tool-outputs",
        "run-step-list",
        "run-step-get"
      ]
    }
  }
}
Option 2: Cloudflare Workers (Zero Setup)
Claude Desktop Configuration
- Install the MCP proxy:
npm install -g mcp-proxy
- Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "jezweb-mcp-core": {
      "command": "npx",
      "args": [
        "mcp-proxy",
        "https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_OPENAI_API_KEY_HERE"
      ]
    }
  }
}
Option 3: Local Development Server
Setup
- Clone the repository:
git clone https://github.com/jezweb/openai-assistants-mcp.git
cd openai-assistants-mcp
- Install dependencies:
npm install
- Set up environment variables:
# Add your API keys to wrangler.toml or use wrangler secrets
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
- Start development server:
npm run dev
Available Tools
Assistant Management
- assistant-create - Create a new assistant with instructions and tools
- assistant-list - List all assistants with pagination and sorting
- assistant-get - Get detailed information about a specific assistant
- assistant-update - Update assistant instructions, tools, or metadata
- assistant-delete - Delete an assistant permanently
Thread Management
- thread-create - Create a new conversation thread
- thread-get - Get thread details and metadata
- thread-update - Update thread metadata
- thread-delete - Delete a thread permanently
Message Management
- message-create - Add a message to a thread
- message-list - List messages in a thread with pagination
- message-get - Get details of a specific message
- message-update - Update message metadata
- message-delete - Delete a message from a thread
Run Management
- run-create - Start a new assistant run on a thread
- run-list - List runs for a thread with filtering
- run-get - Get run details and status
- run-update - Update run metadata
- run-cancel - Cancel a running assistant execution
- run-submit-tool-outputs - Submit tool call results to continue a run
Advanced Operations
- run-step-list - List steps in a run execution
- run-step-get - Get details of a specific run step
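Under the hood, an MCP client invokes any of these tools through a standard JSON-RPC `tools/call` request. A minimal sketch of that shape follows; the argument names (`name`, `instructions`) are illustrative assumptions, not the server's exact input schema.

```typescript
// Illustrative MCP tools/call request for the assistant-create tool.
// The "arguments" field names are assumptions shown only to make the
// JSON-RPC envelope concrete; consult the tool's real schema before use.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'assistant-create',
    arguments: {
      name: 'Code Helper',
      instructions: 'Help with programming tasks.',
    },
  },
};

// Serialized, this is exactly what goes over stdio or HTTP to the server.
console.log(JSON.stringify(request));
```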
MCP Resources Available
This server provides 9 comprehensive MCP resources to help you get started quickly:
Assistant Templates (4 resources)
- assistant://templates/coding-assistant - Pre-configured coding assistant
- assistant://templates/writing-assistant - Professional writing assistant
- assistant://templates/data-analyst - Data analysis assistant
- assistant://templates/customer-support - Customer support assistant
Workflow Examples (2 resources)
- examples://workflows/create-and-run - Complete workflow examples
- examples://workflows/batch-processing - Efficient batch processing
Documentation (3 resources)
- docs://jezweb-mcp-core-api - Comprehensive API reference
- docs://error-handling - Common errors and solutions
- docs://best-practices - Guidelines for optimal usage
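Clients fetch any of these with the MCP-standard `resources/read` method. Only the URI below comes from the list above; the rest is the generic JSON-RPC envelope defined by the MCP specification.

```typescript
// Reading one of the assistant templates above via MCP resources/read.
const readRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'resources/read',
  params: { uri: 'assistant://templates/coding-assistant' },
};

console.log(JSON.stringify(readRequest));
```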
Enhanced Usage Examples
Multi-Provider Usage
# Create an assistant (automatically selects best available provider)
"Create an assistant named 'Code Helper' with instructions to help with programming tasks"
# Use specific provider
"Create an assistant using OpenAI's GPT-4 model"
"Create an assistant using Anthropic's Claude model"
Assistant Management
# List all assistants
"List my assistants"
# Get assistant details
"Get details of assistant asst_abc123"
# Update an assistant
"Update assistant asst_abc123 to include the code_interpreter tool"
Thread and Message Management
# Create a new thread
"Create a new conversation thread"
# Add a message to a thread
"Add the message 'Hello, how can you help me?' to thread thread_abc123"
# List messages in a thread
"List all messages in thread thread_abc123"
Run Management
# Start an assistant run
"Start a run with assistant asst_abc123 on thread thread_abc123"
# Get run status
"Get status of run run_abc123"
# Cancel a running execution
"Cancel run run_abc123"
Deployment Option Parity
All deployment options provide identical functionality with all 22 tools working seamlessly:
Functional Parity
- Identical Tools: All 22 tools work exactly the same way
- Same API Surface: Identical tool names, parameters, and responses
- Consistent Behavior: Error handling, validation, and responses are uniform
- Multi-Provider Support: All deployment options support multiple LLM providers
Transport Differences
| Feature | Cloudflare Workers | NPM Package |
|---|---|---|
| Transport | HTTP/SSE via mcp-proxy | Direct stdio |
| Setup | Zero setup required | Node.js 18+ required |
| Performance | Sub-100ms global edge | Direct process communication |
| Dependencies | No local dependencies | Local Node.js execution |
| API Key | URL-based authentication | Environment variable |
| Scaling | Automatic global scaling | Single process |
| Offline | Requires internet | Works offline (after setup) |
Architecture - Provider-Agnostic Design
Core Design Principles
- Adaptable - Support for multiple LLM providers through unified interface
- Simple - Environment-first configuration with sensible defaults
- Lightweight - Minimal dependencies and fast execution
- Extensible - Easy to add new providers and capabilities
- Reliable - Comprehensive error handling and fallback mechanisms
Provider System Architecture
// Provider Registry manages multiple LLM providers
interface LLMProvider {
  createAssistant(request: GenericCreateAssistantRequest): Promise<GenericAssistant>;
  listAssistants(request?: GenericListRequest): Promise<GenericListResponse<GenericAssistant>>;
  // ... all assistant API methods
}

// Providers implement the same interface
class OpenAIProvider implements LLMProvider { /* ... */ }
class AnthropicProvider implements LLMProvider { /* ... */ }
class GoogleProvider implements LLMProvider { /* ... */ }
Configuration System
Simple, environment-first configuration using standard environment variables:
# Required - at least one provider API key
export OPENAI_API_KEY="your-openai-key-here"
export ANTHROPIC_API_KEY="your-anthropic-key-here"
# Optional configuration
export JEZWEB_LOG_LEVEL="info"
export JEZWEB_DEFAULT_PROVIDER="openai"
The system automatically detects the deployment environment and applies appropriate defaults.
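A minimal sketch of what such environment detection might look like; the package's real detection logic may differ, and the transport defaults here are inferred from the deployment table earlier in this README.

```typescript
// Hypothetical sketch of deployment-environment detection. Node.js always
// defines process.versions.node; Cloudflare Workers do not expose it.
type Deployment = 'node' | 'cloudflare-workers';

function detectDeployment(): Deployment {
  const proc = (globalThis as any).process;
  return proc?.versions?.node ? 'node' : 'cloudflare-workers';
}

function defaultsFor(env: Deployment) {
  // Stdio transport for the NPM package, HTTP for the Workers deployment.
  return env === 'node'
    ? { transport: 'stdio' as const }
    : { transport: 'http' as const };
}

console.log(defaultsFor(detectDeployment()).transport);
```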
Testing Infrastructure
Comprehensive Test Suites
Both deployment options include robust testing:
NPM Package Testing
cd npm-package
npm test
Cloudflare Workers Testing
node test-validation-only.js
Manual Testing
Test the Cloudflare Workers deployment:
# List available tools
curl -X POST "https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
Test the NPM Package:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | npx jezweb-mcp-core@latest
Development
Local Development
- Clone and install:
git clone https://github.com/jezweb/openai-assistants-mcp.git
cd openai-assistants-mcp
npm install
- Set up environment:
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
- Start development:
npm run dev
Adding New Providers
- Implement the LLMProvider interface
- Create a provider factory
- Register with the provider registry
- Add configuration schema
Example:
class MyCustomProvider implements LLMProvider {
  // Implement all required methods
}

const factory: LLMProviderFactory = {
  create: (config) => new MyCustomProvider(config),
  getMetadata: () => ({ name: 'my-provider', ... }),
  validateConfig: (config) => true
};

registry.registerFactory(factory);
Enhanced Validation & Error Handling
Intelligent Error Messages
- Format Examples: Error messages include correct format examples
- Documentation References: Errors link to relevant documentation
- Suggestion Guidance: Invalid values show supported alternatives
- Provider Context: Errors include provider-specific guidance
Validation Features
- ID Format Validation: Strict format checking with helpful messages
- Provider Validation: Validates provider availability and capabilities
- Configuration Validation: Comprehensive config validation
- Parameter Validation: Type and range checking with examples
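As an illustration of the example-bearing ID validation described above, here is a sketch assuming OpenAI-style ID prefixes (`asst_`, `thread_`, `run_`); the actual validation rules and message wording in the package may differ.

```typescript
// Hypothetical sketch of ID validation that returns a helpful message with
// a correct-format example, mirroring the "Format Examples" feature above.
// The regexes assume OpenAI-style prefixes and are illustrative only.
const ID_FORMATS = {
  assistant: { pattern: /^asst_[A-Za-z0-9]+$/, example: 'asst_abc123' },
  thread: { pattern: /^thread_[A-Za-z0-9]+$/, example: 'thread_abc123' },
  run: { pattern: /^run_[A-Za-z0-9]+$/, example: 'run_abc123' },
} as const;

function validateId(kind: keyof typeof ID_FORMATS, id: string): string | null {
  const { pattern, example } = ID_FORMATS[kind];
  if (pattern.test(id)) return null; // valid: no error message
  // Invalid: return an actionable message that shows a correct example.
  return `Invalid ${kind} ID "${id}". Expected something like "${example}".`;
}
```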
Security
- API Key Protection - Secure handling of multiple provider API keys
- Enhanced Input Validation - Comprehensive validation with helpful feedback
- Provider Isolation - Each provider operates in isolation
- CORS Security - Proper CORS headers for web clients
- Rate Limiting - Inherits provider-specific rate limits
Performance
- Global Edge - Deployed on Cloudflare's global network
- Sub-100ms - Typical response times under 100ms
- Provider Selection - Smart provider selection for optimal performance
- Efficient - Minimal memory footprint and fast execution
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
License
MIT License - see LICENSE for details.
Migration Guide
From OpenAI Assistants MCP v2.x
The migration is seamless - just update your package name:
# Old
npx openai-assistants-mcp@latest
# New
npx jezweb-mcp-core@latest
All existing tools and functionality remain identical. The new version adds multi-provider support while maintaining 100% backward compatibility.
Configuration Migration
Old environment variables continue to work:
# Still supported
OPENAI_API_KEY=your-key-here
# New multi-provider support
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
Ready to get started? Choose your preferred installation method from the Quick Start guide above and begin building with multiple LLM providers through a unified interface!