
Jezweb MCP Core v3.0 - Adaptable, provider-agnostic MCP server with unified 'Shared Core with Thin Adapters' architecture. Features multi-provider support (OpenAI, Anthropic, Google), simple environment-first configuration, comprehensive testing infrastructure, and seamless deployment across Cloudflare Workers and NPM with 100% backward compatibility.

Package Exports

  • openai-assistants-mcp
  • openai-assistants-mcp/dist/worker.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM. If a package subpath is missing, consider posting an issue on the original package (openai-assistants-mcp) requesting an "exports" field; if that is not possible, create a JSPM override to customize the exports field for this package.


Jezweb MCP Core v3.0.1 - Adaptable Multi-Provider Architecture

A production-ready Model Context Protocol (MCP) server featuring an adaptable, provider-agnostic architecture that supports multiple LLM providers through a unified interface. Built with a "Shared Core with Thin Adapters" architecture for maximum flexibility and simplicity.

🌟 Universal MCP Server - Three Ways to Connect

Choose the deployment option that best fits your needs:

🚀 Option 1: Cloudflare Workers (Production Ready - v3.0 Unified Architecture)

Production URL: https://openai-assistants-mcp.jezweb.ai/mcp/{api-key}

  • ✅ Adaptable Architecture - Support for multiple LLM providers (OpenAI, Claude, etc.)
  • ✅ Simple Configuration - Environment-first configuration, no complex setup
  • ✅ Lightweight & Fast - Sub-100ms response times with global edge distribution
  • ✅ Zero Dependencies - No local setup required
  • ✅ LIVE & OPERATIONAL - v3.0 unified architecture deployed and tested

📦 Option 2: NPM Package (Local Stdio - v3.0 Deployment Adapter)

Package: jezweb-mcp-core@3.0.1

  • ✅ Provider-Agnostic - Unified core with deployment-specific adapter
  • ✅ Simple Configuration - Environment variables and sensible defaults
  • ✅ Direct stdio transport - No proxy required
  • ✅ Local execution - Full control over environment
  • ✅ 100% Backward Compatible - Seamless upgrade from OpenAI-specific versions

🔧 Option 3: Local Development Server

Local Build: Clone and run locally

  • ✅ Full source code access
  • ✅ Customizable implementation
  • ✅ Development and testing
  • ✅ Private deployment options

✨ Key Features - Jezweb MCP Core v3.0

๐Ÿ—๏ธ Adaptable Multi-Provider Architecture

  • Provider-Agnostic Design - Support for OpenAI, Anthropic Claude, Google, and more
  • Extensible Provider System - Easy to add new LLM providers
  • Unified Interface - Same tools and resources across all providers
  • Smart Provider Selection - Automatic fallback and load balancing
  • Simple Configuration - Environment-first setup with sensible defaults

🚀 Core Capabilities

  • Complete Assistant API Coverage - All 22 tools for full assistant, thread, message, and run management
  • Universal Deployment - Three deployment options with identical functionality
  • Production Ready - Deployed on Cloudflare Workers with modern architecture
  • Lightweight - Minimal dependencies and fast execution
  • Type Safe - Full TypeScript implementation with comprehensive type definitions

🎯 Enhanced User Experience

  • Enhanced Tool Descriptions - Workflow-oriented descriptions with practical examples
  • MCP Resources - 9 comprehensive resources including templates, workflows, and documentation
  • Improved Validation - Detailed error messages with examples and suggestions
  • Tool Annotations - Proper MCP annotations for better client understanding
  • Assistant Templates - Pre-configured templates for common use cases

🔧 Technical Excellence

  • Secure Authentication - URL-based API key authentication (Workers) or environment variables (NPM)
  • Advanced Error Handling - Context-aware error messages with actionable guidance
  • CORS Support - Ready for web-based MCP clients
  • Real-time Operations - Support for streaming and real-time assistant interactions
  • Comprehensive Testing - Built-in test suites for both deployment options

📊 Architecture Overview

Provider System

Jezweb MCP Core uses a sophisticated provider registry system that abstracts away provider-specific details:

// Multiple providers supported
const providers = {
  openai: { /* OpenAI configuration */ },
  anthropic: { /* Claude configuration */ },
  google: { /* Gemini configuration */ }
};

// Automatic provider selection
const provider = registry.selectProvider({
  strategy: 'capability-based',
  requiredCapabilities: ['assistants', 'threads']
});
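The capability-based selection shown above could be implemented along these lines. This is a minimal sketch under stated assumptions: `ProviderEntry`, the `healthy` flag, and fall-through in registration order are illustrative choices, not the package's actual registry internals.

```typescript
// Hypothetical provider entry; not the package's real type.
interface ProviderEntry {
  name: string;
  capabilities: string[];
  healthy: boolean;
}

// Pick the first healthy provider that supports every required capability,
// falling back through providers in registration order.
function selectProvider(
  providers: ProviderEntry[],
  required: string[],
): ProviderEntry | undefined {
  return providers.find(
    (p) => p.healthy && required.every((c) => p.capabilities.includes(c)),
  );
}

const registryEntries: ProviderEntry[] = [
  { name: "openai", capabilities: ["assistants", "threads", "runs"], healthy: true },
  { name: "anthropic", capabilities: ["threads"], healthy: true },
];

const chosen = selectProvider(registryEntries, ["assistants", "threads"]);
// chosen?.name === "openai"
```

A real registry would also track health dynamically, but the filter-then-fallback shape is the core of any capability-based strategy.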

Simple Configuration

Environment-first configuration with sensible defaults:

# Cloudflare Workers - via Wrangler secrets
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY

# NPM Package - via environment variables
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"

Unified Architecture

shared/                    # Unified shared core (single source of truth)
├── core/                  # Core business logic and handlers
├── services/              # Provider registry and LLM service abstraction
│   ├── llm-service.ts     # Generic LLM provider interface
│   ├── provider-registry.ts # Provider management and selection
│   └── providers/         # Individual provider implementations
└── types/                 # Unified type definitions

src/                       # Cloudflare Workers deployment
├── worker.ts              # Cloudflare Workers entry point
└── mcp-handler.ts         # Worker-specific MCP handler

npm-package/               # NPM package deployment
├── src/                   # NPM-specific implementation
└── universal-mcp-server.cjs # NPM package entry point

🚀 Quick Start - Choose Your Installation Method

Prerequisites

  • API key for your chosen LLM provider (OpenAI, Anthropic, etc.)
  • Node.js 18+ (for NPM package or local development)
  • MCP client (Claude Desktop, Roo, or other MCP-compatible client)

🔑 Getting Started with LLM Providers

OpenAI Setup

Anthropic Claude Setup


Installation

# Option A: Use directly with npx (recommended for latest fixes)
npx jezweb-mcp-core@latest

# Option B: Install globally
npm install -g jezweb-mcp-core@latest

# Option C: Install locally in your project
npm install jezweb-mcp-core@latest

Claude Desktop Configuration

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "jezweb-mcp-core": {
      "command": "npx",
      "args": ["jezweb-mcp-core@latest"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
      }
    }
  }
}

Roo Configuration

Add to your Roo configuration file:

{
  "mcpServers": {
    "jezweb-mcp-core": {
      "command": "npx",
      "args": ["jezweb-mcp-core@latest"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
      },
      "alwaysAllow": [
        "assistant-create",
        "assistant-list",
        "assistant-get",
        "assistant-update",
        "assistant-delete",
        "thread-create",
        "thread-get",
        "thread-update",
        "thread-delete",
        "message-create",
        "message-list",
        "message-get",
        "message-update",
        "message-delete",
        "run-create",
        "run-list",
        "run-get",
        "run-update",
        "run-cancel",
        "run-submit-tool-outputs",
        "run-step-list",
        "run-step-get"
      ]
    }
  }
}

โ˜๏ธ Option 2: Cloudflare Workers (Zero Setup)

Claude Desktop Configuration

  1. Install the MCP proxy:
npm install -g mcp-proxy
  2. Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "jezweb-mcp-core": {
      "command": "npx",
      "args": [
        "mcp-proxy",
        "https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_OPENAI_API_KEY_HERE"
      ]
    }
  }
}

🔧 Option 3: Local Development Server

Setup

  1. Clone the repository:
git clone https://github.com/jezweb/openai-assistants-mcp.git
cd openai-assistants-mcp
  2. Install dependencies:
npm install
  3. Set up environment variables:
# Add your API keys to wrangler.toml or use wrangler secrets
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
  4. Start development server:
npm run dev

๐Ÿ› ๏ธ Available Tools

Assistant Management

  1. assistant-create - Create a new assistant with instructions and tools
  2. assistant-list - List all assistants with pagination and sorting
  3. assistant-get - Get detailed information about a specific assistant
  4. assistant-update - Update assistant instructions, tools, or metadata
  5. assistant-delete - Delete an assistant permanently

Thread Management

  1. thread-create - Create a new conversation thread
  2. thread-get - Get thread details and metadata
  3. thread-update - Update thread metadata
  4. thread-delete - Delete a thread permanently

Message Management

  1. message-create - Add a message to a thread
  2. message-list - List messages in a thread with pagination
  3. message-get - Get details of a specific message
  4. message-update - Update message metadata
  5. message-delete - Delete a message from a thread

Run Management

  1. run-create - Start a new assistant run on a thread
  2. run-list - List runs for a thread with filtering
  3. run-get - Get run details and status
  4. run-update - Update run metadata
  5. run-cancel - Cancel a running assistant execution
  6. run-submit-tool-outputs - Submit tool call results to continue a run

Advanced Operations

  1. run-step-list - List steps in a run execution
  2. run-step-get - Get details of a specific run step
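All of the tools above are invoked through the standard MCP tools/call method. The sketch below shows the JSON-RPC 2.0 envelope a client sends over the stdio transport; the argument names passed to assistant-create are assumptions for illustration — check the actual schema via tools/list.

```typescript
// JSON-RPC 2.0 envelope for an MCP tool invocation.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>,
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Hypothetical arguments; the real schema comes from tools/list.
const req = buildToolCall(1, "assistant-create", {
  model: "gpt-4",
  name: "Code Helper",
  instructions: "Help with programming tasks.",
});

// On the stdio transport this is written as a single JSON line on stdin.
const wire = JSON.stringify(req);
```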

📚 MCP Resources Available

This server provides 9 comprehensive MCP resources to help you get started quickly:

🎯 Assistant Templates (4 resources)

  • assistant://templates/coding-assistant - Pre-configured coding assistant
  • assistant://templates/writing-assistant - Professional writing assistant
  • assistant://templates/data-analyst - Data analysis assistant
  • assistant://templates/customer-support - Customer support assistant

🔄 Workflow Examples (2 resources)

  • examples://workflows/create-and-run - Complete workflow examples
  • examples://workflows/batch-processing - Efficient batch processing

📖 Documentation (3 resources)

  • docs://jezweb-mcp-core-api - Comprehensive API reference
  • docs://error-handling - Common errors and solutions
  • docs://best-practices - Guidelines for optimal usage
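Resources such as the template URIs above are fetched with the standard MCP resources/read method. A minimal sketch of that request (the response shape is omitted):

```typescript
// resources/read request for one of the template URIs listed above.
interface ResourceReadRequest {
  jsonrpc: "2.0";
  id: number;
  method: "resources/read";
  params: { uri: string };
}

const readTemplate: ResourceReadRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "resources/read",
  params: { uri: "assistant://templates/coding-assistant" },
};

// Serialized as one JSON line for the stdio transport.
const resourceWire = JSON.stringify(readTemplate);
```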

📖 Enhanced Usage Examples

Multi-Provider Usage

# Create an assistant (automatically selects best available provider)
"Create an assistant named 'Code Helper' with instructions to help with programming tasks"

# Use specific provider
"Create an assistant using OpenAI's GPT-4 model"
"Create an assistant using Anthropic's Claude model"

Assistant Management

# List all assistants
"List my assistants"

# Get assistant details
"Get details of assistant asst_abc123"

# Update an assistant
"Update assistant asst_abc123 to include the code_interpreter tool"

Thread and Message Management

# Create a new thread
"Create a new conversation thread"

# Add a message to a thread
"Add the message 'Hello, how can you help me?' to thread thread_abc123"

# List messages in a thread
"List all messages in thread thread_abc123"

Run Management

# Start an assistant run
"Start a run with assistant asst_abc123 on thread thread_abc123"

# Get run status
"Get status of run run_abc123"

# Cancel a running execution
"Cancel run run_abc123"

🔄 Deployment Option Parity

All deployment options provide identical functionality with all 22 tools working seamlessly:

✅ Functional Parity

  • Identical Tools: All 22 tools work exactly the same way
  • Same API Surface: Identical tool names, parameters, and responses
  • Consistent Behavior: Error handling, validation, and responses are uniform
  • Multi-Provider Support: All deployment options support multiple LLM providers

🚀 Transport Differences

| Feature      | Cloudflare Workers       | NPM Package                  |
|--------------|--------------------------|------------------------------|
| Transport    | HTTP/SSE via mcp-proxy   | Direct stdio                 |
| Setup        | Zero setup required      | Node.js 18+ required         |
| Performance  | Sub-100ms global edge    | Direct process communication |
| Dependencies | No local dependencies    | Local Node.js execution      |
| API Key      | URL-based authentication | Environment variable         |
| Scaling      | Automatic global scaling | Single process               |
| Offline      | Requires internet        | Works offline (after setup)  |

๐Ÿ—๏ธ Architecture - Provider-Agnostic Design

Core Design Principles

  • Adaptable - Support for multiple LLM providers through unified interface
  • Simple - Environment-first configuration with sensible defaults
  • Lightweight - Minimal dependencies and fast execution
  • Extensible - Easy to add new providers and capabilities
  • Reliable - Comprehensive error handling and fallback mechanisms

Provider System Architecture

// Provider Registry manages multiple LLM providers
interface LLMProvider {
  createAssistant(request: GenericCreateAssistantRequest): Promise<GenericAssistant>;
  listAssistants(request?: GenericListRequest): Promise<GenericListResponse<GenericAssistant>>;
  // ... all assistant API methods
}

// Providers implement the same interface
class OpenAIProvider implements LLMProvider { /* ... */ }
class AnthropicProvider implements LLMProvider { /* ... */ }
class GoogleProvider implements LLMProvider { /* ... */ }

Configuration System

Simple, environment-first configuration using standard environment variables:

# Required - at least one provider API key
export OPENAI_API_KEY="your-openai-key-here"
export ANTHROPIC_API_KEY="your-anthropic-key-here"

# Optional configuration
export JEZWEB_LOG_LEVEL="info"
export JEZWEB_DEFAULT_PROVIDER="openai"

The system automatically detects the deployment environment and applies appropriate defaults.
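As a sketch, the defaulting logic might look like the following; the resolution order (explicit setting wins, otherwise the first configured provider) is an assumption for illustration, not necessarily what jezweb-mcp-core does internally.

```typescript
// Hypothetical environment-first configuration resolution.
interface CoreConfig {
  defaultProvider: string;
  logLevel: string;
}

function resolveConfig(env: Record<string, string | undefined>): CoreConfig {
  const hasOpenAI = Boolean(env.OPENAI_API_KEY);
  const hasAnthropic = Boolean(env.ANTHROPIC_API_KEY);
  // At least one provider key must be present.
  if (!hasOpenAI && !hasAnthropic) {
    throw new Error("At least one provider API key is required");
  }
  return {
    // An explicit JEZWEB_DEFAULT_PROVIDER wins; otherwise prefer the
    // first provider that actually has a key configured.
    defaultProvider:
      env.JEZWEB_DEFAULT_PROVIDER ?? (hasOpenAI ? "openai" : "anthropic"),
    logLevel: env.JEZWEB_LOG_LEVEL ?? "info",
  };
}
```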


🧪 Testing Infrastructure

Comprehensive Test Suites

Both deployment options include robust testing:

NPM Package Testing

cd npm-package
npm test

Cloudflare Workers Testing

node test-validation-only.js

Manual Testing

Test the Cloudflare Workers deployment:

# List available tools
curl -X POST "https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

Test the NPM Package:

echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | npx jezweb-mcp-core@latest

🔧 Development

Local Development

  1. Clone and install:
git clone https://github.com/jezweb/openai-assistants-mcp.git
cd openai-assistants-mcp
npm install
  2. Set up environment:
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
  3. Start development:
npm run dev

Adding New Providers

  1. Implement the LLMProvider interface
  2. Create a provider factory
  3. Register with the provider registry
  4. Add configuration schema

Example:

class MyCustomProvider implements LLMProvider {
  // Implement all required methods
}

const factory: LLMProviderFactory = {
  create: (config) => new MyCustomProvider(config),
  getMetadata: () => ({ name: 'my-provider' /* ... */ }),
  validateConfig: (config) => true
};

registry.registerFactory(factory);

🔍 Enhanced Validation & Error Handling

Intelligent Error Messages

  • Format Examples: Error messages include correct format examples
  • Documentation References: Errors link to relevant documentation
  • Suggestion Guidance: Invalid values show supported alternatives
  • Provider Context: Errors include provider-specific guidance

Validation Features

  • ID Format Validation: Strict format checking with helpful messages
  • Provider Validation: Validates provider availability and capabilities
  • Configuration Validation: Comprehensive config validation
  • Parameter Validation: Type and range checking with examples
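For instance, strict ID checking that embeds a format example in the error message could look like this sketch; the regexes and message wording are assumptions, not the package's exact implementation.

```typescript
// Hypothetical ID patterns mirroring the asst_/thread_/run_ prefixes
// used throughout this README.
const ID_PATTERNS: Record<string, RegExp> = {
  assistant: /^asst_[A-Za-z0-9]+$/,
  thread: /^thread_[A-Za-z0-9]+$/,
  run: /^run_[A-Za-z0-9]+$/,
};

// Throw a descriptive error that includes a correct format example.
function validateId(kind: keyof typeof ID_PATTERNS, id: string): void {
  if (!ID_PATTERNS[kind].test(id)) {
    const prefix = kind === "assistant" ? "asst" : kind;
    throw new Error(
      `Invalid ${kind} ID "${id}". Expected format: ${prefix}_abc123`,
    );
  }
}

validateId("assistant", "asst_abc123"); // passes silently
// validateId("thread", "abc123") would throw with a format example.
```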

🔒 Security

  • API Key Protection - Secure handling of multiple provider API keys
  • Enhanced Input Validation - Comprehensive validation with helpful feedback
  • Provider Isolation - Each provider operates in isolation
  • CORS Security - Proper CORS headers for web clients
  • Rate Limiting - Inherits provider-specific rate limits

🚀 Performance

  • Global Edge - Deployed on Cloudflare's global network
  • Sub-100ms - Typical response times under 100ms
  • Provider Selection - Smart provider selection for optimal performance
  • Efficient - Minimal memory footprint and fast execution

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

MIT License - see LICENSE for details.

🎯 Migration Guide

From OpenAI Assistants MCP v2.x

The migration is seamless - just update your package name:

# Old
npx openai-assistants-mcp@latest

# New
npx jezweb-mcp-core@latest

All existing tools and functionality remain identical. The new version adds multi-provider support while maintaining 100% backward compatibility.

Configuration Migration

Old environment variables continue to work:

# Still supported
OPENAI_API_KEY=your-key-here

# New multi-provider support
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key

Ready to get started? Choose your preferred installation method from the Quick Start guide above and begin building with multiple LLM providers through a unified interface!