# openai-mcp
A TypeScript library that provides an OpenAI-compatible client for the Model Context Protocol (MCP).
## Installation

```bash
npm install openai-mcp
```
## Features
- OpenAI API compatibility - works as a drop-in replacement for the OpenAI client
- Connects to local or remote Model Context Protocol servers
- Supports tool use and function calling
- Rate limiting and retry logic built in
- Configurable logging
- TypeScript type definitions included
## Usage

```ts
import { OpenAI } from 'openai-mcp';

// Create an OpenAI-compatible client connected to an MCP server
const openai = new OpenAI({
  mcp: {
    // MCP server URL(s) to connect to
    serverUrls: ['http://localhost:3000/mcp'],
    // Optional: set log level (debug, info, warn, error)
    logLevel: 'info',
    // Additional configuration options
    // modelName: 'gpt-4',        // Default model to use
    // disconnectAfterUse: true,  // Auto-disconnect after use
    // maxToolCalls: 15,          // Max number of tool calls per conversation
    // toolTimeoutSec: 60,        // Timeout for tool calls
  }
});

// Use the client like a standard OpenAI client
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello, how are you today?' }
  ]
});

console.log(response.choices[0].message.content);
```
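Since the client is a drop-in replacement for the OpenAI SDK, streaming should follow the same shape. A minimal sketch, assuming the library supports the standard `stream: true` option (the Streaming example under Examples covers this in full):

```ts
import { OpenAI } from 'openai-mcp';

const openai = new OpenAI({
  mcp: { serverUrls: ['http://localhost:3000/mcp'] }
});

// Stream the response token by token, as with the standard OpenAI client
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Write a haiku about protocols.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```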
## Logging Configuration

```ts
import { setMcpLogLevel } from 'openai-mcp';

// Set log level to one of: 'debug', 'info', 'warn', 'error'
setMcpLogLevel('info');
```
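When diagnosing MCP server connectivity, a likely workflow is to raise the level before constructing a client. A minimal sketch using only the exports documented above:

```ts
import { OpenAI, setMcpLogLevel } from 'openai-mcp';

// Verbose output while troubleshooting server connections
setMcpLogLevel('debug');

const openai = new OpenAI({
  mcp: { serverUrls: ['http://localhost:3000/mcp'] }
});
```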
## Environment Variables

The library also supports configuration through environment variables:

```bash
# MCP server URL(s) - comma-separated for multiple servers
MCP_SERVER_URL=http://localhost:3000/mcp

# API keys for different model providers
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GEMINI_API_KEY=your-gemini-api-key
```
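With these variables set, the client can apparently be constructed without explicit options (the Multi-Model example below does exactly that). A minimal sketch, assuming `MCP_SERVER_URL` and the API keys are read from the environment at construction time:

```ts
import { OpenAI } from 'openai-mcp';

// No explicit options: server URL(s) and provider API keys
// are picked up from the environment variables above
const openai = new OpenAI();
```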
## Multi-Model Support

The library supports routing requests to different model providers based on the model name:

```ts
import { OpenAI } from 'openai-mcp';

const openai = new OpenAI();

// Uses the OpenAI API
const gpt4Response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello GPT-4' }]
});

// Uses the Anthropic API
const claudeResponse = await openai.chat.completions.create({
  model: 'claude-3',
  messages: [{ role: 'user', content: 'Hello Claude' }]
});

// Uses the Google Gemini API
const geminiResponse = await openai.chat.completions.create({
  model: 'gemini-pro',
  messages: [{ role: 'user', content: 'Hello Gemini' }]
});
```
## Examples

The `examples/` directory contains various usage examples:
- Basic Usage: Simple chat completion request
- Streaming: Stream responses token by token
- Multi-Model: Use OpenAI, Anthropic, and Gemini models
- Tools Usage: Function/tool calling with MCP (see the sketch after this list)
- Custom Logging: Configure and use the logging system
See the Examples README for more details on running these examples.
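For tool use, the configuration options shown earlier (`maxToolCalls`, `toolTimeoutSec`) suggest the client runs MCP-provided tools automatically during a conversation. A hedged sketch of what that flow might look like; the exact behavior is defined by the Tools Usage example, not by this snippet:

```ts
import { OpenAI } from 'openai-mcp';

const openai = new OpenAI({
  mcp: {
    serverUrls: ['http://localhost:3000/mcp'],
    maxToolCalls: 15,    // cap tool invocations per conversation
    toolTimeoutSec: 60,  // fail tool calls that run longer than this
  }
});

// Tools exposed by the MCP server are assumed to be advertised to the model;
// the client presumably executes the resulting tool calls and feeds the
// results back before returning the final message.
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What is the weather in Berlin?' }]
});

console.log(response.choices[0].message.content);
```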
## Development

To build the library:

```bash
npm run build
```

To run tests:

```bash
npm test
```