## Package Exports
- @smithery/sdk
- @smithery/sdk/config.d.ts
- @smithery/sdk/config.js
- @smithery/sdk/index.d.ts
- @smithery/sdk/index.js
- @smithery/sdk/integrations/llm/anthropic.d.ts
- @smithery/sdk/integrations/llm/anthropic.js
- @smithery/sdk/integrations/llm/openai.d.ts
- @smithery/sdk/integrations/llm/openai.js
- @smithery/sdk/registry-types.d.ts
- @smithery/sdk/registry-types.js
- @smithery/sdk/registry.d.ts
- @smithery/sdk/registry.js
- @smithery/sdk/types.d.ts
- @smithery/sdk/types.js
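These subpaths can be imported directly. A quick illustration using the imports that appear in the usage example below:

```typescript
// Root export
import { MultiClient } from "@smithery/sdk"
// Subpath exports (each ships its own .d.ts type declarations)
import { OpenAIChatAdapter } from "@smithery/sdk/integrations/llm/openai"
import { createTransport } from "@smithery/sdk/registry"
```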
# Smithery TypeScript Framework
Smithery is a TypeScript framework that easily connects language models (LLMs) to Model Context Protocol (MCP) servers, allowing you to build agents that use resources and tools without being overwhelmed by JSON schemas.
⚠️ This repository is work in progress and in alpha. Not recommended for production use yet. ⚠️
## Key Features
- Connect to multiple MCPs with a single client
- Adapters to transform MCP responses for OpenAI and Anthropic clients
- Supports chaining tool calls until the LLM completes
## Quickstart

### Installation

```bash
npm install @smithery/sdk
```

### Usage

In this example, we'll connect the OpenAI client to Exa search capabilities.

```bash
npm install @smithery/mcp-exa
```

The following code sets up OpenAI and connects to an Exa MCP server. In this case, we're running the server locally within the same process, so it's just a simple passthrough.
```typescript
import { MultiClient } from "@smithery/sdk"
import { OpenAIChatAdapter } from "@smithery/sdk/integrations/llm/openai"
import * as exa from "@smithery/mcp-exa"
import { OpenAI } from "openai"
import { createTransport } from "@smithery/sdk/registry"

const openai = new OpenAI()

// Run the Exa MCP server locally, in the same process
const exaServer = exa.createServer({
  apiKey: process.env.EXA_API_KEY,
})

// Resolve a transport for a registry-hosted server
const sequentialThinking = await createTransport(
  "@modelcontextprotocol/server-sequential-thinking",
)

// Connect to multiple MCP servers with a single client
const client = new MultiClient()
await client.connectAll({
  exa: exaServer,
  sequentialThinking: sequentialThinking,
})
```

Now you can make your LLM aware of the available tools from Exa.
```typescript
// Create an adapter
const adapter = new OpenAIChatAdapter(client)

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "In 2024, did OpenAI release GPT-5?" }],
  // Pass the tools to the OpenAI call
  tools: await adapter.listTools(),
})

// Obtain the tool outputs as new messages
const toolMessages = await adapter.callTool(response)
```

Using this, you can easily enable your LLM to call tools and obtain the results.
However, it's often the case that your LLM needs to call a tool, see its response, and continue processing the tool's output in order to give you a final answer.
In that case, loop the LLM call, appending the new messages each turn, until no more tool messages are returned.

Example:
```typescript
let messages = [
  {
    role: "user",
    content:
      "Deduce Obama's age in number of days. It's November 28, 2024 today. Search to ensure correctness.",
  },
]
const adapter = new OpenAIChatAdapter(client)
let isDone = false

while (!isDone) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages,
    tools: await adapter.listTools(),
  })
  // Handle tool calls
  const toolMessages = await adapter.callTool(response)
  // Append the assistant message and the tool results
  messages.push(response.choices[0].message)
  messages.push(...toolMessages)
  // Stop when the model makes no further tool calls
  isDone = toolMessages.length === 0
}
```

See a full example in the examples directory.
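The same pattern should work with Anthropic via the adapter under `@smithery/sdk/integrations/llm/anthropic`. A minimal sketch, assuming the export is named `AnthropicChatAdapter` and mirrors the OpenAI adapter's `listTools`/`callTool` interface (both names are assumptions by analogy, not confirmed by this README):

```typescript
import Anthropic from "@anthropic-ai/sdk"
import { AnthropicChatAdapter } from "@smithery/sdk/integrations/llm/anthropic"

const anthropic = new Anthropic()
// Assumed to mirror the OpenAI adapter: listTools() formats the MCP tools for
// Anthropic's tool-use API, and callTool() turns tool calls into new messages.
const anthropicAdapter = new AnthropicChatAdapter(client)

const anthropicResponse = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "In 2024, did OpenAI release GPT-5?" }],
  tools: await anthropicAdapter.listTools(),
})
const anthropicToolMessages = await anthropicAdapter.callTool(anthropicResponse)
```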
## Troubleshooting

**Error: `ReferenceError: EventSource is not defined`**

This error means you're trying to use the EventSource API (typically available in the browser) from Node. You'll have to install the following to use it:

```bash
npm install eventsource
npm install -D @types/eventsource
```

Then patch the global EventSource object:

```typescript
import EventSource from "eventsource"
global.EventSource = EventSource as any
```
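Make sure this patch runs before any code that opens an SSE connection, e.g. before calling `createTransport`, since the polyfill must be in place when the global is first looked up.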