# @oai2lmapi/opencode-provider

OpenAI-compatible provider for OpenCode, built with the Vercel AI SDK.
## Features

- Auto-Discovery: Automatically discovers models from your API's `/models` endpoint
- Smart Configuration: Automatically detects model capabilities (tool calling, vision, context limits)
- Flexible Overrides: Per-model configuration via OpenCode settings
- Based on AI SDK: Built on top of Vercel AI SDK's `@ai-sdk/openai-compatible`

> Note: Advanced features like chain-of-thought handling (`<think>` tags) and prompt-based tool calling are planned for future releases.
## Installation

```bash
npm install @oai2lmapi/opencode-provider
# or
pnpm add @oai2lmapi/opencode-provider
# or
yarn add @oai2lmapi/opencode-provider
```

## Usage
### Basic Setup

Create a provider configuration file for OpenCode (e.g., `opencode.config.ts`):

```ts
import { createOAI2LMProvider } from '@oai2lmapi/opencode-provider';

export default {
  providers: {
    myapi: createOAI2LMProvider({
      apiKey: process.env.MY_API_KEY,
      baseURL: 'https://api.example.com/v1',
    }),
  },
};
```

### With Model Auto-Discovery
The provider will automatically fetch available models on initialization:

```ts
import { generateText } from 'ai';
import { createOAI2LMProvider } from '@oai2lmapi/opencode-provider';

const provider = await createOAI2LMProvider({
  apiKey: process.env.MY_API_KEY,
  baseURL: 'https://api.example.com/v1',
  autoDiscoverModels: true, // default
});

// Use with OpenCode
const result = await generateText({
  model: provider('gpt-4'),
  prompt: 'Hello, world!',
});
```

### Model Overrides
Configure per-model settings:

```ts
const provider = createOAI2LMProvider({
  apiKey: process.env.MY_API_KEY,
  baseURL: 'https://api.example.com/v1',
  modelOverrides: {
    'deepseek-*': {
      // Use prompt-based tool calling for DeepSeek models
      usePromptBasedToolCalling: true,
      // Strip chain-of-thought tags
      suppressChainOfThought: true,
    },
    'o1-*': {
      // Enable reasoning capture for o1 models
      captureReasoning: true,
    },
    'gpt-4-vision': {
      // Override capabilities
      supportsImageInput: true,
      maxInputTokens: 128000,
    },
  },
});
```

### Chain-of-Thought Handling
For reasoning models that output `<think>` tags:

```ts
const provider = createOAI2LMProvider({
  apiKey: process.env.MY_API_KEY,
  baseURL: 'https://api.example.com/v1',
  modelOverrides: {
    'reasoning-model-*': {
      // Capture and expose chain-of-thought
      captureReasoning: true,
      // Or set to true to strip it from the text output
      suppressChainOfThought: false,
    },
  },
});

const result = await generateText({
  model: provider('reasoning-model-v1'),
  prompt: 'Solve this puzzle...',
});

// Access reasoning if captured
console.log(result.reasoning); // Chain-of-thought content
console.log(result.text); // Final answer without <think> tags
```

### Prompt-Based Tool Calling
For models without native function calling:

```ts
import { z } from 'zod';

const provider = createOAI2LMProvider({
  apiKey: process.env.MY_API_KEY,
  baseURL: 'https://api.example.com/v1',
  modelOverrides: {
    'legacy-model': {
      usePromptBasedToolCalling: true,
    },
  },
});

// Tools are automatically converted to XML format in the system prompt
const result = await generateText({
  model: provider('legacy-model'),
  prompt: 'What is the weather in Tokyo?',
  tools: {
    getWeather: {
      description: 'Get current weather',
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // ... fetch weather
      },
    },
  },
});
```

## Configuration Options

### Provider Settings
```ts
interface OAI2LMProviderSettings {
  /** API key for authentication */
  apiKey: string;
  /** Base URL for API calls (e.g., 'https://api.example.com/v1') */
  baseURL: string;
  /** Provider name (defaults to 'oai2lm') */
  name?: string;
  /** Custom headers */
  headers?: Record<string, string>;
  /** Auto-discover models on initialization (default: true) */
  autoDiscoverModels?: boolean;
  /** Per-model configuration overrides */
  modelOverrides?: Record<string, ModelOverride>;
  /** Custom fetch implementation */
  fetch?: typeof fetch;
}
```

### Model Override Options
```ts
interface ModelOverride {
  /** Max input tokens */
  maxInputTokens?: number;
  /** Max output tokens */
  maxOutputTokens?: number;
  /** Supports native tool/function calling */
  supportsToolCalling?: boolean;
  /** Supports image inputs */
  supportsImageInput?: boolean;
  /** Default temperature */
  temperature?: number;
  /** Use XML-based prompt engineering for tools */
  usePromptBasedToolCalling?: boolean;
  /** Strip <think>...</think> blocks from output */
  suppressChainOfThought?: boolean;
  /** Capture reasoning content separately */
  captureReasoning?: boolean;
  /** Thinking level: token budget or 'low'/'medium'/'high' */
  thinkingLevel?: number | 'low' | 'medium' | 'high' | 'auto';
}
```

## How It Works
- Model Discovery: On initialization, the provider fetches the `/models` endpoint
- Capability Detection: Analyzes model metadata to determine capabilities
- Shared Metadata Registry: Uses `@oai2lmapi/model-metadata` for fallback model info
- Metadata Caching: Model info is cached to reduce API calls
- Override Application: User-defined overrides are applied on top of discovered capabilities
- Request Translation: Converts AI SDK requests to OpenAI-compatible format
- Response Parsing: Handles special formats like `<think>` tags and XML tool calls
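The Override Application step can be sketched roughly as follows. This is an illustrative example of merging wildcard patterns like `'deepseek-*'` onto discovered capabilities, not the package's internal code; `patternToRegExp` and `applyOverrides` are hypothetical names:

```typescript
// Illustrative sketch only: patternToRegExp and applyOverrides are
// hypothetical names, not part of the package's API.
type Capabilities = {
  supportsToolCalling?: boolean;
  supportsImageInput?: boolean;
  maxInputTokens?: number;
};

// Convert a glob pattern like 'deepseek-*' into a RegExp over whole model IDs.
function patternToRegExp(pattern: string): RegExp {
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

// Merge matching overrides on top of discovered capabilities; later
// matching patterns win for the keys they set.
function applyOverrides(
  modelId: string,
  discovered: Capabilities,
  overrides: Record<string, Capabilities>,
): Capabilities {
  let result = { ...discovered };
  for (const [pattern, override] of Object.entries(overrides)) {
    if (patternToRegExp(pattern).test(modelId)) {
      result = { ...result, ...override };
    }
  }
  return result;
}

const caps = applyOverrides(
  'deepseek-chat',
  { supportsToolCalling: false, maxInputTokens: 32000 },
  { 'deepseek-*': { supportsToolCalling: true } },
);
// caps: { supportsToolCalling: true, maxInputTokens: 32000 }
```

Overrides only touch the keys they specify, so discovered metadata such as context limits survives a partial override.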
## Configuration with OpenCode

This provider integrates with OpenCode's data directory for configuration. Following the XDG Base Directory specification, it looks for a config file at:

1. `~/.local/share/opencode/oai2lm.json` (primary location, checked first; this is `$XDG_DATA_HOME/opencode/oai2lm.json` with the common default of `~/.local/share`)
2. `~/.config/opencode/oai2lm.json` (fallback, checked second; this is `$XDG_CONFIG_HOME/opencode/oai2lm.json` with the common default of `~/.config`)

On systems with custom XDG paths, set `XDG_DATA_HOME` and/or `XDG_CONFIG_HOME` and the provider resolves the config file against those variables instead. It still searches the locations in the order shown and uses the first config file it finds, so the data directory location takes precedence if both files exist.
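The lookup order above can be sketched as follows. This is a minimal illustration of the described behavior, not the package's actual implementation; `configCandidates` and `findConfigFile` are hypothetical names:

```typescript
// Illustrative sketch of the config lookup order; configCandidates and
// findConfigFile are hypothetical names, not the package's implementation.
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

function configCandidates(
  env: Record<string, string | undefined> = process.env,
  home: string = os.homedir(),
): string[] {
  const dataDir = env.XDG_DATA_HOME || path.join(home, '.local', 'share');
  const configDir = env.XDG_CONFIG_HOME || path.join(home, '.config');
  // Data directory first, config directory as a fallback.
  return [
    path.join(dataDir, 'opencode', 'oai2lm.json'),
    path.join(configDir, 'opencode', 'oai2lm.json'),
  ];
}

// The first existing candidate wins, so the data directory takes precedence.
function findConfigFile(): string | undefined {
  return configCandidates().find((p) => fs.existsSync(p));
}
```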
### Config File Format

```json
{
  "apiKey": "your-api-key",
  "baseURL": "https://api.example.com/v1",
  "name": "my-provider",
  "autoDiscoverModels": true,
  "modelOverrides": {
    "deepseek-*": {
      "usePromptBasedToolCalling": true,
      "suppressChainOfThought": true
    },
    "gpt-4-vision": {
      "supportsImageInput": true,
      "maxInputTokens": 128000
    }
  }
}
```

### Using Config File
```ts
import { createOAI2LMProviderFromConfig, OAI2LMProvider } from '@oai2lmapi/opencode-provider';

// Create provider from config file
const provider = createOAI2LMProviderFromConfig();

// Or use the static method
const provider2 = OAI2LMProvider.fromConfig();

// Override specific settings
const provider3 = createOAI2LMProviderFromConfig({
  baseURL: 'https://api.custom.com/v1', // Override base URL
});
```

### Environment Variables
You can also configure via environment variables:

- `OAI2LM_API_KEY`: API key for authentication
- `OAI2LM_BASE_URL`: Base URL for API calls

Priority order (highest to lowest):

1. Explicit settings passed to the function
2. Environment variables
3. Config file values
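That precedence can be sketched as below. This is illustrative only; `resolveSettings` is a hypothetical helper, not part of the package API:

```typescript
// Illustrative sketch only; resolveSettings is a hypothetical helper,
// not part of the package API.
interface Settings {
  apiKey?: string;
  baseURL?: string;
}

function resolveSettings(
  explicit: Settings,
  env: Record<string, string | undefined>,
  fileConfig: Settings,
): Settings {
  // Highest priority first: explicit settings, then environment, then config file.
  return {
    apiKey: explicit.apiKey ?? env.OAI2LM_API_KEY ?? fileConfig.apiKey,
    baseURL: explicit.baseURL ?? env.OAI2LM_BASE_URL ?? fileConfig.baseURL,
  };
}
```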
## Integration with OpenCode

This provider is designed to work seamlessly with OpenCode's configuration system:

```js
// ~/.opencode/config.js
export default {
  providers: {
    myapi: {
      type: '@oai2lmapi/opencode-provider',
      apiKey: process.env.MY_API_KEY,
      baseURL: 'https://api.example.com/v1',
      modelOverrides: {
        // Configure models as needed
      },
    },
  },
  models: {
    default: 'myapi:gpt-4',
  },
};
```

## Examples
### Using with Multiple Providers

```ts
import { generateText } from 'ai';
import { createOAI2LMProvider } from '@oai2lmapi/opencode-provider';

const openai = createOAI2LMProvider({
  name: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.openai.com/v1',
});

const deepseek = createOAI2LMProvider({
  name: 'deepseek',
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: 'https://api.deepseek.com/v1',
  modelOverrides: {
    '*': {
      usePromptBasedToolCalling: true,
    },
  },
});

// Use either provider
await generateText({ model: openai('gpt-4'), prompt: '...' });
await generateText({ model: deepseek('deepseek-chat'), prompt: '...' });
```

## License
MIT
## Contributing

Contributions are welcome! Please see the main repository for guidelines.