# @aeye/openrouter
OpenRouter provider for the @aeye (AI TypeScript) framework. OpenRouter provides unified access to multiple AI providers through a single API, with automatic fallbacks, routing optimization, and competitive pricing.
## Features
- Multi-Provider Access: Access models from OpenAI, Anthropic, Google, Meta, and more through one API
- Automatic Fallbacks: Seamless failover to alternative providers if primary fails
- Intelligent Routing: Automatic selection of best provider for each request
- Cost Tracking: Built-in cost information in responses
- ZDR Support: Zero Data Retention mode for privacy-sensitive applications
- Provider Preferences: Control which providers to use, prefer, or avoid
- Reasoning Models: Support for advanced reasoning with o1, Claude, and others
- Streaming: Full streaming support across compatible models
- Extends OpenAI: Built on the OpenAI provider for compatibility
## Installation

```bash
npm install @aeye/openrouter @aeye/openai @aeye/ai @aeye/core openai zod
```

## Quick Start
```typescript
import { OpenRouterProvider } from '@aeye/openrouter';
import { AI } from '@aeye/ai';

// Create provider instance
const openrouter = new OpenRouterProvider({
  apiKey: process.env.OPENROUTER_API_KEY!,
  defaultParams: {
    siteUrl: 'https://yourapp.com',
    appName: 'Your App Name',
  },
});

// Use with AI instance
const ai = AI.with()
  .providers({ openrouter })
  .create();

// Make a request (OpenRouter picks the best provider)
const response = await ai.chat.get([
  { role: 'user', content: 'Explain TypeScript generics' }
], {
  metadata: { model: 'openai/gpt-4-turbo' }
});

console.log(response.content);
console.log('Cost:', response.usage.cost); // Actual cost from OpenRouter
```

## Configuration

### Basic Configuration
```typescript
import { OpenRouterProvider, OpenRouterConfig } from '@aeye/openrouter';

const config: OpenRouterConfig = {
  apiKey: process.env.OPENROUTER_API_KEY!,
  defaultParams: {
    siteUrl: 'https://yourapp.com', // Your app URL (helps with rankings)
    appName: 'My AI App',           // Your app name (shown in the OpenRouter dashboard)
  },
};

const provider = new OpenRouterProvider(config);
```

### Advanced Configuration
```typescript
const provider = new OpenRouterProvider({
  apiKey: process.env.OPENROUTER_API_KEY!,
  defaultParams: {
    siteUrl: 'https://yourapp.com',
    appName: 'My AI App',

    // Automatic fallbacks
    allowFallbacks: true, // Enable fallback to alternative providers

    // Provider preferences
    providers: {
      prefer: ['openai', 'anthropic'],          // Prefer these providers
      allow: ['openai', 'anthropic', 'google'], // Only use these providers
      deny: ['provider-x'],                     // Never use these providers
      ignore: ['low-quality-provider'],         // Exclude from consideration
      order: ['openai', 'anthropic'],           // Preferred order
      quantizations: ['int8', 'fp16'],          // Preferred quantization levels
      dataCollection: 'deny',                   // or 'allow' - control data collection
    },

    // Privacy & security
    requireParameters: true, // Require all providers to support the request parameters
    dataCollection: 'deny',  // Global data collection setting

    // Model transformations
    transforms: ['middle-out'], // Apply transformations to requests
  },
});
```

## Usage Examples

### Basic Chat with Cost Tracking
```typescript
const response = await ai.chat.get([
  { role: 'user', content: 'What is the capital of France?' }
], {
  metadata: { model: 'openai/gpt-3.5-turbo' }
});

console.log(response.content);
console.log('Tokens:', response.usage.totalTokens);
console.log('Cost: $', response.usage.cost); // Actual cost from OpenRouter
```

### Using Multiple Models
OpenRouter lets you access models from different providers using the `provider/model` format:
```typescript
// OpenAI GPT-4
const gpt4 = await ai.chat.get(messages, {
  metadata: { model: 'openai/gpt-4-turbo' }
});

// Anthropic Claude
const claude = await ai.chat.get(messages, {
  metadata: { model: 'anthropic/claude-3-opus' }
});

// Google Gemini
const gemini = await ai.chat.get(messages, {
  metadata: { model: 'google/gemini-pro' }
});

// Meta Llama
const llama = await ai.chat.get(messages, {
  metadata: { model: 'meta-llama/llama-3-70b' }
});
```

### Automatic Model Selection
Let OpenRouter choose the best model based on your criteria:
```typescript
const ai = AI.with()
  .providers({ openrouter })
  .create({
    defaultMetadata: {
      required: ['chat', 'streaming'],
      weights: {
        speed: 0.3,
        cost: 0.4,
        quality: 0.3,
      },
    },
  });

// OpenRouter will select the best model matching the criteria
const response = await ai.chat.get([
  { role: 'user', content: 'Explain quantum computing' }
]);
```

### Provider Fallbacks
```typescript
// Primary provider fails? OpenRouter automatically falls back
const provider = new OpenRouterProvider({
  apiKey: process.env.OPENROUTER_API_KEY!,
  defaultParams: {
    allowFallbacks: true,
    providers: {
      order: ['openai', 'anthropic', 'google'], // Try in this order
    },
  },
});

// If OpenAI is down, automatically tries Anthropic, then Google
const response = await ai.chat.get(messages, {
  metadata: { model: 'openai/gpt-4-turbo' }
});
```

### Zero Data Retention (ZDR)
```typescript
// Only use ZDR-compliant models
const ai = AI.with()
  .providers({ openrouter })
  .create({
    defaultMetadata: {
      required: ['chat', 'zdr'], // Require ZDR support
    },
  });

// Also configure at the provider level
const provider = new OpenRouterProvider({
  apiKey: process.env.OPENROUTER_API_KEY!,
  defaultParams: {
    dataCollection: 'deny', // Ensure no data collection
  },
});
```

### Streaming with Cost Tracking
```typescript
let totalCost = 0;

for await (const chunk of ai.chat.stream([
  { role: 'user', content: 'Write a story about a robot' }
], {
  metadata: { model: 'anthropic/claude-3-sonnet' }
})) {
  if (chunk.content) {
    process.stdout.write(chunk.content);
  }

  // Cost is sent in the final chunk
  if (chunk.usage?.cost) {
    totalCost = chunk.usage.cost;
  }
}

console.log(`\n\nTotal cost: $${totalCost}`);
```

### Reasoning Models
```typescript
// Use OpenAI o1 through OpenRouter
const response = await ai.chat.get([
  {
    role: 'user',
    content: 'Solve: If a train leaves NYC at 3pm going 60mph, and another leaves Boston at 4pm going 80mph, when do they meet?'
  }
], {
  metadata: { model: 'openai/o1-preview' }
});

console.log('Reasoning:', response.reasoning);
console.log('Answer:', response.content);
console.log('Cost:', response.usage.cost);
```

### Function Calling
```typescript
import z from 'zod';

const response = await ai.chat.get([
  { role: 'user', content: 'What is the weather in Paris?' }
], {
  metadata: { model: 'openai/gpt-4-turbo' },
  tools: [
    {
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: z.object({
        location: z.string(),
        unit: z.enum(['celsius', 'fahrenheit']).optional(),
      }),
    }
  ],
  toolChoice: 'auto',
});
```

### Fetching Available Models
OpenRouter provides a model source that can fetch all available models:
```typescript
import { OpenRouterModelSource } from '@aeye/openrouter';

const source = new OpenRouterModelSource({
  apiKey: process.env.OPENROUTER_API_KEY,
  includeZDR: true, // Include ZDR information
});

const models = await source.fetchModels();

models.forEach(model => {
  console.log(`${model.id}`);
  console.log(`  Pricing: $${model.pricing.text.input}/1M input tokens`);
  console.log(`  Context: ${model.contextWindow} tokens`);
  console.log(`  ZDR: ${model.capabilities.has('zdr')}`);
});
```

### Automatic Model Registration
```typescript
// Automatically fetch and register OpenRouter models
const ai = AI.with()
  .providers({ openrouter })
  .create({
    fetchOpenRouterModels: true, // Auto-fetch all available models
  });

// Now all OpenRouter models are available for automatic selection
const response = await ai.chat.get(messages);
```

## Configuration Options
### OpenRouterConfig

Extends `OpenAIConfig` from `@aeye/openai`.

| Property | Type | Required | Description |
|---|---|---|---|
| `apiKey` | `string` | Yes | OpenRouter API key |
| `baseURL` | `string` | No | Custom base URL (defaults to `https://openrouter.ai/api/v1`) |
| `defaultParams` | `object` | No | Default parameters for all requests |
### defaultParams

| Property | Type | Description |
|---|---|---|
| `siteUrl` | `string` | Your app URL (helps with rankings on OpenRouter) |
| `appName` | `string` | Your app name (shown in the OpenRouter dashboard) |
| `allowFallbacks` | `boolean` | Enable automatic provider fallbacks |
| `requireParameters` | `boolean` | Require that all providers support the request parameters |
| `dataCollection` | `'deny' \| 'allow'` | Control data collection |
| `order` | `string[]` | Preferred provider order |
| `providers` | `object` | Provider preferences (see below) |
| `transforms` | `string[]` | Request transformations to apply |
### providers Options

| Property | Type | Description |
|---|---|---|
| `allow` | `string[]` | Whitelist of allowed providers |
| `deny` | `string[]` | Blacklist of denied providers |
| `prefer` | `string[]` | Preferred providers (given priority) |
| `ignore` | `string[]` | Providers to ignore |
| `order` | `string[]` | Explicit provider order |
| `quantizations` | `string[]` | Preferred quantization levels |
| `dataCollection` | `'deny' \| 'allow'` | Provider-level data collection |
## Cost Information
OpenRouter includes actual cost information in every response:
```typescript
const response = await ai.chat.get(messages, {
  metadata: { model: 'openai/gpt-4-turbo' }
});

// Cost is automatically included
console.log('Input tokens:', response.usage.text.input);
console.log('Output tokens:', response.usage.text.output);
console.log('Cost: $', response.usage.cost); // Actual USD cost

// Cost calculation is skipped when the provider returns cost
// No need to manually calculate costs!
```

The cost is provided by OpenRouter's API in the `total_cost` field and automatically extracted into the `usage.cost` field.
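To accumulate spend across many calls, a small tracker can sum `usage.cost` as responses arrive. This is a sketch assuming only the response shape shown above (`{ usage: { cost?: number } }`); `CostTracker` is an illustrative name, not part of the package:

```typescript
// Illustrative cost accumulator - not part of @aeye/openrouter.
// Tolerates responses that omit the cost field.
interface ResponseLike {
  usage: { cost?: number };
}

class CostTracker {
  private total = 0;
  private untracked = 0; // Responses that carried no cost field

  record(response: ResponseLike): void {
    if (typeof response.usage.cost === 'number') {
      this.total += response.usage.cost;
    } else {
      this.untracked += 1;
    }
  }

  summary(): { totalUsd: number; untracked: number } {
    return { totalUsd: this.total, untracked: this.untracked };
  }
}

const tracker = new CostTracker();
tracker.record({ usage: { cost: 0.002 } });
tracker.record({ usage: {} }); // e.g. a route that omitted cost
console.log(tracker.summary());
```

Tracking the `untracked` count separately makes it visible when a route did not report cost, rather than silently under-reporting spend.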
## Model Format

OpenRouter uses the format `provider/model-name`:

- `openai/gpt-4-turbo`
- `anthropic/claude-3-opus`
- `google/gemini-pro`
- `meta-llama/llama-3-70b-instruct`
- `mistralai/mistral-large`
- `cohere/command-r-plus`
See the OpenRouter Models page for the full list.
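If your own code needs to split these IDs (for logging or routing), a helper like the following works; `parseModelId` is an illustrative name, not an export of this package:

```typescript
// Illustrative helper - not part of @aeye/openrouter.
// Splits an OpenRouter model ID into its provider and model-name parts.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/');
  if (slash === -1) {
    throw new Error(`Expected "provider/model-name", got "${id}"`);
  }
  return {
    provider: id.slice(0, slash),
    // Keep any further slashes in the model part intact
    model: id.slice(slash + 1),
  };
}

console.log(parseModelId('anthropic/claude-3-opus'));
// { provider: 'anthropic', model: 'claude-3-opus' }
```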
## Privacy & Data Retention

### Zero Data Retention (ZDR)
OpenRouter supports a ZDR mode in which your data is not stored:

```typescript
// Require ZDR-compliant models
const ai = AI.with()
  .providers({ openrouter })
  .create({
    defaultMetadata: {
      required: ['chat', 'zdr'],
    },
  });

// Or configure at the provider level
const provider = new OpenRouterProvider({
  apiKey: process.env.OPENROUTER_API_KEY!,
  defaultParams: {
    dataCollection: 'deny',
    providers: {
      dataCollection: 'deny',
    },
  },
});
```

Models with ZDR support have the `zdr` capability tag.
## Error Handling

OpenRouter uses the same error types as the OpenAI provider:

```typescript
import { ProviderError, RateLimitError } from '@aeye/openrouter';

try {
  const response = await ai.chat.get(messages, {
    metadata: { model: 'openai/gpt-4-turbo' }
  });
} catch (error) {
  if (error instanceof RateLimitError) {
    console.error('Rate limit exceeded');
    console.log(`Retry after ${error.retryAfter} seconds`);
  } else if (error instanceof ProviderError) {
    console.error(`Provider error: ${error.message}`);
  }
}
```

## Best Practices
1. **Set Site URL and App Name**: This helps with OpenRouter rankings and analytics.

   ```typescript
   defaultParams: {
     siteUrl: 'https://yourapp.com',
     appName: 'Your App',
   }
   ```

2. **Use Cost Tracking**: Monitor costs with the built-in `usage.cost` field.

   ```typescript
   console.log('Cost: $', response.usage.cost);
   ```

3. **Enable Fallbacks**: Use automatic fallbacks for reliability.

   ```typescript
   defaultParams: {
     allowFallbacks: true,
     providers: {
       order: ['openai', 'anthropic', 'google'],
     },
   }
   ```

4. **Specify Provider Preferences**: Control which providers are used.

   ```typescript
   providers: {
     prefer: ['openai', 'anthropic'],
     deny: ['low-quality-provider'],
   }
   ```

5. **Use ZDR for Sensitive Data**: Enable Zero Data Retention for privacy.

   ```typescript
   defaultParams: {
     dataCollection: 'deny',
   }
   ```

6. **Monitor Model Availability**: OpenRouter's model list changes frequently.

   ```typescript
   fetchOpenRouterModels: true // Auto-fetch the latest models
   ```
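Taken together, these practices suggest a provider setup along the following lines. This is a sketch built only from the options documented in this README, shown as a plain object literal so it is self-contained; in real code it would be the `defaultParams` passed to `new OpenRouterProvider(...)`:

```typescript
// Sketch only: combines the practices above using option names from this README.
const productionDefaults = {
  siteUrl: 'https://yourapp.com',
  appName: 'Your App',
  allowFallbacks: true,
  dataCollection: 'deny' as const,
  providers: {
    prefer: ['openai', 'anthropic'],
    order: ['openai', 'anthropic', 'google'],
    deny: ['low-quality-provider'],
    dataCollection: 'deny' as const,
  },
};
```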
## API Reference

### OpenRouterProvider

Extends `OpenAIProvider` from `@aeye/openai`.

**Constructor**: `new OpenRouterProvider(config: OpenRouterConfig)`

**Methods**:

- Inherits all methods from `OpenAIProvider`
- `createExecutor<TContext, TMetadata>(config?)` - Returns an executor with cost tracking
- `createStreamer<TContext, TMetadata>(config?)` - Returns a streamer with cost tracking
- `listModels(config?)` - Fetches models from the OpenRouter API with ZDR info

**Overridden Methods**:

- `createClient(config)` - Creates an OpenAI client with the OpenRouter endpoint
- `customizeChatParams(params, config, request)` - Adds OpenRouter-specific params
### OpenRouterModelSource

Model source for fetching OpenRouter models.

**Constructor**: `new OpenRouterModelSource(config?: OpenRouterSourceConfig)`

**Methods**:

- `fetchModels(config?): Promise<ModelInfo[]>` - Fetch all available models

**Config Options**:

- `apiKey?: string` - OpenRouter API key (optional for public models)
- `includeZDR?: boolean` - Include ZDR information (default: `true`)
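The fetched list can then be filtered and ranked locally. The sketch below assumes the `ModelInfo` shape used in the listing example earlier (`id`, `pricing.text.input`, `capabilities` as a `Set`), which may differ from the package's actual type definitions:

```typescript
// Illustrative only: filter models to ZDR-capable ones and rank by
// input-token price. The ModelInfoLike shape is an assumption based on
// the listing example in this README, not the package's real types.
interface ModelInfoLike {
  id: string;
  pricing: { text: { input: number } };
  capabilities: Set<string>;
}

function cheapestZdrModels(models: ModelInfoLike[], limit = 3): string[] {
  return models
    .filter(m => m.capabilities.has('zdr'))          // Keep only ZDR-capable models
    .sort((a, b) => a.pricing.text.input - b.pricing.text.input) // Cheapest first
    .slice(0, limit)
    .map(m => m.id);
}

const sample: ModelInfoLike[] = [
  { id: 'a/cheap',  pricing: { text: { input: 0.5 } }, capabilities: new Set(['chat', 'zdr']) },
  { id: 'b/pricey', pricing: { text: { input: 10 } },  capabilities: new Set(['chat', 'zdr']) },
  { id: 'c/no-zdr', pricing: { text: { input: 0.1 } }, capabilities: new Set(['chat']) },
];

console.log(cheapestZdrModels(sample)); // [ 'a/cheap', 'b/pricey' ]
```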
## Supported Features
- ✅ Chat completions
- ✅ Streaming
- ✅ Function calling
- ✅ Vision (for supported models)
- ✅ Reasoning models
- ✅ Cost tracking
- ✅ Automatic fallbacks
- ✅ Provider routing
- ✅ ZDR mode
- ✅ Multi-modal (for supported models)
- ❌ Image generation (not supported by OpenRouter)
- ❌ Speech synthesis (not supported by OpenRouter)
- ❌ Transcription (not supported by OpenRouter)
- ❌ Embeddings (not supported by OpenRouter)
For image generation, speech, transcription, and embeddings, use provider-specific packages like @aeye/openai, @aeye/anthropic, etc.
## Related Packages

- `@aeye/core`: Core @aeye framework types and interfaces
- `@aeye/ai`: AI abstractions and utilities
- `@aeye/openai`: OpenAI provider (base class)
- `@aeye/anthropic`: Anthropic Claude provider
- `@aeye/google`: Google AI provider
## License
GPL-3.0
## Contributing
Contributions are welcome! Please see the main @aeye repository for contribution guidelines.
## Support
For issues and questions:
- GitHub Issues: https://github.com/ClickerMonkey/aeye/issues
- Documentation: https://github.com/ClickerMonkey/aeye
- OpenRouter Discord: https://discord.gg/openrouter