🤖 AI Providers Database
Comprehensive, up-to-date database of AI model providers, models, pricing, and capabilities.
📋 Features
- Comprehensive Provider Data: Information about 9 major AI providers (OpenAI, Anthropic, Google, xAI, Mistral, Meta, DeepSeek, Z.AI, Alibaba Cloud)
- 60+ Models: Including the latest GPT-5.1, Claude 4.5, Gemini 2.5, Qwen3 Max, and more
- Detailed Model Information: Context windows, pricing, capabilities, tags, and lifecycle management
- TypeScript Support: Fully typed for excellent IDE support
- MCP Support: Models with Model Context Protocol (MCP) server support marked
- Embedding Models: OpenAI text-embedding models included
- JSON Access: Direct access to JSON data via GitHub for non-NPM usage
- Regular Updates: Kept up-to-date with the latest models and pricing
- Easy to Extend: Simple structure for adding new providers and models
✨ Latest Updates
GPT-5.1 Models (Nov 12, 2025)
- GPT-5.1 Instant - Most-used model with adaptive reasoning
- GPT-5.1 Thinking - Advanced reasoning model for complex tasks
OpenAI Embeddings
- text-embedding-3-large - 3072 dimensions, best quality
- text-embedding-3-small - 1536 dimensions, 5x cheaper than ada-002
- text-embedding-ada-002 - Legacy embedding model
Prompt Caching Support
Both OpenAI and Anthropic models support prompt caching for reduced costs on repeated context:
- OpenAI: GPT-5.1, GPT-5, GPT-4o, GPT-4o mini, O1 models (50% discount on cached tokens)
- Anthropic: Claude 4.5 models (90% discount on cached tokens)
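The discounts above map directly onto the cost formula. A minimal sketch of the arithmetic (the prices and token counts are illustrative, not taken from the database; the library's own `calculateCostWithCache` does this for real models):

```javascript
// Estimate request cost with prompt caching.
// Prices are USD per 1M tokens; `discount` is the cached-token discount
// quoted above (0.5 for OpenAI, 0.9 for Anthropic).
function costWithCache(inputPrice, outputPrice, discount, newTokens, cachedTokens, outputTokens) {
  const cachedPrice = inputPrice * (1 - discount);
  return (
    (newTokens / 1e6) * inputPrice +
    (cachedTokens / 1e6) * cachedPrice +
    (outputTokens / 1e6) * outputPrice
  );
}

// Hypothetical pricing: $3/1M input, $15/1M output, 90% cache discount.
const cost = costWithCache(3, 15, 0.9, 50_000, 200_000, 10_000);
console.log(cost.toFixed(2)); // 0.15 new + 0.06 cached + 0.15 output = "0.36"
```

With 200K cached tokens at a 90% discount, the cached portion costs $0.06 instead of $0.60, which is why caching dominates savings on long repeated contexts.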
MCP (Model Context Protocol) Support
Claude 4.5 models (Haiku, Sonnet, Opus) now support MCP servers for connecting to external tools and data sources.
New Providers
- xAI - Grok models with real-time X (Twitter) data access
- Mistral AI - Efficient mixture-of-experts models
- Meta - Llama 3.x open-source models
- DeepSeek - Ultra cost-effective models with reasoning
- Z.AI - Chinese AI provider with competitive pricing
- Alibaba Cloud - Qwen models with trillion-parameter flagship, vision, and audio capabilities
📦 Installation
npm install @gorets/ai-providers

or

yarn add @gorets/ai-providers

📡 Direct JSON Access (Without NPM)
You can access the JSON data files directly from GitHub without installing the package:
Available JSON Files
All data is available at https://raw.githubusercontent.com/gorets/ai-providers/main/data/:
- database.json - Complete database (providers + models + metadata)
- providers.json - All providers only
- models.json - All models only
- metadata.json - Version and update info
Models by Provider
- models-openai.json - OpenAI models (GPT-5.1, GPT-5, GPT-4o, O1, embeddings)
- models-anthropic.json - Anthropic models (Claude 4.5, Claude 3.x)
- models-google.json - Google models (Gemini 2.5, Gemini 1.5)
- models-xai.json - xAI models (Grok)
- models-mistral.json - Mistral models (Mistral, Mixtral, Codestral)
- models-meta.json - Meta models (Llama 3.x)
- models-deepseek.json - DeepSeek models
- models-zai.json - Z.AI models
- models-alibaba.json - Alibaba Cloud models (Qwen series)
Example: Fetch JSON in Browser/Node.js
// Fetch all models
const response = await fetch('https://raw.githubusercontent.com/gorets/ai-providers/main/data/models.json');
const models = await response.json();
// Fetch OpenAI models only
const openaiResponse = await fetch('https://raw.githubusercontent.com/gorets/ai-providers/main/data/models-openai.json');
const openaiModels = await openaiResponse.json();
// Find GPT-5.1
const gpt51 = openaiModels.find(m => m.id === 'gpt-5.1-instant' || m.shortName === 'gpt-5.1');
console.log(`${gpt51.name}: $${gpt51.pricing.input}/1M input tokens`);

Example: Python
import requests
# Fetch all models
response = requests.get('https://raw.githubusercontent.com/gorets/ai-providers/main/data/models.json')
models = response.json()
# Find models with vision capability
vision_models = [m for m in models if 'vision' in m['capabilities']]
print(f"Found {len(vision_models)} models with vision support")

Example: curl
# Download all models
curl -O https://raw.githubusercontent.com/gorets/ai-providers/main/data/models.json
# Download specific provider models
curl -O https://raw.githubusercontent.com/gorets/ai-providers/main/data/models-openai.json

🚀 Usage
TypeScript/JavaScript
import { getDatabase, getModelById, getModelsByIds, getModelsByProvider } from '@gorets/ai-providers';
// Get complete database
const db = getDatabase();
console.log(`Total providers: ${db.providers.length}`);
console.log(`Total models: ${db.models.length}`);
// Get specific model
const gpt5 = getModelById('gpt-5.1-instant');
console.log(`${gpt5.name}: $${gpt5.pricing.input}/1M input tokens`);
// Get multiple models by IDs
const models = getModelsByIds(['gpt-5.1-instant', 'claude-sonnet-4-5', 'gemini-2.5-flash']);
models.forEach(model => {
if (model) {
console.log(`${model.name}: $${model.pricing?.input}/1M input tokens`);
}
});
// Get all models from a provider
const anthropicModels = getModelsByProvider('anthropic');
console.log(`Anthropic has ${anthropicModels.length} models`);

Filtering Models
import { getModelsByTag, getModelsByCapability } from '@gorets/ai-providers';
// Find all cost-effective models
const cheapModels = getModelsByTag('cost-effective');
// Find all models with vision capabilities
const visionModels = getModelsByCapability('vision');
// Find models with reasoning
const reasoningModels = getModelsByTag('reasoning');

Utility Functions
The library includes powerful helper functions for common tasks:
Cost Calculation
import { calculateCost, calculateCostWithCache, compareCosts } from '@gorets/ai-providers';
// Calculate cost for specific usage
const cost = calculateCost('gpt-4o-mini-2024-07-18', 50000, 10000);
console.log(`Total cost: $${cost.totalCost.toFixed(4)}`);
// Calculate with prompt caching (Anthropic & OpenAI models)
const costWithCache = calculateCostWithCache(
'claude-sonnet-4-5',
50000, // new input tokens
200000, // cached input tokens
10000 // output tokens
);
// Works with OpenAI models too
const openaiCached = calculateCostWithCache(
'gpt-4o-2024-08-06',
30000, // new input tokens
100000, // cached input tokens
5000 // output tokens
);
// Compare costs across multiple models
const comparison = compareCosts(
['gpt-4o-mini-2024-07-18', 'claude-haiku-4-5', 'gemini-2.5-flash'],
100000,
20000
);

Advanced Search
import { searchModels, getCheapestModel, getRecommendedModels } from '@gorets/ai-providers';
// Search with multiple criteria
const results = searchModels({
provider: 'openai',
capabilities: ['vision', 'function-calling'],
maxPrice: 3.0,
minContextWindow: 100000,
});
// Find cheapest model with specific requirements
const cheapest = getCheapestModel({
capabilities: ['vision', 'chat'],
activeOnly: true,
});
// Get recommended models for a use case
const recommended = getRecommendedModels({
budget: 'low',
priority: 'speed',
capabilities: ['chat', 'function-calling'],
});

Deprecation Management
import {
isModelDeprecated,
getReplacementModel,
getModelsShuttingDownSoon,
getActiveModels
} from '@gorets/ai-providers';
// Check if model is deprecated
const status = isModelDeprecated('gpt-3.5-turbo-0125');
if (status.isDeprecated) {
console.log(`Deprecated on: ${status.deprecationDate}`);
console.log(`Shuts down: ${status.shutdownDate}`);
// Get replacement model
const replacement = getReplacementModel('gpt-3.5-turbo-0125');
console.log(`Use instead: ${replacement.name}`);
}
// Find models shutting down in next 90 days
const shuttingDown = getModelsShuttingDownSoon(90);
// Get only active models
const activeModels = getActiveModels();

Context and Features
import { getModelsWithLargestContext, getModelById, getModelsByIds } from '@gorets/ai-providers';
// Find models with largest context windows
const largestContext = getModelsWithLargestContext(5);
// Search by alias
const model = getModelById('claude-sonnet-4-5'); // Works with aliases!
// Get multiple models at once
const selectedModels = getModelsByIds([
'gpt-5.1-instant',
'claude-sonnet-4-5',
'gemini-2.5-flash',
'grok-3'
]);
// Returns array of ModelInfo | undefined in the same order

Direct JSON Access (No Installation Required)
You can access the latest data directly from GitHub:
// Complete database
const response = await fetch(
'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/database.json'
);
const database = await response.json();
// Just models
const modelsResponse = await fetch(
'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/models.json'
);
const models = await modelsResponse.json();
// Models by provider (note the .json() call to parse the response)
const openaiResponse = await fetch(
'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/models-openai.json'
);
const openaiModels = await openaiResponse.json();

📊 Data Structure
Provider Information
interface ProviderInfo {
id: string;
name: string;
website: string;
apiDocsUrl: string;
icon: string;
color: string;
description: string;
features: string[];
}

Model Information
interface ModelInfo {
id: string; // Model identifier for API calls
aliases?: string[]; // Alternative IDs (e.g., 'claude-sonnet-4-5' for 'claude-sonnet-4-5-20250929')
name: string; // Human-readable name
provider: string; // Provider ID
releaseDate?: string; // ISO 8601 date
status: 'stable' | 'beta' | 'experimental' | 'deprecated' | 'disabled' | 'preview';
capabilities: string[]; // e.g., ['chat', 'vision', 'function-calling']
tags: string[]; // e.g., ['fast', 'cost-effective']
limits: {
contextWindow: number; // Max context in tokens
maxOutputTokens: number; // Max output per request
};
pricing?: {
input: number; // Per 1M input tokens (USD)
output: number; // Per 1M output tokens (USD)
cachedInput?: number; // Cached input cost (if supported)
};
description?: string;
docsUrl?: string;
deprecationDate?: string; // When model was deprecated (ISO 8601)
shutdownDate?: string; // When model stops working (ISO 8601)
replacementModel?: string; // ID of replacement model
}

🏷️ Model Tags
- flagship - Top-tier model from the provider
- fast - Optimized for speed
- cost-effective - Budget-friendly option
- balanced - Good balance of quality and cost
- experimental - Experimental/preview version
- long-context - Extended context window (>100K tokens)
- multimodal - Supports multiple input types (text, images, audio)
- reasoning - Enhanced reasoning capabilities
- coding - Optimized for code generation
- deprecated - No longer recommended for new projects
📊 Model Status
Models go through different lifecycle stages:
- stable - Production-ready, fully supported
- beta - Feature-complete but may have minor issues
- experimental - Early access, may change significantly
- preview - Pre-release version for testing
- deprecated - Still works but superseded by newer models
- disabled - No longer available, API calls will fail
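A quick illustration of filtering on these statuses (the sample records are hypothetical; in practice the data comes from models.json or the library's getActiveModels()):

```javascript
// Keep only models that are still callable and not slated for removal.
const models = [
  { id: 'model-a', status: 'stable' },
  { id: 'model-b', status: 'deprecated', shutdownDate: '2025-06-01' },
  { id: 'model-c', status: 'disabled' },
];

const usable = models.filter(
  (m) => m.status !== 'disabled' && m.status !== 'deprecated'
);
console.log(usable.map((m) => m.id)); // [ 'model-a' ]
```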
Deprecated Models: When a model is deprecated, check these fields:
- deprecationDate - When it was marked as deprecated
- shutdownDate - When it will stop working (if known)
- replacementModel - Recommended model to migrate to
Model Aliases: Some models have multiple identifiers. The main ID is the simple name, with dated versions in aliases:
{
id: 'claude-sonnet-4-5', // Simple name as main ID
aliases: ['claude-sonnet-4-5-20250929', 'claude-sonnet-4.5'], // Dated version in aliases
// ... other fields
}

You can use either the main ID or any alias when searching for models.
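An alias-aware lookup can be sketched in a few lines (the sample entry mirrors the shape above; the library's getModelById already handles aliases for you):

```javascript
// Resolve a model by its main ID or any of its aliases.
const models = [
  {
    id: 'claude-sonnet-4-5',
    aliases: ['claude-sonnet-4-5-20250929', 'claude-sonnet-4.5'],
  },
];

function findModel(idOrAlias) {
  return models.find(
    (m) => m.id === idOrAlias || (m.aliases ?? []).includes(idOrAlias)
  );
}

console.log(findModel('claude-sonnet-4.5')?.id); // claude-sonnet-4-5
```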
🎯 Model Capabilities
- text-generation - Generate text
- chat - Conversational AI
- code-generation - Code writing and completion
- vision - Image understanding
- image-generation - Image creation
- function-calling - Tool/function calling
- streaming - Streaming responses
- json-mode - Structured JSON output
- reasoning - Advanced reasoning
- embeddings - Text embeddings
- audio-input - Audio understanding
- audio-output - Speech synthesis
- mcp-servers - Model Context Protocol (MCP) server support
🔄 Available Endpoints
All JSON files are available at:
https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/
- database.json - Complete database
- providers.json - All providers
- models.json - All models
- models-{provider}.json - Models by provider (e.g., models-openai.json)
- metadata.json - Version and update information
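Because the per-provider files follow the models-{provider}.json pattern, the URL can be built programmatically:

```javascript
// Build the raw-GitHub URL for a provider's model file.
const BASE =
  'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data';

function providerModelsUrl(provider) {
  return `${BASE}/models-${provider}.json`;
}

console.log(providerModelsUrl('openai'));
// https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/models-openai.json
```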
🤝 Contributing
Contributions are welcome! To add a new provider or model:
1. Add provider info to src/providers.ts
2. Create a new file in src/models/ (or update an existing one)
3. Update src/models/index.ts to export your models
4. Run npm run generate to create JSON files
5. Submit a pull request
Adding a New Model
// src/models/yourprovider.ts
import { ModelInfo } from '../types';
export const YOUR_PROVIDER_MODELS: ModelInfo[] = [
{
id: 'model-id',
name: 'Model Name',
provider: 'yourprovider',
releaseDate: '2025-01-01',
status: 'stable',
capabilities: ['chat', 'code-generation'],
tags: ['balanced'],
limits: {
contextWindow: 128000,
maxOutputTokens: 4096,
},
pricing: {
input: 1.0,
output: 3.0,
},
description: 'Model description',
},
];

🛠️ Development & Contributing
Local Development
1. Clone the repository:

git clone https://github.com/gorets/ai-providers.git
cd ai-providers

2. Install dependencies:

npm install

3. Make changes to source files in src/

4. Generate JSON files:

npm run generate

This will:
- Compile TypeScript (npm run build)
- Generate JSON files in the data/ directory
Automated Data Generation
JSON files in data/ are auto-generated from TypeScript sources:
- ✅ On merge to main: GitHub Actions automatically regenerates the data/ files
- ✅ In pull requests: a workflow verifies that data files are in sync with the source code
You don't need to manually regenerate data/, but if you want to preview changes locally:

npm run generate

Adding New Models
1. Edit the appropriate file in src/models/ (e.g., openai.ts, anthropic.ts)
2. Add your model following the ModelInfo interface
3. Run npm run generate to update the JSON files
4. Commit both the source changes and the generated JSON files
5. Create a pull request
Adding New Providers
1. Create a new file in src/models/ (e.g., newprovider.ts)
2. Export a constant array of models
3. Add provider info to src/providers.ts
4. Import and include it in src/models/index.ts
5. Update the LLMProvider union type in src/types.ts
6. Run npm run generate
7. Update the README with the new provider info
Project Structure
ai-providers/
├── src/ # TypeScript source files
│ ├── models/ # Model definitions by provider
│ │ ├── openai.ts
│ │ ├── anthropic.ts
│ │ └── ...
│ ├── providers.ts # Provider metadata
│ ├── types.ts # TypeScript type definitions
│ ├── utils.ts # Utility functions
│ └── index.ts # Main entry point
├── data/ # Auto-generated JSON files (committed to git)
│ ├── database.json
│ ├── models.json
│ └── ...
├── dist/ # Compiled TypeScript (gitignored)
├── scripts/
│ └── build-json.js # JSON generation script
└── .github/workflows/ # GitHub Actions for automation

🚀 Releases & Publishing
Stable Releases
This package uses manual versioning with automated publishing:
1. Update the version in package.json (manually or with npm version)
2. Create a GitHub Release with a tag (e.g., v1.1.0)
3. GitHub Actions automatically publishes to npm
See RELEASE.md for detailed release process.
NPM Package
- Stable releases: Published on npm
- Installation: npm install @gorets/ai-providers
- Versioning: Follows Semantic Versioning
- Provenance: All packages are published with npm provenance attestations for supply-chain security
Development Versions (Optional)
You can enable automatic @next releases for every merge to main:
# Rename workflow to enable
mv .github/workflows/publish-next.yml.disabled .github/workflows/publish-next.yml
# Users can then install latest dev version
npm install @gorets/ai-providers@next

See .github/workflows/ for alternative versioning strategies.
📝 License
MIT License - see LICENSE file for details
🙏 Acknowledgments
Data is collected from official provider documentation and APIs. Pricing and availability may change. Always verify with the official provider documentation.
📮 Support
- Report issues: GitHub Issues
- Questions: Create a discussion on GitHub
Note: This library provides information as-is. Always check official provider documentation for the most current pricing and availability.