JSPM

@gorets/ai-providers 1.0.10 · MIT License

Comprehensive database of AI model providers, models, pricing, and capabilities

Package Exports

  • @gorets/ai-providers
  • @gorets/ai-providers/dist/index.js

This package does not declare an exports field, so the exports above were automatically detected and optimized by JSPM. If a package subpath is missing, consider opening an issue on the original package (@gorets/ai-providers) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

🤖 AI Providers Database

Comprehensive, up-to-date database of AI model providers, models, pricing, and capabilities.


📋 Features

  • Comprehensive Provider Data: Information about 9 major AI providers (OpenAI, Anthropic, Google, xAI, Mistral, Meta, DeepSeek, Z.AI, Alibaba Cloud)
  • 60+ Models: Including latest GPT-5.1, Claude 4.5, Gemini 2.5, Qwen3 Max, and more
  • Detailed Model Information: Context windows, pricing, capabilities, tags, and lifecycle management
  • TypeScript Support: Fully typed for excellent IDE support
  • MCP Support: Models that support Model Context Protocol (MCP) servers are flagged
  • Embedding Models: OpenAI text-embedding models included
  • JSON Access: Direct access to JSON data via GitHub for non-NPM usage
  • Regular Updates: Kept up-to-date with the latest models and pricing
  • Easy to Extend: Simple structure for adding new providers and models

✨ Latest Updates

GPT-5.1 Models (Nov 12, 2025)

  • GPT-5.1 Instant - Most-used model with adaptive reasoning
  • GPT-5.1 Thinking - Advanced reasoning model for complex tasks

OpenAI Embeddings

  • text-embedding-3-large - 3072 dimensions, best quality
  • text-embedding-3-small - 1536 dimensions, 5x cheaper than ada-002
  • text-embedding-ada-002 - Legacy embedding model

Prompt Caching Support

Both OpenAI and Anthropic models support prompt caching for reduced costs on repeated context:

  • OpenAI: GPT-5.1, GPT-5, GPT-4o, GPT-4o mini, O1 models (50% discount on cached tokens)
  • Anthropic: Claude 4.5 models (90% discount on cached tokens)
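To illustrate how these discounts affect a bill, here is a minimal standalone sketch. The per-million-token price is a placeholder, not actual provider pricing; only the 50%/90% discount rates come from the list above:

```typescript
// Illustrative only: the input price is a placeholder, not real provider pricing.
const inputPricePerM = 3.0; // hypothetical $ per 1M input tokens

function cachedCost(newTokens: number, cachedTokens: number, discount: number): number {
  // Cached tokens are billed at (1 - discount) of the normal input rate.
  const newCost = (newTokens / 1_000_000) * inputPricePerM;
  const cachedPart = (cachedTokens / 1_000_000) * inputPricePerM * (1 - discount);
  return newCost + cachedPart;
}

// 50% discount (OpenAI-style) vs 90% discount (Anthropic-style), 200K cached tokens
const openaiStyle = cachedCost(50_000, 200_000, 0.5);    // 0.15 + 0.30 = 0.45
const anthropicStyle = cachedCost(50_000, 200_000, 0.9); // 0.15 + 0.06 = 0.21
```

At the same hypothetical price, the deeper discount cuts the cached portion of the bill by a further 5x.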

MCP (Model Context Protocol) Support

Claude 4.5 models (Haiku, Sonnet, Opus) now support MCP servers for connecting to external tools and data sources.

New Providers

  • xAI - Grok models with real-time X (Twitter) data access
  • Mistral AI - Efficient mixture-of-experts models
  • Meta - Llama 3.x open-source models
  • DeepSeek - Ultra cost-effective models with reasoning
  • Z.AI - Chinese AI provider with competitive pricing
  • Alibaba Cloud - Qwen models with trillion-parameter flagship, vision, and audio capabilities

📦 Installation

npm install @gorets/ai-providers

or

yarn add @gorets/ai-providers

📡 Direct JSON Access (Without NPM)

You can access the JSON data files directly from GitHub without installing the package:

Available JSON Files

All data is available at https://raw.githubusercontent.com/gorets/ai-providers/main/data/:

Models by Provider

Example: Fetch JSON in Browser/Node.js

// Fetch all models
const response = await fetch('https://raw.githubusercontent.com/gorets/ai-providers/main/data/models.json');
const models = await response.json();

// Fetch OpenAI models only
const openaiResponse = await fetch('https://raw.githubusercontent.com/gorets/ai-providers/main/data/models-openai.json');
const openaiModels = await openaiResponse.json();

// Find GPT-5.1 (the result may be undefined if the id changes)
const gpt51 = openaiModels.find(m => m.id === 'gpt-5.1-instant' || m.aliases?.includes('gpt-5.1'));
if (gpt51) {
  console.log(`${gpt51.name}: $${gpt51.pricing.input}/1M input tokens`);
}

Example: Python

import requests

# Fetch all models
response = requests.get('https://raw.githubusercontent.com/gorets/ai-providers/main/data/models.json')
models = response.json()

# Find models with vision capability
vision_models = [m for m in models if 'vision' in m['capabilities']]
print(f"Found {len(vision_models)} models with vision support")

Example: curl

# Download all models
curl -O https://raw.githubusercontent.com/gorets/ai-providers/main/data/models.json

# Download specific provider models
curl -O https://raw.githubusercontent.com/gorets/ai-providers/main/data/models-openai.json

🚀 Usage

TypeScript/JavaScript

import { getDatabase, getModelById, getModelsByIds, getModelsByProvider } from '@gorets/ai-providers';

// Get complete database
const db = getDatabase();
console.log(`Total providers: ${db.providers.length}`);
console.log(`Total models: ${db.models.length}`);

// Get specific model (undefined if the id is unknown)
const gpt5 = getModelById('gpt-5.1-instant');
if (gpt5) {
  console.log(`${gpt5.name}: $${gpt5.pricing?.input}/1M input tokens`);
}

// Get multiple models by IDs
const models = getModelsByIds(['gpt-5.1-instant', 'claude-sonnet-4-5', 'gemini-2.5-flash']);
models.forEach(model => {
  if (model) {
    console.log(`${model.name}: $${model.pricing?.input}/1M input tokens`);
  }
});

// Get all models from a provider
const anthropicModels = getModelsByProvider('anthropic');
console.log(`Anthropic has ${anthropicModels.length} models`);

Filtering Models

import { getModelsByTag, getModelsByCapability } from '@gorets/ai-providers';

// Find all cost-effective models
const cheapModels = getModelsByTag('cost-effective');

// Find all models with vision capabilities
const visionModels = getModelsByCapability('vision');

// Find models with reasoning
const reasoningModels = getModelsByTag('reasoning');

Utility Functions

The library includes powerful helper functions for common tasks:

Cost Calculation

import { calculateCost, calculateCostWithCache, compareCosts } from '@gorets/ai-providers';

// Calculate cost for specific usage
const cost = calculateCost('gpt-4o-mini-2024-07-18', 50000, 10000);
console.log(`Total cost: $${cost.totalCost.toFixed(4)}`);

// Calculate with prompt caching (Anthropic & OpenAI models)
const costWithCache = calculateCostWithCache(
  'claude-sonnet-4-5',
  50000,   // new input tokens
  200000,  // cached input tokens
  10000    // output tokens
);

// Works with OpenAI models too
const openaiCached = calculateCostWithCache(
  'gpt-4o-2024-08-06',
  30000,   // new input tokens
  100000,  // cached input tokens
  5000     // output tokens
);

// Compare costs across multiple models
const comparison = compareCosts(
  ['gpt-4o-mini-2024-07-18', 'claude-haiku-4-5', 'gemini-2.5-flash'],
  100000,
  20000
);

Model Search

import { searchModels, getCheapestModel, getRecommendedModels } from '@gorets/ai-providers';

// Search with multiple criteria
const results = searchModels({
  provider: 'openai',
  capabilities: ['vision', 'function-calling'],
  maxPrice: 3.0,
  minContextWindow: 100000,
});

// Find cheapest model with specific requirements
const cheapest = getCheapestModel({
  capabilities: ['vision', 'chat'],
  activeOnly: true,
});

// Get recommended models for a use case
const recommended = getRecommendedModels({
  budget: 'low',
  priority: 'speed',
  capabilities: ['chat', 'function-calling'],
});

Deprecation Management

import {
  isModelDeprecated,
  getReplacementModel,
  getModelsShuttingDownSoon,
  getActiveModels
} from '@gorets/ai-providers';

// Check if model is deprecated
const status = isModelDeprecated('gpt-3.5-turbo-0125');
if (status.isDeprecated) {
  console.log(`Deprecated on: ${status.deprecationDate}`);
  console.log(`Shuts down: ${status.shutdownDate}`);

  // Get replacement model (may be undefined if none is listed)
  const replacement = getReplacementModel('gpt-3.5-turbo-0125');
  console.log(`Use instead: ${replacement?.name}`);
}

// Find models shutting down in next 90 days
const shuttingDown = getModelsShuttingDownSoon(90);

// Get only active models
const activeModels = getActiveModels();

Context and Features

import { getModelsWithLargestContext, getModelById, getModelsByIds } from '@gorets/ai-providers';

// Find models with largest context windows
const largestContext = getModelsWithLargestContext(5);

// Search by alias
const model = getModelById('claude-sonnet-4-5'); // Works with aliases!

// Get multiple models at once
const selectedModels = getModelsByIds([
  'gpt-5.1-instant',
  'claude-sonnet-4-5',
  'gemini-2.5-flash',
  'grok-3'
]);
// Returns array of ModelInfo | undefined in the same order

Direct JSON Access (No Installation Required)

You can access the latest data directly from GitHub:

// Complete database
const response = await fetch(
  'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/database.json'
);
const database = await response.json();

// Just models
const modelsResponse = await fetch(
  'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/models.json'
);
const models = await modelsResponse.json();

// Models by provider
const openaiModels = await fetch(
  'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/models-openai.json'
);

📊 Data Structure

Provider Information

interface ProviderInfo {
  id: string;
  name: string;
  website: string;
  apiDocsUrl: string;
  icon: string;
  color: string;
  description: string;
  features: string[];
}

Model Information

interface ModelInfo {
  id: string;                    // Model identifier for API calls
  aliases?: string[];            // Alternative IDs (e.g., 'claude-sonnet-4-5' for 'claude-sonnet-4-5-20250929')
  name: string;                  // Human-readable name
  provider: string;              // Provider ID
  releaseDate?: string;          // ISO 8601 date
  status: 'stable' | 'beta' | 'experimental' | 'deprecated' | 'disabled' | 'preview';
  capabilities: string[];        // e.g., ['chat', 'vision', 'function-calling']
  tags: string[];                // e.g., ['fast', 'cost-effective']
  limits: {
    contextWindow: number;       // Max context in tokens
    maxOutputTokens: number;     // Max output per request
  };
  pricing?: {
    input: number;               // Per 1M input tokens (USD)
    output: number;              // Per 1M output tokens (USD)
    cachedInput?: number;        // Cached input cost (if supported)
  };
  description?: string;
  docsUrl?: string;
  deprecationDate?: string;      // When model was deprecated (ISO 8601)
  shutdownDate?: string;         // When model stops working (ISO 8601)
  replacementModel?: string;     // ID of replacement model
}

🏷️ Model Tags

  • flagship - Top-tier model from the provider
  • fast - Optimized for speed
  • cost-effective - Budget-friendly option
  • balanced - Good balance of quality and cost
  • experimental - Experimental/preview version
  • long-context - Extended context window (>100K tokens)
  • multimodal - Supports multiple input types (text, images, audio)
  • reasoning - Enhanced reasoning capabilities
  • coding - Optimized for code generation
  • deprecated - No longer recommended for new projects

📊 Model Status

Models go through different lifecycle stages:

  • stable - Production-ready, fully supported
  • beta - Feature-complete but may have minor issues
  • experimental - Early access, may change significantly
  • preview - Pre-release version for testing
  • deprecated - Still works but superseded by newer models
  • disabled - No longer available, API calls will fail
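A typical consumer of these statuses filters out anything past its prime before selecting a model. The sketch below is a hypothetical standalone example mirroring the statuses listed above, not the package's internal code:

```typescript
type Status = 'stable' | 'beta' | 'experimental' | 'preview' | 'deprecated' | 'disabled';

interface LifecycleEntry { id: string; status: Status; }

// Treat only stable and beta models as safe for production use.
function productionReady(models: LifecycleEntry[]): LifecycleEntry[] {
  return models.filter(m => m.status === 'stable' || m.status === 'beta');
}

// Sample data for illustration only
const sample: LifecycleEntry[] = [
  { id: 'model-a', status: 'stable' },
  { id: 'model-b', status: 'deprecated' },
  { id: 'model-c', status: 'beta' },
];

const ready = productionReady(sample); // keeps model-a and model-c
```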

Deprecated Models: When a model is deprecated, check these fields:

  • deprecationDate - When it was marked as deprecated
  • shutdownDate - When it will stop working (if known)
  • replacementModel - Recommended model to migrate to

Model Aliases: Some models have multiple identifiers. The main ID is the simple name, with dated versions in aliases:

{
  id: 'claude-sonnet-4-5',  // Simple name as main ID
  aliases: ['claude-sonnet-4-5-20250929', 'claude-sonnet-4.5'],  // Dated version in aliases
  // ... other fields
}

You can use either the main ID or any alias when searching for models.
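Alias-aware lookup can be sketched as below; this is a hypothetical standalone implementation for illustration, not the package's internal code:

```typescript
interface AliasedModel { id: string; aliases?: string[]; }

// Match either the main id or any of the model's aliases.
function findByIdOrAlias(models: AliasedModel[], query: string): AliasedModel | undefined {
  return models.find(m => m.id === query || m.aliases?.includes(query));
}

// Sample catalog using the alias shape shown above
const catalog: AliasedModel[] = [
  { id: 'claude-sonnet-4-5', aliases: ['claude-sonnet-4-5-20250929', 'claude-sonnet-4.5'] },
];

const byId = findByIdOrAlias(catalog, 'claude-sonnet-4-5');
const byAlias = findByIdOrAlias(catalog, 'claude-sonnet-4-5-20250929');
// Both resolve to the same model entry
```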

🎯 Model Capabilities

  • text-generation - Generate text
  • chat - Conversational AI
  • code-generation - Code writing and completion
  • vision - Image understanding
  • image-generation - Image creation
  • function-calling - Tool/function calling
  • streaming - Streaming responses
  • json-mode - Structured JSON output
  • reasoning - Advanced reasoning
  • embeddings - Text embeddings
  • audio-input - Audio understanding
  • audio-output - Speech synthesis
  • mcp-servers - Model Context Protocol (MCP) server support

🔄 Available Endpoints

All JSON files are available at: https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data/

  • database.json - Complete database
  • providers.json - All providers
  • models.json - All models
  • models-{provider}.json - Models by provider (e.g., models-openai.json)
  • metadata.json - Version and update information
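Because the per-provider files follow the models-{provider}.json naming pattern, the URL for any provider can be built mechanically. A small sketch (BASE matches the URL above):

```typescript
const BASE = 'https://raw.githubusercontent.com/gorets/ai-providers/main/dist/data';

// Build the JSON URL for a provider's model list, e.g. 'openai' -> .../models-openai.json
function providerModelsUrl(provider: string): string {
  return `${BASE}/models-${provider.toLowerCase()}.json`;
}

const url = providerModelsUrl('openai');
// Pass the result to fetch() to download that provider's models
```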

🤝 Contributing

Contributions are welcome! To add a new provider or model:

  1. Add provider info to src/providers.ts
  2. Create a new file in src/models/ (or update existing)
  3. Update src/models/index.ts to export your models
  4. Run npm run generate to create JSON files
  5. Submit a pull request

Adding a New Model

// src/models/yourprovider.ts
import { ModelInfo } from '../types';

export const YOUR_PROVIDER_MODELS: ModelInfo[] = [
  {
    id: 'model-id',
    name: 'Model Name',
    provider: 'yourprovider',
    releaseDate: '2025-01-01',
    status: 'stable',
    capabilities: ['chat', 'code-generation'],
    tags: ['balanced'],
    limits: {
      contextWindow: 128000,
      maxOutputTokens: 4096,
    },
    pricing: {
      input: 1.0,
      output: 3.0,
    },
    description: 'Model description',
  },
];

🛠️ Development & Contributing

Local Development

  1. Clone the repository:

git clone https://github.com/gorets/ai-providers.git
cd ai-providers

  2. Install dependencies:

npm install

  3. Make changes to source files in src/

  4. Generate JSON files:

npm run generate

This will:

  • Compile TypeScript (npm run build)
  • Generate JSON files in data/ directory

Automated Data Generation

JSON files in data/ are auto-generated from TypeScript sources:

  • On merge to main: GitHub Actions automatically regenerates data/ files
  • In Pull Requests: Workflow verifies data files are in sync with source code

You don't need to manually regenerate data/, but if you want to preview changes locally:

npm run generate

Adding New Models

  1. Edit the appropriate file in src/models/ (e.g., openai.ts, anthropic.ts)
  2. Add your model following the ModelInfo interface
  3. Run npm run generate to update JSON files
  4. Commit both source changes and generated JSON files
  5. Create a pull request

Adding New Providers

  1. Create a new file in src/models/ (e.g., newprovider.ts)
  2. Export a constant array of models
  3. Add provider info to src/providers.ts
  4. Import and include in src/models/index.ts
  5. Update type in src/types.ts (LLMProvider union)
  6. Run npm run generate
  7. Update README with new provider info

Project Structure

ai-providers/
├── src/                    # TypeScript source files
│   ├── models/            # Model definitions by provider
│   │   ├── openai.ts
│   │   ├── anthropic.ts
│   │   └── ...
│   ├── providers.ts       # Provider metadata
│   ├── types.ts          # TypeScript type definitions
│   ├── utils.ts          # Utility functions
│   └── index.ts          # Main entry point
├── data/                  # Auto-generated JSON files (committed to git)
│   ├── database.json
│   ├── models.json
│   └── ...
├── dist/                  # Compiled TypeScript (gitignored)
├── scripts/
│   └── build-json.js     # JSON generation script
└── .github/workflows/    # GitHub Actions for automation

🚀 Releases & Publishing

Stable Releases

This package uses manual versioning with automated publishing:

  1. Update version in package.json (manually or with npm version)
  2. Create GitHub Release with tag (e.g., v1.1.0)
  3. GitHub Actions automatically publishes to npm

See RELEASE.md for detailed release process.

NPM Package

  • Stable releases: Published on npm
  • Installation: npm install @gorets/ai-providers
  • Versioning: Follows Semantic Versioning
  • Provenance: All packages published with supply chain security

Development Versions (Optional)

You can enable automatic @next releases for every merge to main:

# Rename workflow to enable
mv .github/workflows/publish-next.yml.disabled .github/workflows/publish-next.yml

# Users can then install latest dev version
npm install @gorets/ai-providers@next

See .github/workflows/ for alternative versioning strategies.

📝 License

MIT License - see LICENSE file for details

🙏 Acknowledgments

Data is collected from official provider documentation and APIs. Pricing and availability may change. Always verify with the official provider documentation.

📮 Support

  • Report issues: GitHub Issues
  • Questions: Create a discussion on GitHub

Note: This library provides information as-is. Always check official provider documentation for the most current pricing and availability.