JSPM

A unified client for various AI providers - JavaScript/TypeScript SDK

Package Exports

  • indoxrouter
  • indoxrouter/dist/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to open an issue on the original package (indoxrouter) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
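
For illustration, an "exports" field covering the subpaths detected above could look like the following in the package's package.json (the exact mapping is an assumption based on those subpaths):

{
  "name": "indoxrouter",
  "exports": {
    ".": "./dist/index.js",
    "./dist/index.js": "./dist/index.js"
  }
}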

Readme

indoxrouter


A unified TypeScript/JavaScript client for various AI providers through the IndoxRouter API.

Features

  • Unified API: Single interface for multiple AI providers (OpenAI, Anthropic, Google, Mistral, xAI, etc.)
  • TypeScript Support: Full TypeScript definitions included
  • Chat Completions: Generate conversational responses
  • Text Completions: Generate text completions
  • Embeddings: Generate vector embeddings for text
  • Image Generation: Create images from text prompts
  • Text-to-Speech: Convert text to audio
  • Speech-to-Text: Transcribe audio to text
  • Audio Translation: Translate audio to English text
  • BYOK Support: Bring Your Own Keys for direct provider access
  • Streaming Support: Real-time streaming responses
  • Error Handling: Comprehensive error handling with specific exception types

Installation

npm install indoxrouter

Quick Start

// Using ES modules (named import)
import { Client } from "indoxrouter";

// Or using the default import:
//   import Client from "indoxrouter";

// Or using CommonJS:
//   const { Client } = require("indoxrouter");

// Initialize client with API key
const client = new Client("your_api_key");

// Generate a chat completion
const response = await client.chat(
  [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Tell me a joke." },
  ],
  {
    model: "openai/gpt-4o-mini",
  }
);

console.log(response.data);

Client Initialization

Basic Initialization

import { Client } from "indoxrouter";

// With API key as parameter
const client = new Client("your_api_key");

// Or with the INDOX_ROUTER_API_KEY environment variable
const clientFromEnv = new Client();

Model Specification

IndoxRouter uses a consistent format for specifying models: provider/model_name. This allows you to easily switch between providers while keeping your code structure the same.

Examples:

  • openai/gpt-4o-mini
  • anthropic/claude-3-sonnet-20240229
  • mistral/mistral-large-latest
  • google/gemini-1.5-pro
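
Because only the model string changes, switching providers is a one-line edit. A minimal sketch, assuming a client has been initialized as above and both providers are available on your account:

const messages = [{ role: "user", content: "Summarize quantum computing in one sentence." }];

// Same call shape for every provider; only the model string differs.
const fromOpenAI = await client.chat(messages, { model: "openai/gpt-4o-mini" });
const fromAnthropic = await client.chat(messages, {
  model: "anthropic/claude-3-sonnet-20240229",
});

console.log(fromOpenAI.data);
console.log(fromAnthropic.data);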

API Methods

Chat Completions

Generate conversational responses from chat models.

const response = await client.chat(
  [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in simple terms." },
  ],
  {
    model: "openai/gpt-4o-mini",
    temperature: 0.7,
    max_tokens: 1000,
  }
);

console.log(response.data);

Streaming Chat

const stream = await client.chat(
  [{ role: "user", content: "Write a short story." }],
  {
    model: "openai/gpt-4o-mini",
    stream: true,
  }
);

for await (const chunk of stream) {
  if (chunk.data) {
    process.stdout.write(chunk.data);
  }
}

Text Completions

Generate text completions from prompt models.

const response = await client.completion(
  "The future of artificial intelligence is",
  {
    model: "openai/gpt-3.5-turbo-instruct",
    temperature: 0.8,
    max_tokens: 200,
  }
);

console.log(response.data);

Embeddings

Generate vector embeddings for text.

const response = await client.embeddings(
  ["Hello world", "How are you?", "Goodbye"],
  { model: "openai/text-embedding-ada-002" }
);

console.log(response.data); // Array of embedding vectors

Image Generation

Generate images from text prompts.

const response = await client.images("A beautiful sunset over mountains", {
  model: "openai/dall-e-3",
  size: "1024x1024",
  quality: "standard",
  n: 1,
});

console.log(response.data[0].url); // Image URL

Google Imagen Example

const response = await client.images("A futuristic cityscape at night", {
  model: "google/imagen-3.0-generate-002",
  aspect_ratio: "16:9",
  quality: "high",
});

Text-to-Speech

Convert text to audio.

import fs from "node:fs";

const response = await client.textToSpeech("Hello, welcome to IndoxRouter!", {
  model: "openai/tts-1",
  voice: "alloy",
  response_format: "mp3",
  speed: 1.0,
});

// response.data contains audio buffer
fs.writeFileSync("output.mp3", response.data);

Speech-to-Text

Transcribe audio files to text.

const response = await client.speechToText("path/to/audio.mp3", {
  model: "openai/whisper-1",
  language: "en",
  response_format: "json",
});

console.log(response.data); // Transcribed text

With Audio Buffer

import fs from "node:fs";

const audioBuffer = fs.readFileSync("audio.wav");
const response = await client.speechToText(audioBuffer, {
  model: "openai/whisper-1",
  filename: "audio.wav",
});

Audio Translation

Translate audio to English text.

const response = await client.translateAudio("path/to/foreign_audio.mp3", {
  model: "openai/whisper-1",
  response_format: "text",
});

console.log(response.data); // English translation

BYOK (Bring Your Own Key)

Use your own API keys for direct provider access; no credits are deducted from your IndoxRouter account.

const response = await client.chat([{ role: "user", content: "Hello!" }], {
  model: "openai/gpt-4",
  byok_api_key: "sk-your-openai-key-here",
});

Model Management

List Available Models

const models = await client.models();
console.log(models.data); // Array of available models

List Models by Provider

const openaiModels = await client.models("openai");
console.log(openaiModels.data);

Get Model Information

const modelInfo = await client.getModelInfo("openai", "gpt-4o-mini");
console.log(modelInfo.data);

Usage Statistics

Get your usage statistics and credit consumption.

const usage = await client.getUsage();
console.log(usage.data);

Error Handling

The client provides specific exception classes for different error types:

import {
  Client,
  ModelNotFoundError,
  ProviderError,
  AuthenticationError,
  RateLimitError,
  InsufficientCreditsError,
} from "indoxrouter";

try {
  const client = new Client("your_api_key");
  const response = await client.chat([{ role: "user", content: "Hello!" }], {
    model: "nonexistent-provider/nonexistent-model",
  });
} catch (error) {
  if (error instanceof ModelNotFoundError) {
    console.log("Model not found:", error.message);
  } else if (error instanceof ProviderError) {
    console.log("Provider error:", error.message);
  } else if (error instanceof AuthenticationError) {
    console.log("Authentication failed:", error.message);
  } else if (error instanceof RateLimitError) {
    console.log("Rate limit exceeded:", error.message);
  } else if (error instanceof InsufficientCreditsError) {
    console.log("Insufficient credits:", error.message);
  } else {
    console.log("Other error:", error.message);
  }
}

Connection Testing

Test your connection to the IndoxRouter server.

const connectionStatus = await client.testConnection();
if (connectionStatus.status === "connected") {
  console.log("Successfully connected to IndoxRouter");
} else {
  console.log("Connection failed:", connectionStatus.error);
}

Configuration

Environment Variables

Set the API key using environment variables:

export INDOX_ROUTER_API_KEY=your_api_key_here

Debug Logging

Enable debug logging:

const client = new Client("your_api_key");
client.enableDebug("debug");

TypeScript Support

This package includes full TypeScript definitions. All methods are properly typed for a better development experience.

import { Client, Message, APIResponse } from "indoxrouter";

const client = new Client("your_api_key");

const messages: Message[] = [{ role: "user", content: "Hello!" }];

const response: APIResponse<string> = await client.chat(messages);

Common Parameters

All API methods accept common parameters:

  • model: Model to use in format provider/model_name
  • temperature: Controls randomness (0-1); lower values produce more deterministic output. Defaults to 0.7
  • max_tokens: Maximum number of tokens to generate
  • byok_api_key: Your own API key for the provider (BYOK)

Parameters specific to each provider can be passed as extra options alongside the common ones, as shown in the sketch below.
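
A minimal sketch of mixing common and provider-specific options in one call (here top_p stands in for a pass-through provider option and is an assumption; consult your provider's documentation for the exact names it accepts):

const response = await client.chat(
  [{ role: "user", content: "Give me three startup name ideas." }],
  {
    model: "openai/gpt-4o-mini", // common: provider/model_name
    temperature: 0.7, // common: randomness (0-1)
    max_tokens: 200, // common: output length cap
    top_p: 0.9, // assumed provider-specific option, forwarded as-is
  }
);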

Response Structure

Responses follow a consistent structure:

interface APIResponse<T> {
  request_id: string;
  created_at: string;
  duration_ms: number;
  provider: string;
  model: string;
  success: boolean;
  message: string;
  usage: {
    tokens_prompt: number;
    tokens_completion: number;
    tokens_total: number;
    cost: number;
    latency: number;
    timestamp: string;
    cache_read_tokens: number;
    cache_write_tokens: number;
    reasoning_tokens: number;
    web_search_count: number;
    request_count: number;
    cost_breakdown: {
      input_tokens: number;
      output_tokens: number;
      cache_read: number;
      cache_write: number;
      reasoning: number;
      web_search: number;
      request: number;
    };
  };
  raw_response: any;
  data: T;
  finish_reason?: string;
}
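
For instance, the metadata fields above can be used to log cost and latency after a call. A short sketch, using only field names from the interface:

const response = await client.chat([{ role: "user", content: "Hello!" }], {
  model: "openai/gpt-4o-mini",
});

console.log(response.data); // generated content
console.log(response.model); // resolved model name
console.log(`Cost: $${response.usage.cost}`);
console.log(`Tokens: ${response.usage.tokens_total}`);
console.log(`Latency: ${response.duration_ms} ms`);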

License

MIT License - see LICENSE file for details.

Support

For support and questions:

Contributing

Contributions are welcome! Please see our Contributing Guide for details.