RouteLayer (JavaScript / TypeScript)

Unified LLM routing with automatic fallback, cost-aware model selection, and semantic caching.

RouteLayer is a lightweight middleware with no dependencies beyond the provider SDKs. It sits between your application and your LLM providers and keeps inference costs in check by intelligently routing requests based on priority, cost, and semantic similarity.

Why RouteLayer?

  • Simpler than LiteLLM — No complex configuration, no learning curve. Define your providers and go.
  • Zero-config — Sensible defaults out of the box. Just add your API keys.
  • No proxy server — RouteLayer runs in-process. No sidecar, no Docker container, no extra infra to manage.
  • Works in serverless — Perfect for Vercel, Cloudflare Workers, AWS Lambda, and any edge runtime. No persistent connections required (see the sketch after this list).
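
To make the in-process model concrete, here is a minimal sketch of RouteLayer inside a Cloudflare Workers-style fetch handler. The handler shape and the env binding name are assumptions for illustration; the RouteLayer options mirror the Quick Start below.

import { RouteLayer } from 'routelayer';

export default {
  // Workers-style module handler; env carries the secrets bound to the worker.
  async fetch(request: Request, env: { OPENAI_API_KEY: string }): Promise<Response> {
    // Constructed per request: no sidecar, no warm state, no open connections.
    const rl = new RouteLayer({
      providers: [{
        name: 'openai',
        model: 'gpt-4o-mini',
        apiKey: env.OPENAI_API_KEY,
        priority: 0,
        costPer1kTokens: 0.0006,
      }],
    });

    const { prompt } = (await request.json()) as { prompt: string };
    const result = await rl.generate(prompt);
    return new Response(JSON.stringify({ text: result.text, cached: result.cached }), {
      headers: { 'content-type': 'application/json' },
    });
  },
};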

Features

  • Unified API: Call OpenAI, Anthropic, and Gemini through a single, standardized interface.
  • Zero-Downtime Fallback: Automatically fails over to a secondary provider if the primary provider times out or goes down.
  • Cost-Aware Routing: Automatically select the cheapest available model for a given request.
  • Built-in Semantic Caching: Stop paying for the same answer twice. RouteLayer includes a lightweight, local, zero-dependency semantic cache that returns cached responses for semantically similar prompts.

Installation

npm install routelayer
# or
pnpm add routelayer
# or
yarn add routelayer

You will also need to install the SDKs for the providers you intend to use:

npm install openai @anthropic-ai/sdk @google/generative-ai

Quick Start

import { RouteLayer } from 'routelayer';

// 1. Initialize the router with your preferred models
const rl = new RouteLayer({
  providers: [
    {
      name: 'openai',
      model: 'gpt-4o-mini',
      apiKey: process.env.OPENAI_API_KEY!,
      priority: 0, // Try first
      costPer1kTokens: 0.0006,
    },
    {
      name: 'anthropic',
      model: 'claude-haiku-4-5-20251001',
      apiKey: process.env.ANTHROPIC_API_KEY!,
      priority: 1, // Fallback
      costPer1kTokens: 0.001,
    }
  ],
  strategy: 'priority', // Or 'cheapest'
  verbose: true
});

async function main() {
  // 2. Generate a response
  const response = await rl.generate('Explain quantum computing in one sentence.');

  // 3. Inspect the standardized response
  console.log(response.text);
  console.log(`Served by: ${response.provider} (${response.model})`);
  console.log(`Cost: $${response.costUsd.toFixed(6)}`);
  console.log(`Cached: ${response.cached}`);
}

main();
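
For reference, the standardized response used above looks roughly like this. This is a sketch inferred from the fields in the Quick Start, not the package's published type definitions:

interface RouteLayerResponse {
  text: string;      // the generated completion
  provider: string;  // provider that served the request, e.g. 'openai'
  model: string;     // concrete model used, e.g. 'gpt-4o-mini'
  costUsd: number;   // estimated cost of the call in USD
  cached: boolean;   // true when served from the semantic cache
}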

Routing Strategies

  • priority (Default): Tries providers in ascending order of their priority value. Great for setting up a primary model and a reliable fallback.
  • cheapest: Ignores priority and always attempts the provider with the lowest costPer1kTokens.
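
Switching strategies is a one-line change. A sketch, reusing the provider shape from the Quick Start:

import { RouteLayer } from 'routelayer';

// With 'cheapest', the priority fields are ignored and
// costPer1kTokens decides which provider is attempted.
const rl = new RouteLayer({
  providers: [
    { name: 'openai', model: 'gpt-4o-mini', apiKey: process.env.OPENAI_API_KEY!, priority: 0, costPer1kTokens: 0.0006 },
    { name: 'anthropic', model: 'claude-haiku-4-5-20251001', apiKey: process.env.ANTHROPIC_API_KEY!, priority: 1, costPer1kTokens: 0.001 },
  ],
  strategy: 'cheapest', // openai is attempted here, since 0.0006 < 0.001
});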

Semantic Caching

RouteLayer includes a built-in semantic cache. If a user asks "What is your refund policy?" and later asks "How do I get a refund?", the cache intercepts the second request and returns the cached answer without hitting the LLM API, saving you money and cutting latency to near zero.

import { RouteLayer, SemanticCache } from 'routelayer';

// Configure the cache (defaults are usually fine)
const cache = new SemanticCache({
  threshold: 0.92,   // Similarity threshold (0.0 to 1.0)
  ttlSeconds: 3600,  // Time to live (1 hour)
  maxSize: 1000      // Max entries in memory
});

const rl = new RouteLayer({
  providers: [...],
  cache
});
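
A note on threshold: it is a similarity cutoff, so values closer to 1.0 only match near-identical prompts, while lower values match more loosely at the risk of returning a cached answer to a genuinely different question. On a cache hit, the standardized response reports cached: true.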

License

MIT License. Built by Freedom Engineers — routelayer.io