@robilabs/lexa 2.0.1
JavaScript SDK for Lexa, the AI Model by Robi Labs. Build with powerful chat, text, and multimodal APIs.

Lexa TypeScript and JavaScript API Library

This library provides convenient access to the Lexa REST API from TypeScript or JavaScript.

To learn how to use the Lexa API, check out the API Reference and Documentation.

Installation

npm install @robilabs/lexa

Usage

import Lexa from '@robilabs/lexa';

const client = new Lexa(process.env['LEXA_API_KEY']); // Reads LEXA_API_KEY by default, so this argument can be omitted

const completion = await client.chat({
  model: 'lexa-mml',
  messages: [{ role: 'user', content: 'Are semicolons optional in JavaScript?' }],
});

console.log(completion.choices[0].message.content);

Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

import Lexa from '@robilabs/lexa';

const client = new Lexa();

const completion = await client.chat({
  model: 'lexa-mml',
  messages: [{ role: 'user', content: 'Say "Sheep sleep deep" ten times fast!' }],
  stream: true,
});

for await (const chunk of completion.stream) {
  console.log(chunk);
}
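
Under the hood, an SSE stream is plain text: frames separated by blank lines, with each payload on a `data:` line. The sketch below parses such a buffer; it is a generic illustration of the SSE wire format (including the conventional `[DONE]` terminator), not the SDK's internal implementation.

```javascript
// Parse a raw SSE buffer into JSON payloads.
// Frames are separated by blank lines; each payload sits on a "data:" line.
function parseSSE(buffer) {
  const events = [];
  for (const frame of buffer.split('\n\n')) {
    for (const line of frame.split('\n')) {
      if (!line.startsWith('data:')) continue;
      const payload = line.slice('data:'.length).trim();
      if (payload === '[DONE]') return events; // conventional end-of-stream marker
      events.push(JSON.parse(payload));
    }
  }
  return events;
}

// Example: two chunks followed by the terminator.
const raw = 'data: {"delta":"Sheep"}\n\ndata: {"delta":" sleep"}\n\ndata: [DONE]\n\n';
console.log(parseSSE(raw)); // [ { delta: 'Sheep' }, { delta: ' sleep' } ]
```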

Models

Listing available models

Models are fetched live from the API and cached after the first call.

const { data: models } = await client.models();
console.log(models);
// [
//   { id: 'lexa-mml', object: 'model', owned_by: 'lexa', input_token_rate: 0.002,   output_token_rate: 0.008  },
//   { id: 'lexa-x1',  object: 'model', owned_by: 'lexa', input_token_rate: 0.00012, output_token_rate: 0.0005 },
//   { id: 'lime',     object: 'model', owned_by: 'lexa', input_token_rate: 0.0004,  output_token_rate: 0.002  },
// ]

Available models

Model     Description                                 Input ($/1K tokens)   Output ($/1K tokens)
lexa-mml  Multimodal model with vision capabilities   $0.002                $0.008
lexa-x1   Fast, lightweight text-based model          $0.00012              $0.0005
lime      Balanced model for general-purpose tasks    $0.0004               $0.002
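
Because rates are quoted per 1K tokens, the cost of a call is a simple proportion. The helper below is illustrative only (the rates are copied from the table above; `estimateCost` is not part of the SDK):

```javascript
// Per-1K-token rates, taken from the pricing table above.
const RATES = {
  'lexa-mml': { input: 0.002, output: 0.008 },
  'lexa-x1':  { input: 0.00012, output: 0.0005 },
  'lime':     { input: 0.0004, output: 0.002 },
};

// Estimate the dollar cost of a single call.
function estimateCost(model, inputTokens, outputTokens) {
  const r = RATES[model];
  return (inputTokens / 1000) * r.input + (outputTokens / 1000) * r.output;
}

// Cost in dollars for 1,500 input and 500 output tokens on lexa-mml (about $0.007).
console.log(estimateCost('lexa-mml', 1500, 500));
```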

Getting a single model

const model = await client.model('lexa-mml');
console.log(model?.id); // "lexa-mml"

Refreshing the model cache

const { data: models } = await client.refreshModels();
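
The cache-then-refresh behavior described above follows a common pattern: fetch once, memoize, and re-fetch only on explicit request. A generic sketch of that pattern (an illustration mirroring the `models()` / `refreshModels()` pair, not the SDK's actual implementation):

```javascript
// Cache the result of an async fetcher after the first call,
// with an explicit refresh() that forces a re-fetch.
function cached(fetcher) {
  let value;
  return {
    async get() {
      if (value === undefined) value = await fetcher();
      return value;
    },
    async refresh() {
      value = await fetcher();
      return value;
    },
  };
}

(async () => {
  let calls = 0;
  const models = cached(async () => { calls += 1; return ['lexa-mml', 'lexa-x1', 'lime']; });
  await models.get();     // first call hits the fetcher
  await models.get();     // served from cache
  await models.refresh(); // forces a re-fetch
  console.log(calls); // 2
})();
```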

Handling errors

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), an APICallError will be thrown:

import Lexa from '@robilabs/lexa';

const client = new Lexa();

const completion = await client
  .chat({
    model: 'lexa-mml',
    messages: [{ role: 'user', content: 'Hello!' }],
  })
  .catch((err) => {
    console.log(err.statusCode); // 401
    console.log(err.message);   // "Lexa API error: 401 Unauthorized"
    throw err;
  });

Status Code  Description
401          Invalid or missing API key
429          Rate limit exceeded (retryable)
>= 500       Internal server error
N/A          Network / connection error

Retries

Requests that fail with a 429 (rate limit) or a 5xx status are automatically retried. The built-in isRetryable flag is set accordingly, so upstream AI SDK tooling can apply backoff automatically.
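
A retry loop of this shape can be sketched as follows. This is a generic exponential-backoff illustration under the assumption that errors carry an `isRetryable` flag; the delays and retry count are arbitrary, not the SDK's exact policy:

```javascript
// Retry an async operation while the thrown error is marked retryable,
// doubling the delay between attempts (exponential backoff).
async function withRetries(operation, { maxRetries = 3, baseDelayMs = 250 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (!err.isRetryable || attempt >= maxRetries) throw err;
      const delay = baseDelayMs * 2 ** attempt; // 250ms, 500ms, 1000ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: an operation that fails twice with a retryable error, then succeeds.
let attempts = 0;
withRetries(async () => {
  attempts += 1;
  if (attempts < 3) {
    const err = new Error('429 Rate limit');
    err.isRetryable = true;
    throw err;
  }
  return 'ok';
}, { baseDelayMs: 1 }).then((result) => console.log(result, attempts)); // ok 3
```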

Advanced Usage

Custom base URL

const client = new Lexa('your-api-key', {
  baseURL: 'https://your-custom-endpoint.com/v1',
});

Custom headers

const client = new Lexa('your-api-key', {
  headers: {
    'X-Custom-Header': 'my-value',
  },
});

Using with the AI SDK

@robilabs/lexa implements the LanguageModelV2 interface from @ai-sdk/provider, so it works seamlessly with the Vercel AI SDK:

import { LexaProvider } from '@robilabs/lexa';
import { generateText } from 'ai';

const provider = new LexaProvider({ apiKey: 'your-api-key' });

const { text } = await generateText({
  model: provider.languageModel('lexa-mml'),
  prompt: 'Write a haiku about the ocean.',
});

console.log(text);

Migration from OpenAI

If you're currently using OpenAI's SDK, migrating to Lexa is straightforward:

Before (OpenAI):

import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'your-key' });

const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(completion.choices[0].message.content);

After (Lexa):

import Lexa from '@robilabs/lexa';

const client = new Lexa('your-key');

const completion = await client.chat({
  model: 'lexa-mml',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(completion.choices[0].message.content);

Getting Your API Key

  1. Visit lexa.chat
  2. Sign up or log in to your account
  3. Navigate to Account → API Keys
  4. Generate a new API key and use it in your application

We recommend storing your key in an environment variable:

export LEXA_API_KEY="your-api-key"

Then read it when constructing the client:

const client = new Lexa(process.env['LEXA_API_KEY']);

Semantic versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

  1. Changes that only affect static types, without breaking runtime behavior.
  2. Changes to library internals which are technically public but not intended for external use.
  3. Changes that we do not expect to impact the vast majority of users in practice.

We are keen for your feedback — please open an issue with questions, bugs, or suggestions.

Requirements

TypeScript >= 4.9 is supported.

The following runtimes are supported:

  • Node.js 18 LTS or later
  • Bun 1.0 or later
  • Cloudflare Workers
  • Vercel Edge Runtime
  • Modern web browsers (be cautious about exposing your API key in client-side code)

License

MIT — see LICENSE for details.


Made with ❤️ by Robi Labs