Rosetta

The translation layer for LLM provider messages.

Rosetta converts messages between different LLM providers using a standardized intermediate format (GenAI). Just pass in messages from any provider—OpenAI, Anthropic, Google, or even custom formats—and get consistent output. No manual mapping required.

Features

  • 🔄 Convert messages from any supported provider to a unified GenAI format
  • 🔀 Convert GenAI messages to any supported provider format
  • 🪄 Universal fallback - Pass messages from any LLM provider or framework, even unsupported ones, and we'll attempt best-effort conversion
  • 🔍 Automatic provider detection when source is not specified
  • 📝 Full TypeScript support with strict types
  • ✅ Runtime validation with Zod schemas
  • 💾 Preserve provider-specific metadata for lossless round-trips
  • 🌐 Works in Node.js and browsers
  • 🌳 Tree-shakeable ESM build

Installation

npm install rosetta-ai
# or
pnpm add rosetta-ai
# or
yarn add rosetta-ai

Quick Start

import { translate, Provider } from "rosetta-ai";

// Your messages
const messages = [
  { role: "user", parts: [{ type: "text", content: "Hello!" }] },
];

// Convert to GenAI (intermediate format) - auto-infers source
const { messages: genaiMessages } = translate(messages);

// Specify source provider explicitly
const { messages: result } = translate(messages, {
  from: Provider.GenAI,
  to: Provider.GenAI,
});

API

translate / safeTranslate

import { translate, safeTranslate, Provider } from "rosetta-ai";

// translate throws on error
const { messages, system } = translate(inputMessages, {
  from: Provider.GenAI,       // Source provider (optional, auto-inferred if not provided)
  to: Provider.GenAI,         // Target provider (optional, defaults to GenAI)
  system: systemInstructions, // Separated system instructions (optional)
  direction: "input",         // "input" (default) or "output" - affects role interpretation (e.g. "user" vs "assistant")
});

// safeTranslate returns error instead of throwing
const result = safeTranslate(inputMessages);

if (result.error) {
  console.error("Translation failed:", result.error.message);
} else {
  console.log("Translated:", result.messages);
}

Input Flexibility

Messages and system instructions accept flexible input formats:

// Messages: string or array of provider messages
translate("Hello!");                              // Simple string
translate([{ role: "user", content: "Hello!" }]); // Provider message array

// System: string, single object, or array
translate(messages, { system: "You are helpful" });
translate(messages, { system: { type: "text", content: "Be helpful" } });
translate(messages, { system: [{ type: "text", content: "Instructions" }] });

Each provider validates messages with its own Zod schema at runtime.
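To illustrate the kind of structural check such a schema performs, here is a minimal hand-rolled type guard for the GenAI message shape shown above. This is an illustrative sketch only, not the library's actual Zod schemas, which are stricter and cover every part type:

```typescript
// Illustrative only: a structural check mirroring what a per-provider
// Zod schema does at runtime. Field names follow the GenAI examples
// in this README.
interface GenAITextPartShape {
  type: "text";
  content: string;
}

interface GenAIMessageShape {
  role: string;
  parts: GenAITextPartShape[];
}

function isGenAIMessage(value: unknown): value is GenAIMessageShape {
  if (typeof value !== "object" || value === null) return false;
  const msg = value as Record<string, unknown>;
  return (
    typeof msg.role === "string" &&
    Array.isArray(msg.parts) &&
    msg.parts.every((p) => {
      if (typeof p !== "object" || p === null) return false;
      const part = p as Record<string, unknown>;
      return part.type === "text" && typeof part.content === "string";
    })
  );
}
```

When a check like this fails, translate throws, while safeTranslate returns the failure in its error field instead.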

Translator Class

For advanced configuration, use the Translator class:

import { Translator, Provider } from "rosetta-ai";

const translator = new Translator({
  // Custom priority order for provider inference
  inferPriority: [Provider.GenAI],
});

const { messages } = translator.translate(inputMessages);

Supported Providers

| Provider           | Status       | toGenAI | fromGenAI |
| ------------------ | ------------ | ------- | --------- |
| GenAI              | ✅ Available | ✅      | ✅        |
| Promptl            | ✅ Available | ✅      | ✅        |
| VercelAI           | ✅ Available | ✅      | ✅        |
| OpenAI Completions | ✅ Available | ✅      | -         |
| OpenAI Responses   | ✅ Available | ✅      | -         |
| Anthropic          | ✅ Available | ✅      | -         |
| Google Gemini      | ✅ Available | ✅      | -         |
| Compat             | ✅ Available | ✅      | -         |

Universal Compatibility

The Compat provider is a universal fallback that handles messages from any LLM provider—even ones not explicitly supported. When you call translate() without specifying a source provider, Rosetta tries to match against known provider schemas. If none match, it automatically falls back to Compat, which:

  • Normalizes field names across conventions (tool_calls, toolCalls, tool-calls all work)
  • Detects common patterns: roles, content arrays, tool calls, images, reasoning, etc.
  • Handles formats from Cohere, Mistral, Ollama, AWS Bedrock, LangChain, and more
  • Preserves unrecognized data so nothing is lost

// Works with any provider - no need to specify the source
const weirdMessages = [
  { role: "user", content: "Hello" },
  { role: "assistant", tool_calls: [{ id: "1", function: { name: "search", arguments: "{}" } }] },
];

const { messages } = translate(weirdMessages); // Just works™
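The convention-insensitive field matching described above can be sketched as follows. This is an illustration of the idea, not the library's actual Compat implementation, which detects many more patterns:

```typescript
// Illustrative sketch of convention-insensitive key matching, the idea
// behind Compat's field-name normalization. Not the library's code.
function canonicalKey(key: string): string {
  // Strip underscores and hyphens and lowercase, so tool_calls,
  // toolCalls, and tool-calls all collapse to "toolcalls".
  return key.replace(/[-_]/g, "").toLowerCase();
}

function findField(
  message: Record<string, unknown>,
  field: string,
): unknown {
  const wanted = canonicalKey(field);
  for (const [key, value] of Object.entries(message)) {
    if (canonicalKey(key) === wanted) return value;
  }
  return undefined;
}
```

Matching on a canonical form rather than a fixed spelling is what lets one code path absorb the naming conventions of many different SDKs.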

More providers will be added. See AGENTS.md for contribution guidelines.

GenAI Format

GenAI is the intermediate format used for translation. It provides a unified representation of LLM messages:

import type { GenAIMessage, GenAIPart, GenAISystem } from "rosetta-ai";

const message: GenAIMessage = {
  role: "user",
  parts: [
    { type: "text", content: "Hello!" },
    { type: "blob", modality: "image", content: "base64...", mime_type: "image/png" },
  ],
};

const system: GenAISystem = [
  { type: "text", content: "You are a helpful assistant." },
];

Part Types

  • text - Plain text content
  • blob - Binary data (base64 encoded)
  • file - File reference by ID
  • uri - URI reference
  • reasoning - Model reasoning/thinking
  • tool_call - Tool call request
  • tool_call_response - Tool call result
  • generic - Custom/extensible part type
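Because a message's parts array can mix these types freely, consumers typically narrow on the type discriminator. Using the text and blob shapes from the example above (the other members of the union are omitted here), extracting just the text might look like:

```typescript
// Part shapes taken from the GenAI examples in this README; the full
// union has more members (file, uri, reasoning, tool_call, ...).
type GenAITextPart = { type: "text"; content: string };
type GenAIBlobPart = {
  type: "blob";
  modality: string;
  content: string;
  mime_type: string;
};
type GenAIPartSubset = GenAITextPart | GenAIBlobPart;

function textContent(parts: GenAIPartSubset[]): string {
  return parts
    .filter((p): p is GenAITextPart => p.type === "text")
    .map((p) => p.content)
    .join("\n");
}
```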

Provider Metadata

All GenAI entities support _provider_metadata to preserve provider-specific data:

const message: GenAIMessage = {
  role: "assistant",
  parts: [{ type: "text", content: "Hello!" }],
  _provider_metadata: {
    genai: { custom: "data" },
  },
};
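The lossless round-trip guarantee rests on converters passing _provider_metadata through untouched. A sketch of that contract, using a hypothetical transform rather than the library's internals:

```typescript
// Illustrative: a transform that rewrites parts but copies
// _provider_metadata verbatim, the preservation contract that makes
// round-trips lossless.
interface SimpleMessage {
  role: string;
  parts: { type: string; content: string }[];
  _provider_metadata?: Record<string, unknown>;
}

function uppercaseText(message: SimpleMessage): SimpleMessage {
  return {
    ...message,
    parts: message.parts.map((p) =>
      p.type === "text" ? { ...p, content: p.content.toUpperCase() } : p,
    ),
    // Metadata passes through unchanged.
    _provider_metadata: message._provider_metadata,
  };
}
```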

Examples

Check out the examples folder for usage examples (requires building the package first).

Development

Prerequisites

  • Node.js >= 20.0.0
  • pnpm >= 10.0.0

Setup

# Clone the repository
git clone https://github.com/latitude-dev/rosetta-ts.git
cd rosetta-ts

# Install dependencies
pnpm install

Commands

| Command      | Description                             |
| ------------ | --------------------------------------- |
| pnpm install | Install dependencies                    |
| pnpm build   | Build the package                       |
| pnpm dev     | Build in watch mode                     |
| pnpm test    | Run tests                               |
| pnpm lint    | Check for lint, format and type errors  |
| pnpm format  | Format code and fix fixable lint errors |

Adding a New Provider

The AGENTS.md file contains detailed guidelines for AI coding agents, including step-by-step instructions for adding new providers. The easiest way to add a provider is to give a coding agent (such as Cursor or Claude) the provider's message schema along with a prompt like this:

Based on the attached [Provider Name] message schema (see attached), add a
[Provider Name] provider to the package. Follow ALL the guidelines in AGENTS.md.

- This provider will be source-only / source and target.
- This provider does / does not separate system instructions from the message list.
- Build a unified schema if the provider has separate types for input and output.

The schema can be in any format the agent can understand: TypeScript SDK types, JSON Schema, OpenAPI definitions, Python types, or even API documentation.

Example prompt for adding Google Gemini:

Based on the attached Google Gemini TypeScript SDK types (specifically the
messages and system instructions for the GenerateContent function), add a
Google provider to the package. Follow ALL the guidelines in AGENTS.md.

- This provider will be source-only, not a target.
- This provider separates system instructions from the message list.
- Build a unified schema since the provider has different types for input and output.

The agent will handle creating the schema files, implementing the specification, registering the provider, writing tests, and updating documentation—all following the project's conventions.

License

MIT - see LICENSE for details.

Contributing

Contributions are welcome! Please read AGENTS.md for detailed contribution guidelines, including architecture decisions, coding standards, and the step-by-step process for adding new providers.