License: MIT

Client SDK for interacting with the Optimaiz logging & trace system.

Package Exports

  • @endlessriver/optimaiz
  • @endlessriver/optimaiz/dist/index.js

This package does not declare an "exports" field, so the exports above were detected and optimized automatically by JSPM. If a package subpath is missing, consider posting an issue to the original package (@endlessriver/optimaiz) asking it to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
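For package authors, the fix is to declare an "exports" field in package.json. A minimal sketch matching the auto-detected entry point above (the path is taken from the detected exports; verify it against the actual build output):

```json
{
  "name": "@endlessriver/optimaiz",
  "exports": {
    ".": "./dist/index.js"
  }
}
```

Note that once "exports" is declared, all subpaths not listed in it become inaccessible to consumers, so any additional entry points must be listed explicitly.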

Readme

@endlessriver/optimaiz

Unified tracing, feedback, and cost analytics for LLM-based apps.
Drop-in SDK to track prompts, responses, cost, errors, and feedback across OpenAI, LangChain, Sarvam, Gemini, and more. Visit https://optimaiz.io to observe, analyze, optimize, and comply.


📦 Installation

npm install @endlessriver/optimaiz

✨ Key Features

  • Smart Model Selection: automatically selects the best model based on your needs
  • Intelligent Caching: reduces response times by caching responses
  • Performance Optimization: continuously optimizes for best results
  • Tool/Function Calling: provider-agnostic tool management with automatic format conversion


๐Ÿ› ๏ธ Initialization

import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  token: process.env.OPTIMAIZ_API_KEY!,
});

🚀 Basic Usage: call

The call function provides a unified way to interact with the Optimaiz API, handling all the complexity of model selection, caching, and optimization behind the scenes.

const { response, status } = await optimaiz.call({
  promptTemplate: [
    {
      type: "text",
      role: "user",
      value: "Summarize this: {text}"
    }
  ],
  promptVariables: {
    text: "Your text to summarize"
  },
  tools: [weatherTool], // Optional: Include tools for function calling
  modelParams: {
    temperature: 0.7
  },
  threadId: "summary_thread",
  userId: "user_123",
  agentId: "Tool:LLM"
});

The function returns:

  • response: The model's response, including a traceId
  • status: The status of the call; any relevant error is surfaced here

Key benefits of using call:

  • ✅ Automatic model selection based on your needs
  • ✅ Built-in error handling and logging
  • ✅ Intelligent caching for faster responses
  • ✅ Automatic trace generation and management
  • ✅ Seamless integration with the Optimaiz platform

🚀 Basic Usage: wrapLLMCall

const { response, traceId } = await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Summarize this" }],
  promptVariables: {},
  tools: [weatherTool], // Optional: Include tools for function calling
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
    tools: optimaiz.convertToolsToProviderFormat([weatherTool], "openai"), // Convert to provider format
  }),
});

This handles:

  • ✅ Starting the trace
  • ✅ Appending the raw response
  • ✅ Finalizing the trace with latency
  • ✅ Logging any errors

โš™๏ธ Advanced Usage with IDs

const { response } = await optimaiz.wrapLLMCall({
  traceId: "trace_123",
  agentId: "agent:translator",
  userId: "user_456",
  flowId: "translate_email",
  threadId: "email_translation",
  sessionId: "session_2025_06_01_user_456",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Translate to French: {text}" }],
  promptVariables: { text: "Hello, how are you?" },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Translate to French: Hello, how are you?" }],
  }),
});

🧩 Manual Usage (Start, Append, Finalize)

Sometimes you need lower-level control (e.g., multiple responses, partial logs).

🔹 Start a trace manually

await optimaiz.startTrace({
  traceId: "trace_xyz",
  agentId: "imageAnalyzer",
  userId: "user_999",
  flowId: "caption_image",
  promptTemplate: [
    { role: "user", type: "image", value: "https://cdn.site/image.png" },
    { role: "user", type: "text", value: "What's in this image?" }
  ],
  promptVariables: {},
  provider: "openai",
  model: "gpt-4o"
});

🔹 Append a model response

await optimaiz.appendResponse({
  traceId: "trace_xyz",
  rawResponse: response,
  provider: "openai",
  model: "gpt-4o"
});

🔹 Finalize the trace

await optimaiz.finalizeTrace("trace_xyz");

โŒ Log an Error to a Trace

await optimaiz.logError("trace_abc123", {
  message: "Timeout waiting for OpenAI response",
  code: "TIMEOUT_ERROR",
  details: {
    timeout: "30s",
    model: "gpt-4o",
    retryAttempt: 1,
  },
});

🔧 Example Usage

try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  });

  await optimaiz.appendResponse({
    traceId,
    rawResponse: response,
    provider: "openai",
    model: "gpt-4o",
  });

  await optimaiz.finalizeTrace(traceId);
} catch (err: any) {
  await optimaiz.logError(traceId, {
    message: err.message,
    code: err.code || "UNCAUGHT_EXCEPTION",
    details: err.stack,
  });
  throw err;
}

๐Ÿ› ๏ธ Tool Management

Optimaiz supports comprehensive tool/function calling with provider-agnostic interfaces and automatic format conversion.

Standard Tool Definition

const weatherTool: StandardToolDefinition = {
  name: "get_weather",
  description: "Get current weather for a location",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "City name or coordinates"
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        description: "Temperature unit"
      }
    },
    required: ["location"]
  },
  category: "weather",
  tags: ["api", "external"]
};

Tool Prompt Helper

const { promptTemplate, promptVariables } = optimaiz.generatePromptFromTools({
  toolInfo: [weatherTool],
  toolInput: { name: "get_weather", arguments: { location: "Delhi" } },
});

Tool Format Conversion

// Convert standard tools to provider-specific format
const openaiTools = optimaiz.convertToolsToProviderFormat([weatherTool], "openai");
const anthropicTools = optimaiz.convertToolsToProviderFormat([weatherTool], "anthropic");

// Convert provider tools to standard format
const standardTools = optimaiz.convertProviderToolsToStandard(openaiTools, "openai");

// Validate tool definitions
const validation = optimaiz.validateTools([weatherTool]);
if (!validation.valid) {
  console.error("Tool validation errors:", validation.errors);
}
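For intuition, the OpenAI direction of this conversion can be sketched as a plain mapping onto OpenAI's function-tool shape. This is a hypothetical illustration, not the SDK's implementation, and `toOpenAIFormat` is a made-up name; use the SDK's convertToolsToProviderFormat in real code.

```typescript
// Hypothetical sketch of the standard -> OpenAI tool mapping; the SDK's
// convertToolsToProviderFormat handles this (and other providers) for you.
interface StandardToolDefinition {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema object
  category?: string;
  tags?: string[];
}

function toOpenAIFormat(tools: StandardToolDefinition[]) {
  // OpenAI expects { type: "function", function: { name, description, parameters } }
  return tools.map((tool) => ({
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters, // JSON Schema passes through unchanged
    },
  }));
}

const openaiTools = toOpenAIFormat([
  {
    name: "get_weather",
    description: "Get current weather for a location",
    parameters: { type: "object", properties: {}, required: ["location"] },
  },
]);
console.log(openaiTools[0].function.name); // "get_weather"
```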

Tool Execution Tracking

// Track tool execution
await optimaiz.addToolExecution({
  traceId: "trace_123",
  toolId: "weather_api_1",
  toolName: "get_weather",
  executionTime: new Date(),
  duration: 150, // milliseconds
  success: true,
  result: { temperature: 25, unit: "celsius" }
});

// Add tool results to trace
await optimaiz.addToolResults({
  traceId: "trace_123",
  toolResults: [{
    toolCallId: "call_1",
    name: "get_weather",
    result: { temperature: 25, unit: "celsius" }
  }]
});

🔄 Compose Prompts from Template

const { prompts, promptTemplate, promptVariables } = optimaiz.composePrompts(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
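For intuition, the {placeholder} substitution that composePrompts performs on each message can be sketched as follows. This is a simplified stand-in assuming single-brace {name} placeholders; `fillTemplate` is a made-up helper, not part of the SDK.

```typescript
// Simplified sketch of {placeholder} substitution; not the SDK's
// actual implementation.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in vars ? vars[key] : match // leave unknown placeholders untouched
  );
}

const filled = fillTemplate("Write a haiku about {topic}", { topic: "the ocean" });
console.log(filled); // "Write a haiku about the ocean"
```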

📂 Integration Examples

✅ OpenAI SDK

const userPrompt = "Summarize this blog about AI agents";

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "summarizer",
  userId: "user_123",
  promptTemplate: [{ role: "user", type: "text", value: userPrompt }],
  promptVariables: {},
  tools: [weatherTool], // Optional: Include tools
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userPrompt }],
    tools: optimaiz.convertToolsToProviderFormat([weatherTool], "openai"),
  }),
});

✅ LangChain

const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const formatted = await prompt.format({ topic: "elephants" });

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "joke-bot",
  userId: "user_321",
  flowId: "joke-generation",
  promptTemplate: [{ role: "user", type: "text", value: "Tell me a joke about {topic}" }],
  promptVariables: { topic: "elephants" },
  call: () => langchainModel.invoke(formatted),
});

✅ Sarvam AI (Audio)

await optimaiz.wrapLLMCall({
  provider: "sarvam",
  model: "shivang",
  agentId: "transcriber",
  userId: "user_999",
  flowId: "transcribe",
  promptTemplate: [{ role: "user", type: "audio", value: "https://cdn.site/audio.wav" }],
  promptVariables: {},
  call: () => sarvam.speechToText({ url: "https://cdn.site/audio.wav" }),
});

✅ Gemini (Google Vertex AI)

await optimaiz.wrapLLMCall({
  provider: "google",
  model: "gemini-pro",
  promptTemplate: [{ role: "user", type: "text", value: "Write a haiku about the ocean." }],
  promptVariables: {},
  call: () => gemini.generateContent({
    contents: [{ role: "user", parts: [{ text: "Write a haiku about the ocean." }] }],
  }),
});

📊 Field Scope & Best Practices

Field      Scope        Used for...                            Example Value
traceId    Per action   Track 1 LLM/tool call                  trace_a9f3
flowId     Per task     Multi-step task grouping               flow_generate_poem
agentId    Per trace    Identify AI agent handling task        calendarAgent
threadId   Per topic    Group related flows by theme/intent    thread_booking
sessionId  Per session  Temporal or login-bound grouping       session_2025_06_01_user1
userId     Global       Usage, feedback, and cost attribution  user_321

✅ Use These for Full Insight:

  • agentId: Enables per-agent cost & prompt optimization
  • userId: Enables user behavior analytics & pricing insights
  • flowId: Helps trace multi-step user tasks
  • traceId: Use like a span for 1 prompt/response
  • threadId, sessionId: Group related interactions over time or topics
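The sessionId example above suggests a date-plus-user naming convention; a hypothetical helper for generating such IDs consistently might look like this (the convention is an assumption for illustration, not an SDK requirement):

```typescript
// Hypothetical helper: builds a session ID like "session_2025_06_01_user_456".
// The naming convention is an assumption for illustration only.
function makeSessionId(userId: string, date: Date): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  return `session_${y}_${m}_${d}_${userId}`;
}

console.log(makeSessionId("user_456", new Date(Date.UTC(2025, 5, 1))));
// "session_2025_06_01_user_456"
```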

✨ Optimaiz Features

  • ✅ Works with OpenAI, Gemini, Sarvam, Mistral, LangChain, Anthropic
  • 🧠 RAG and function/tool-call support
  • 🛠️ Provider-agnostic tool management with automatic format conversion
  • 🔍 Token usage + latency tracking
  • 📉 Cost and model metadata logging
  • 🧪 Error + feedback logging
  • 🔄 Templated prompt builder + tool integration support
  • 🧩 Full control via start/append/finalize or simple wrapLLMCall

๐Ÿ›ก๏ธ Error Handling

Optimaiz provides comprehensive error handling with specific error types for different scenarios.

Error Types

import { 
  OptimaizError, 
  OptimaizAuthenticationError, 
  OptimaizValidationError, 
  OptimaizServerError 
} from '@endlessriver/optimaiz';

try {
  const result = await optimaiz.call({
    promptTemplate: [{ role: "user", type: "text", value: "Hello" }],
    promptVariables: {}
  });
} catch (error) {
  if (OptimaizClient.isAuthenticationError(error)) {
    // Handle authentication errors (401)
    console.error('Auth error:', error.message);
  } else if (OptimaizClient.isValidationError(error)) {
    // Handle validation errors (400)
    console.error('Validation error:', error.message, error.details);
  } else if (OptimaizClient.isServerError(error)) {
    // Handle server errors (500)
    console.error('Server error:', error.message);
  } else {
    // Handle other errors
    console.error('Unknown error:', error.message);
  }
}

Common Error Scenarios

Error Type      Status  Common Causes                           Solution
Authentication  401     Invalid/missing token                   Check API key validity; add a Provider Key in the Optimaiz Max section
Authorization   403     Invalid app token                       Verify token permissions
Validation      400     Missing required fields, invalid tools  Check request format
Server          500     Database/LLM provider issues            Retry or contact support

Error Properties

All Optimaiz errors include:

  • message: Human-readable error message
  • status: HTTP status code
  • details: Additional error details (if available)
  • type: Error type for programmatic handling
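These properties can be modeled roughly as follows. This is an illustrative sketch of the error shape only; in real code, use the classes exported by @endlessriver/optimaiz.

```typescript
// Illustrative sketch of the documented error shape; the actual classes
// are exported by @endlessriver/optimaiz.
class OptimaizError extends Error {
  constructor(
    message: string,             // human-readable error message
    public status: number,       // HTTP status code
    public type: string,         // error type for programmatic handling
    public details?: unknown,    // additional details, if available
  ) {
    super(message);
    this.name = "OptimaizError";
  }
}

class OptimaizValidationError extends OptimaizError {
  constructor(message: string, details?: unknown) {
    super(message, 400, "validation", details);
  }
}

const err = new OptimaizValidationError("Missing required field: promptTemplate");
console.log(err.status, err.type); // 400 validation
```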

🔗 Get Started

  1. Install: npm install @endlessriver/optimaiz
  2. Add your API key: process.env.OPTIMAIZ_API_KEY
  3. Use wrapLLMCall() for LLM/tool calls
  4. Pass userId, agentId, and flowId for best observability
  5. Analyze and improve prompt cost, user flow, and LLM performance

Need hosted dashboards, insights, or tuning support?
Visit 👉 https://optimaiz.io