@endlessriver/optimaiz
Unified tracing, feedback, and cost analytics for LLM-based apps.
Drop-in SDK to track prompts, responses, cost, errors, and feedback across OpenAI, LangChain, Sarvam, Gemini, and more. Visit https://optimaiz.io to observe, analyze, optimize, and comply.
📦 Installation

```bash
npm install @endlessriver/optimaiz
```

✨ Key Features
- **Smart Model Selection**: Automatically selects the best model based on your needs
- **Intelligent Caching**: Optimizes response times with smart caching
- **Performance Optimization**: Continuously optimizes for best results
- **Tool/Function Calling**: Provider-agnostic tool management with automatic format conversion
🛠️ Initialization

```ts
import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  token: process.env.OPTIMAIZ_API_KEY!,
});
```

🚀 Basic Usage: call
The `call` function provides a unified way to interact with the Optimaiz API, handling model selection, caching, and optimization behind the scenes.
```ts
const { response, status } = await optimaiz.call({
  promptTemplate: [
    {
      type: "text",
      role: "user",
      value: "Summarize this: {text}"
    }
  ],
  promptVariables: {
    text: "Your text to summarize"
  },
  tools: [weatherTool], // Optional: Include tools for function calling
  modelParams: {
    temperature: 0.7
  },
  threadId: "summary_thread",
  userId: "user_123",
  agentId: "Tool:LLM"
});
```

The function returns:
- `data`: The model's response, including a `traceId`
- `error`: The relevant error, `null` on success
Key benefits of using `call`:
- ✅ Automatic model selection based on your needs
- ✅ Built-in error handling and logging
- ✅ Intelligent caching for faster responses
- ✅ Automatic trace generation and management
- ✅ Seamless integration with the Optimaiz platform
🚀 Basic Usage: wrapLLMCall

```ts
const { response, traceId } = await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Summarize this" }],
  promptVariables: {},
  tools: [weatherTool], // Optional: Include tools for function calling
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
    tools: optimaiz.convertToolsToProviderFormat([weatherTool], "openai"), // Convert to provider format
  }),
});
```

This handles:
- ✅ Start trace
- ✅ Append raw response
- ✅ Finalize trace with latency
- ✅ Log any errors
⚙️ Advanced Usage with IDs

```ts
const { response } = await optimaiz.wrapLLMCall({
  traceId: "trace_123",
  agentId: "agent:translator",
  userId: "user_456",
  flowId: "translate_email",
  threadId: "email_translation",
  sessionId: "session_2025_06_01_user_456",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Translate to French: {text}" }],
  promptVariables: { text: "Hello, how are you?" },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Translate to French: Hello, how are you?" }],
  }),
});
```

🧩 Manual Usage (Start, Append, Finalize)
Sometimes you need lower-level control (e.g., multiple responses, partial logs).
🔹 Start a trace manually

```ts
await optimaiz.startTrace({
  traceId: "trace_xyz",
  agentId: "imageAnalyzer",
  userId: "user_999",
  flowId: "caption_image",
  promptTemplate: [
    { role: "user", type: "image", value: "https://cdn.site/image.png" },
    { role: "user", type: "text", value: "What's in this image?" }
  ],
  promptVariables: {},
  provider: "openai",
  model: "gpt-4o"
});
```

🔹 Append a model response

```ts
await optimaiz.appendResponse({
  traceId: "trace_xyz",
  rawResponse: response,
  provider: "openai",
  model: "gpt-4o"
});
```

🔹 Finalize the trace

```ts
await optimaiz.finalizeTrace("trace_xyz");
```

❌ Log an Error to a Trace
```ts
await optimaiz.logError("trace_abc123", {
  message: "Timeout waiting for OpenAI response",
  code: "TIMEOUT_ERROR",
  details: {
    timeout: "30s",
    model: "gpt-4o",
    retryAttempt: 1,
  },
});
```

🧠 Example Usage
```ts
try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  });
  await optimaiz.appendResponse({
    traceId,
    rawResponse: response,
    provider: "openai",
    model: "gpt-4o",
  });
  await optimaiz.finalizeTrace(traceId);
} catch (err: any) {
  await optimaiz.logError(traceId, {
    message: err.message,
    code: err.code || "UNCAUGHT_EXCEPTION",
    details: err.stack,
  });
  throw err;
}
```

🛠️ Tool Management
Optimaiz supports comprehensive tool/function calling with provider-agnostic interfaces and automatic format conversion.
Standard Tool Definition
```ts
const weatherTool: StandardToolDefinition = {
  name: "get_weather",
  description: "Get current weather for a location",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "City name or coordinates",
        required: true
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        description: "Temperature unit"
      }
    },
    required: ["location"]
  },
  category: "weather",
  tags: ["api", "external"]
};
```

Tool Prompt Helper
```ts
const { promptTemplate, promptVariables } = optimaiz.generatePromptFromTools({
  toolInfo: [weatherTool],
  toolInput: { name: "get_weather", arguments: { location: "Delhi" } },
});
```

Tool Format Conversion
```ts
// Convert standard tools to provider-specific format
const openaiTools = optimaiz.convertToolsToProviderFormat([weatherTool], "openai");
const anthropicTools = optimaiz.convertToolsToProviderFormat([weatherTool], "anthropic");

// Convert provider tools to standard format
const standardTools = optimaiz.convertProviderToolsToStandard(openaiTools, "openai");

// Validate tool definitions
const validation = optimaiz.validateTools([weatherTool]);
if (!validation.valid) {
  console.error("Tool validation errors:", validation.errors);
}
```

Tool Execution Tracking
```ts
// Track tool execution
await optimaiz.addToolExecution({
  traceId: "trace_123",
  toolId: "weather_api_1",
  toolName: "get_weather",
  executionTime: new Date(),
  duration: 150, // milliseconds
  success: true,
  result: { temperature: 25, unit: "celsius" }
});

// Add tool results to trace
await optimaiz.addToolResults({
  traceId: "trace_123",
  toolResults: [{
    toolCallId: "call_1",
    name: "get_weather",
    result: { temperature: 25, unit: "celsius" }
  }]
});
```

📝 Compose Prompts from Template
```ts
const { prompts, promptTemplate, promptVariables } = optimaiz.composePrompts(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
```

🔌 Integration Examples
✅ OpenAI SDK

```ts
const userPrompt = "Summarize this blog about AI agents";

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "summarizer",
  userId: "user_123",
  promptTemplate: [{ role: "user", type: "text", value: userPrompt }],
  promptVariables: {},
  tools: [weatherTool], // Optional: Include tools
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userPrompt }],
    tools: optimaiz.convertToolsToProviderFormat([weatherTool], "openai"),
  }),
});
```

✅ LangChain
```ts
const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const formatted = await prompt.format({ topic: "elephants" });

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "joke-bot",
  userId: "user_321",
  flowId: "joke-generation",
  promptTemplate: [{ role: "user", type: "text", value: "Tell me a joke about {topic}" }],
  promptVariables: { topic: "elephants" },
  call: () => langchainModel.invoke(formatted),
});
```

✅ Sarvam AI (Audio)
```ts
await optimaiz.wrapLLMCall({
  provider: "sarvam",
  model: "shivang",
  agentId: "transcriber",
  userId: "user_999",
  flowId: "transcribe",
  promptTemplate: [{ role: "user", type: "audio", value: "https://cdn.site/audio.wav" }],
  promptVariables: {},
  call: () => sarvam.speechToText({ url: "https://cdn.site/audio.wav" }),
});
```

✅ Gemini (Google Vertex AI)
```ts
await optimaiz.wrapLLMCall({
  provider: "google",
  model: "gemini-pro",
  promptTemplate: [{ role: "user", type: "text", value: "Write a haiku about the ocean." }],
  promptVariables: {},
  call: () => gemini.generateContent({
    contents: [{ role: "user", parts: [{ text: "Write a haiku about the ocean." }] }],
  }),
});
```

📊 Field Scope & Best Practices
| Field | Scope | Used for... | Example Value |
|---|---|---|---|
| `traceId` | Per action | Track 1 LLM/tool call | `trace_a9f3` |
| `flowId` | Per task | Multi-step task grouping | `flow_generate_poem` |
| `agentId` | Per trace | Identify AI agent handling task | `calendarAgent` |
| `threadId` | Per topic | Group related flows by theme/intent | `thread_booking` |
| `sessionId` | Per session | Temporal or login-bound grouping | `session_2025_06_01_user1` |
| `userId` | Global | Usage, feedback, and cost attribution | `user_321` |
✅ Use These for Full Insight:
- `agentId`: Enables per-agent cost & prompt optimization
- `userId`: Enables user behavior analytics & pricing insights
- `flowId`: Helps trace multi-step user tasks
- `traceId`: Use like a span for one prompt/response
- `threadId`, `sessionId`: Group related interactions over time or topics
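These fields are most useful when they are generated consistently across a session. A minimal sketch of one way to do that; the `makeIds` helper and its naming scheme are illustrative assumptions, not part of the SDK:

```ts
// Illustrative helper (not part of the SDK): derives a consistent set of
// Optimaiz IDs for one user action, following the naming style used in the
// examples above.
function makeIds(userId: string, flow: string) {
  // e.g. "2025_06_01", matching the session_2025_06_01_user1 example
  const stamp = new Date().toISOString().slice(0, 10).replace(/-/g, "_");
  const rand = Math.random().toString(36).slice(2, 8);
  return {
    userId,
    flowId: `flow_${flow}`,
    sessionId: `session_${stamp}_${userId}`,
    traceId: `trace_${rand}`,
  };
}

const ids = makeIds("user_456", "translate_email");
// Spread into any wrapLLMCall / startTrace invocation:
// await optimaiz.wrapLLMCall({ ...ids, provider: "openai", /* ... */ });
```

Generating the IDs in one place keeps traces, flows, and sessions joinable in the dashboard without string drift between call sites.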
✨ Optimaiz Features
- ✅ Works with OpenAI, Gemini, Sarvam, Mistral, LangChain, Anthropic
- 🧠 RAG and function/tool-call support
- 🛠️ Provider-agnostic tool management with automatic format conversion
- 📊 Token usage + latency tracking
- 📋 Cost and model metadata logging
- 🧪 Error + feedback logging
- 📝 Templated prompt builder + tool integration support
- 🧩 Full control via start/append/finalize or simple `wrapLLMCall`
🛡️ Error Handling
Optimaiz provides comprehensive error handling with specific error types for different scenarios.
Error Types
```ts
import {
  OptimaizError,
  OptimaizAuthenticationError,
  OptimaizValidationError,
  OptimaizServerError
} from '@endlessriver/optimaiz';

try {
  const result = await optimaiz.call({
    promptTemplate: [{ role: "user", type: "text", value: "Hello" }],
    promptVariables: {}
  });
} catch (error) {
  if (OptimaizClient.isAuthenticationError(error)) {
    // Handle authentication errors (401)
    console.error('Auth error:', error.message);
  } else if (OptimaizClient.isValidationError(error)) {
    // Handle validation errors (400)
    console.error('Validation error:', error.message, error.details);
  } else if (OptimaizClient.isServerError(error)) {
    // Handle server errors (500)
    console.error('Server error:', error.message);
  } else {
    // Handle other errors
    console.error('Unknown error:', error.message);
  }
}
```

Common Error Scenarios
| Error Type | Status | Common Causes | Solution |
|---|---|---|---|
| Authentication | 401 | Invalid/missing token | Check API key validity; add a provider key in the Optimaiz Max section |
| Authorization | 403 | Invalid app token | Verify token permissions |
| Validation | 400 | Missing required fields, invalid tools | Check request format |
| Server | 500 | Database/LLM provider issues | Retry or contact support |
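For the 500-class failures above, a retry with exponential backoff is a common pattern. A hedged sketch: `withRetry` is a hypothetical helper, the backoff numbers are arbitrary, and in practice the predicate could be the documented `OptimaizClient.isServerError`:

```ts
// Generic retry-with-backoff sketch for transient (500-class) failures.
// `isRetryable` is a stand-in predicate; with the SDK you might pass
// OptimaizClient.isServerError as documented above.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Give up on the last attempt, or on non-retryable errors
      if (attempt >= maxAttempts || !isRetryable(err)) throw err;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise((r) => setTimeout(r, 200 * 2 ** (attempt - 1)));
    }
  }
}

// const result = await withRetry(
//   () => optimaiz.call({ /* ... */ }),
//   OptimaizClient.isServerError
// );
```

Validation and authentication errors (400/401) are deliberately not retried, since repeating the same bad request cannot succeed.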
Error Properties
All Optimaiz errors include:
- `message`: Human-readable error message
- `status`: HTTP status code
- `details`: Additional error details (if available)
- `type`: Error type for programmatic handling
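Because every error carries a `type` field, handlers can also branch on it directly instead of using the type-guard helpers. A minimal sketch; the `OptimaizErrorShape` interface and the specific `type` string values are assumptions for illustration only:

```ts
// Illustrative shape only: field names follow the properties listed above,
// but the exact `type` string values are assumptions.
interface OptimaizErrorShape {
  message: string;
  status: number;
  type: string;
  details?: unknown;
}

function describeError(err: OptimaizErrorShape): string {
  switch (err.type) {
    case "authentication":
      return `Auth failed (${err.status}): check OPTIMAIZ_API_KEY`;
    case "validation":
      return `Bad request (${err.status}): ${err.message}`;
    default:
      return `Error (${err.status}): ${err.message}`;
  }
}
```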
🚀 Get Started
- Install: `npm install @endlessriver/optimaiz`
- Add your API key: `process.env.OPTIMAIZ_API_KEY`
- Use `wrapLLMCall()` for LLM/tool calls
- Pass `userId`, `agentId`, and `flowId` for best observability
- Analyze and improve prompt cost, user flow, and LLM performance
Need hosted dashboards, insights, or tuning support?
Visit 👉 https://optimaiz.io