@endlessriver/optimaiz
Unified tracing, feedback, and cost analytics for LLM-based apps.
Drop-in SDK to track prompts, responses, cost, errors, and feedback across OpenAI, LangChain, Sarvam, Gemini, and more. Visit https://optimaiz.io to observe, analyze, optimize, and stay compliant.
📦 Installation

```bash
npm install @endlessriver/optimaiz
```

✨ Key Features
- Smart Model Selection: automatically selects the best model based on your needs
- Intelligent Caching: optimizes response times with smart caching
- Performance Optimization: continuously optimizes for best results
🛠️ Initialization

```ts
import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  token: process.env.OPTIMAIZ_API_KEY!,
});
```
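If you keep the key in a local .env file, any env loader works; here is a minimal sketch using the standard dotenv package (an assumption; the SDK itself only needs the token string):

```ts
// Hypothetical setup: load OPTIMAIZ_API_KEY from a local .env file
// via dotenv, then construct the client exactly as above.
import "dotenv/config";
import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  // `!` only asserts non-null to TypeScript; validate the key yourself in production.
  token: process.env.OPTIMAIZ_API_KEY!,
});
```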
🚀 Basic Usage: call

The call function provides a unified way to interact with the Optimaiz API, handling all the complexity of model selection, caching, and optimization behind the scenes.
```ts
const { response, status } = await optimaiz.call({
  promptTemplate: [
    {
      type: "text",
      role: "user",
      value: "Summarize this: {text}"
    }
  ],
  promptVariables: {
    text: "Your text to summarize"
  },
  modelParams: {
    temperature: 0.7
  },
  threadId: "summary_thread",
  userId: "user_123",
  agentId: "Tool:LLM"
});
```

The function returns:
- data: the model's response, including its traceId
- error: the relevant error; null on success
Key benefits of using call:
- ✅ Automatic model selection based on your needs
- ✅ Built-in error handling and logging
- ✅ Intelligent caching for faster responses
- ✅ Automatic trace generation and management
- ✅ Seamless integration with the Optimaiz platform
🚀 Basic Usage: wrapLLMCall
```ts
const { response, traceId } = await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Summarize this" }],
  promptVariables: {},
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  }),
});
```

This handles:
- ✅ Start trace
- ✅ Append raw response
- ✅ Finalize trace with latency
- ✅ Log any errors
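Assuming wrapLLMCall hands back the provider's raw response (consistent with the "append raw response" step above), consuming the result of the OpenAI example could look like this; the response shape here is OpenAI's, not Optimaiz's:

```ts
// `response` is the raw OpenAI chat completion returned by the wrapped call.
console.log(response.choices[0].message.content);
// `traceId` identifies this call in Optimaiz (e.g., to attach errors later).
console.log("traceId:", traceId);
```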
⚙️ Advanced Usage with IDs
```ts
const { response } = await optimaiz.wrapLLMCall({
  traceId: "trace_123",
  agentId: "agent:translator",
  userId: "user_456",
  flowId: "translate_email",
  threadId: "email_translation",
  sessionId: "session_2025_06_01_user_456",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Translate to French: {text}" }],
  promptVariables: { text: "Hello, how are you?" },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Translate to French: Hello, how are you?" }],
  }),
});
```

🧩 Manual Usage (Start, Append, Finalize)
Sometimes you need lower-level control (e.g., multiple responses, partial logs).
🔹 Start a trace manually
```ts
await optimaiz.startTrace({
  traceId: "trace_xyz",
  agentId: "imageAnalyzer",
  userId: "user_999",
  flowId: "caption_image",
  promptTemplate: [
    { role: "user", type: "image", value: "https://cdn.site/image.png" },
    { role: "user", type: "text", value: "What's in this image?" }
  ],
  promptVariables: {},
  provider: "openai",
  model: "gpt-4o"
});
```

🔹 Append a model response
```ts
await optimaiz.appendResponse({
  traceId: "trace_xyz",
  rawResponse: response,
  provider: "openai",
  model: "gpt-4o"
});
```

🔹 Finalize the trace
```ts
await optimaiz.finalizeTrace("trace_xyz");
```

❌ Log an Error to a Trace
```ts
await optimaiz.logError("trace_abc123", {
  message: "Timeout waiting for OpenAI response",
  code: "TIMEOUT_ERROR",
  details: {
    timeout: "30s",
    model: "gpt-4o",
    retryAttempt: 1,
  },
});
```

🧠 Example Usage
```ts
// Assumes a trace was already started, e.g. via optimaiz.startTrace({ traceId, ... }).
try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  });
  await optimaiz.appendResponse({
    traceId,
    rawResponse: response,
    provider: "openai",
    model: "gpt-4o",
  });
  await optimaiz.finalizeTrace(traceId);
} catch (err: any) {
  await optimaiz.logError(traceId, {
    message: err.message,
    code: err.code || "UNCAUGHT_EXCEPTION",
    details: err.stack,
  });
  throw err;
}
```

🧪 Tool Prompt Helper
```ts
const { promptTemplate, promptVariables } = optimaiz.generatePromptFromTools({
  toolInfo: [weatherTool],
  toolInput: { name: "get_weather", arguments: { location: "Delhi" } },
});
```
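Here weatherTool is assumed to be defined elsewhere. The exact schema toolInfo expects is not shown in these docs; a hypothetical definition in the common OpenAI function-calling style might look like:

```ts
// Hypothetical tool definition: adjust to whatever shape toolInfo actually expects.
const weatherTool = {
  name: "get_weather",
  description: "Get the current weather for a location",
  parameters: {
    type: "object",
    properties: {
      location: { type: "string", description: "City name, e.g. Delhi" },
    },
    required: ["location"],
  },
};
```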
📝 Compose Prompts from Template

```ts
const { prompts, promptTemplate, promptVariables } = optimaiz.composePrompts(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
```
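The returned promptTemplate and promptVariables slot directly into call or wrapLLMCall. A minimal sketch, assuming prompts is in the chat-message shape the provider expects (not confirmed by these docs):

```ts
// Reuse the composed template for tracing and the rendered prompts for the call.
await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate,
  promptVariables,
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: prompts, // assumption: `prompts` matches OpenAI's messages format
  }),
});
```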
🔌 Integration Examples

✅ OpenAI SDK
```ts
const userPrompt = "Summarize this blog about AI agents";

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "summarizer",
  userId: "user_123",
  promptTemplate: [{ role: "user", type: "text", value: userPrompt }],
  promptVariables: {},
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userPrompt }],
  }),
});
```

✅ LangChain
```ts
// `PromptTemplate` comes from LangChain (e.g. @langchain/core/prompts);
// `langchainModel` is your configured LangChain chat model.
const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const formatted = await prompt.format({ topic: "elephants" });

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "joke-bot",
  userId: "user_321",
  flowId: "joke-generation",
  promptTemplate: [{ role: "user", type: "text", value: "Tell me a joke about {topic}" }],
  promptVariables: { topic: "elephants" },
  call: () => langchainModel.invoke(formatted),
});
```

✅ Sarvam AI (Audio)
```ts
await optimaiz.wrapLLMCall({
  provider: "sarvam",
  model: "shivang",
  agentId: "transcriber",
  userId: "user_999",
  flowId: "transcribe",
  promptTemplate: [{ role: "user", type: "audio", value: "https://cdn.site/audio.wav" }],
  promptVariables: {},
  call: () => sarvam.speechToText({ url: "https://cdn.site/audio.wav" }),
});
```

✅ Gemini (Google Vertex AI)
```ts
await optimaiz.wrapLLMCall({
  provider: "google",
  model: "gemini-pro",
  promptTemplate: [{ role: "user", type: "text", value: "Write a haiku about the ocean." }],
  promptVariables: {},
  call: () => gemini.generateContent({
    contents: [{ role: "user", parts: [{ text: "Write a haiku about the ocean." }] }],
  }),
});
```

📊 Field Scope & Best Practices
| Field | Scope | Used for... | Example Value |
|---|---|---|---|
| `traceId` | Per action | Track 1 LLM/tool call | `trace_a9f3` |
| `flowId` | Per task | Multi-step task grouping | `flow_generate_poem` |
| `agentId` | Per trace | Identify AI agent handling task | `calendarAgent` |
| `threadId` | Per topic | Group related flows by theme/intent | `thread_booking` |
| `sessionId` | Per session | Temporal or login-bound grouping | `session_2025_06_01_user1` |
| `userId` | Global | Usage, feedback, and cost attribution | `user_321` |
✅ Use These for Full Insight:
- agentId: enables per-agent cost & prompt optimization
- userId: enables user behavior analytics & pricing insights
- flowId: helps trace multi-step user tasks
- traceId: use like a span for one prompt/response
- threadId, sessionId: group related interactions over time or topics
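A minimal sketch of how these IDs compose across a two-step flow; all values are illustrative, and the openai client is assumed from the earlier examples:

```ts
// Shared grouping IDs: one user, one session, one multi-step task.
const ids = {
  userId: "user_321",
  sessionId: "session_2025_06_01_user1",
  threadId: "thread_booking",
  flowId: "flow_generate_poem",
};

// Step 1: draft (each call gets its own trace).
await optimaiz.wrapLLMCall({
  ...ids,
  agentId: "draftAgent",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Draft a poem about {topic}" }],
  promptVariables: { topic: "the ocean" },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Draft a poem about the ocean" }],
  }),
});

// Step 2: refine. The shared flowId ties both traces to the same task.
await optimaiz.wrapLLMCall({
  ...ids,
  agentId: "editAgent",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Tighten this poem: {draft}" }],
  promptVariables: { draft: "..." },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Tighten this poem: ..." }],
  }),
});
```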
✨ Optimaiz Features
- ✅ Works with OpenAI, Gemini, Sarvam, Mistral, LangChain, Anthropic
- 🧠 RAG and function/tool-call support
- 📈 Token usage + latency tracking
- 📊 Cost and model metadata logging
- 🧪 Error + feedback logging
- 📝 Templated prompt builder + tool integration support
- 🧩 Full control via start/append/finalize or simple wrapLLMCall
🚀 Get Started
- Install: `npm install @endlessriver/optimaiz`
- Add your API key: `process.env.OPTIMAIZ_API_KEY`
- Use `wrapLLMCall()` for LLM/tool calls
- Pass `userId`, `agentId`, and `flowId` for best observability
- Analyze and improve prompt cost, user flow, and LLM performance
Need hosted dashboards, insights, or tuning support?
Visit 👉 https://optimaiz.io