# @endlessriver/optimaiz
Unified tracing, feedback, and cost analytics for LLM-based apps.
Drop-in SDK to track prompts, responses, cost, errors, and feedback across OpenAI, LangChain, Sarvam, Gemini, and more. Visit https://optimaiz.io to observe, analyze, optimize, and comply.
## 📦 Installation

```bash
npm install @endlessriver/optimaiz
```

## 🛠️ Initialization
```ts
import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  token: process.env.OPTIMAIZ_API_KEY!,
});
```

## 🚀 Basic Usage: `wrapLLMCall`
```ts
const { response, traceId } = await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Summarize this" }],
  promptVariables: {},
  call: () =>
    openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Summarize this" }],
    }),
});
```

This handles:
- ✅ Start trace
- ✅ Append raw response
- ✅ Finalize trace with latency
- ✅ Log any errors
## ⚙️ Advanced Usage with IDs
```ts
const { response } = await optimaiz.wrapLLMCall({
  traceId: "trace_123",
  agentId: "agent:translator",
  userId: "user_456",
  flowId: "translate_email",
  threadId: "email_translation",
  sessionId: "session_2025_06_01_user_456",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Translate to French: {text}" }],
  promptVariables: { text: "Hello, how are you?" },
  call: () =>
    openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Translate to French: Hello, how are you?" }],
    }),
});
```

## 🧩 Manual Usage (Start, Append, Finalize)
Sometimes you need lower-level control (e.g., multiple responses, partial logs).
### 🔹 Start a trace manually
```ts
await optimaiz.startTrace({
  traceId: "trace_xyz",
  agentId: "imageAnalyzer",
  userId: "user_999",
  flowId: "caption_image",
  promptTemplate: [
    { role: "user", type: "image", value: "https://cdn.site/image.png" },
    { role: "user", type: "text", value: "What's in this image?" },
  ],
  promptVariables: {},
  provider: "openai",
  model: "gpt-4o",
});
```

### 🔹 Append a model response
```ts
await optimaiz.appendResponse({
  traceId: "trace_xyz",
  rawResponse: response,
  provider: "openai",
  model: "gpt-4o",
});
```

### 🔹 Finalize the trace

```ts
await optimaiz.finalizeTrace("trace_xyz");
```

## ❌ Log an Error to a Trace
```ts
await optimaiz.logError("trace_abc123", {
  message: "Timeout waiting for OpenAI response",
  code: "TIMEOUT_ERROR",
  details: {
    timeout: "30s",
    model: "gpt-4o",
    retryAttempt: 1,
  },
});
```

## 🧠 Example Usage
```ts
try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  });

  await optimaiz.appendResponse({
    traceId,
    rawResponse: response,
    provider: "openai",
    model: "gpt-4o",
  });

  await optimaiz.finalizeTrace(traceId);
} catch (err: any) {
  await optimaiz.logError(traceId, {
    message: err.message,
    code: err.code || "UNCAUGHT_EXCEPTION",
    details: err.stack,
  });
  throw err;
}
```

## 🧪 Tool Prompt Helper
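`generatePromptFromTools` turns tool definitions plus a tool invocation into a trace-ready prompt template. The `weatherTool` below is a hypothetical definition for illustration only; match it to the actual tool-info shape expected by the SDK:

```ts
// Hypothetical tool definition — adjust fields to the SDK's expected tool-info shape.
const weatherTool = {
  name: "get_weather",
  description: "Get the current weather for a location",
  parameters: {
    type: "object",
    properties: { location: { type: "string" } },
    required: ["location"],
  },
};
```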
```ts
const { promptTemplate, promptVariables } = optimaiz.generatePromptFromTools({
  toolInfo: [weatherTool],
  toolInput: { name: "get_weather", arguments: { location: "Delhi" } },
});
```

## 📝 Compose Prompts from Template
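`composePrompts` fills `{placeholder}` tokens in chat messages from a variables map. Conceptually it behaves like the simplified sketch below (an illustration, not the SDK's actual implementation):

```ts
type Message = { role: string; content: string };

// Sketch of placeholder interpolation: each {key} in a message is replaced
// with vars[key]; unknown placeholders are left untouched.
function fillTemplate(template: Message[], vars: Record<string, string>): Message[] {
  return template.map((m) => ({
    role: m.role,
    content: m.content.replace(/\{(\w+)\}/g, (match, key) =>
      key in vars ? vars[key] : match
    ),
  }));
}

const filled = fillTemplate(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
// filled[1].content → "Write a haiku about the ocean"
```

The SDK call below also returns the original `promptTemplate` and `promptVariables` alongside the filled `prompts`, so the trace can record both the template and the concrete values.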
```ts
const { prompts, promptTemplate, promptVariables } = optimaiz.composePrompts(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
```

## 🔌 Integration Examples
### ✅ OpenAI SDK
```ts
const userPrompt = "Summarize this blog about AI agents";

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "summarizer",
  userId: "user_123",
  promptTemplate: [{ role: "user", type: "text", value: userPrompt }],
  promptVariables: {},
  call: () =>
    openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userPrompt }],
    }),
});
```

### ✅ LangChain
```ts
const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const formatted = await prompt.format({ topic: "elephants" });

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "joke-bot",
  userId: "user_321",
  flowId: "joke-generation",
  promptTemplate: [{ role: "user", type: "text", value: "Tell me a joke about {topic}" }],
  promptVariables: { topic: "elephants" },
  call: () => langchainModel.invoke(formatted),
});
```

### ✅ Sarvam AI (Audio)
```ts
await optimaiz.wrapLLMCall({
  provider: "sarvam",
  model: "shivang",
  agentId: "transcriber",
  userId: "user_999",
  flowId: "transcribe",
  promptTemplate: [{ role: "user", type: "audio", value: "https://cdn.site/audio.wav" }],
  promptVariables: {},
  call: () => sarvam.speechToText({ url: "https://cdn.site/audio.wav" }),
});
```

### ✅ Gemini (Google Vertex AI)
```ts
await optimaiz.wrapLLMCall({
  provider: "google",
  model: "gemini-pro",
  promptTemplate: [{ role: "user", type: "text", value: "Write a haiku about the ocean." }],
  promptVariables: {},
  call: () =>
    gemini.generateContent({
      contents: [{ role: "user", parts: [{ text: "Write a haiku about the ocean." }] }],
    }),
});
```

## 📊 Field Scope & Best Practices
| Field | Scope | Used for... | Example Value |
|---|---|---|---|
| `traceId` | Per action | Track 1 LLM/tool call | `trace_a9f3` |
| `flowId` | Per task | Multi-step task grouping | `flow_generate_poem` |
| `agentId` | Per trace | Identify AI agent handling task | `calendarAgent` |
| `threadId` | Per topic | Group related flows by theme/intent | `thread_booking` |
| `sessionId` | Per session | Temporal or login-bound grouping | `session_2025_06_01_user1` |
| `userId` | Global | Usage, feedback, and cost attribution | `user_321` |
✅ Use These for Full Insight:

- `agentId`: Enables per-agent cost & prompt optimization
- `userId`: Enables user behavior analytics & pricing insights
- `flowId`: Helps trace multi-step user tasks
- `traceId`: Use like a span for 1 prompt/response
- `threadId`, `sessionId`: Group related interactions over time or topics
## ✨ Optimaiz Features
- ✅ Works with OpenAI, Gemini, Sarvam, Mistral, LangChain, Anthropic
- 🧠 RAG and function/tool-call support
- 📊 Token usage + latency tracking
- 💰 Cost and model metadata logging
- 🧪 Error + feedback logging
- 🔁 Templated prompt builder + tool integration support
- 🧩 Full control via start/append/finalize or simple `wrapLLMCall`
## 🚀 Get Started

- Install: `npm install @endlessriver/optimaiz`
- Add your API key: `process.env.OPTIMAIZ_API_KEY`
- Use `wrapLLMCall()` for LLM/tool calls
- Pass `userId`, `agentId`, and `flowId` for best observability
- Analyze and improve prompt cost, user flow, and LLM performance
Need hosted dashboards, insights, or tuning support?
Visit 👉 https://optimaiz.io