# visibe.ai SDK for Node.js
Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using LangChain, LangGraph, Vercel AI, Anthropic, AWS Bedrock, or direct OpenAI calls.
## 🚀 Quick Start

```shell
npm install @visibe.ai/node
```

Get your API key at app.visibe.ai → Settings → API Keys, then add one line to your app:
```js
// CJS syntax
const { init } = require('@visibe.ai/node')

// ESM syntax
import { init } from '@visibe.ai/node'

init({ apiKey: 'sk_live_your_key_here' })
```

That's it. Every OpenAI, Anthropic, LangChain, LangGraph, Vercel AI, and Bedrock call is automatically traced from this point on — no wrappers, no config changes.
## 🧩 Supported Frameworks
Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
## ⚙️ Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | Your Visibe API key. Falls back to the `VISIBE_API_KEY` env var |
| `redactContent` | `boolean` | `false` | Omit prompt and completion text from all traces. Only metadata is sent (tokens, cost, duration, model, errors). |
| `sessionId` | `string` | — | Tag all traces with a session ID |
| `frameworks` | `string[]` | All detected | Limit auto-instrumentation to specific frameworks |
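For illustration, the `apiKey` fallback described above might resolve roughly like this. This is a hypothetical sketch of the option-resolution logic, not the SDK's actual code; `resolveConfig` is an invented name.

```js
// Hypothetical sketch of how init() could resolve its options.
// The real SDK's internals may differ.
function resolveConfig(options = {}) {
  const apiKey = options.apiKey ?? process.env.VISIBE_API_KEY ?? null
  return {
    apiKey,                                       // null → SDK becomes a silent no-op
    redactContent: options.redactContent ?? false,
    sessionId: options.sessionId,                 // optional trace tag
    frameworks: options.frameworks,               // undefined → instrument all detected
  }
}

// An explicit key wins over the environment variable
process.env.VISIBE_API_KEY = 'sk_live_from_env'
console.log(resolveConfig({ apiKey: 'sk_live_explicit' }).apiKey) // → 'sk_live_explicit'
console.log(resolveConfig({}).apiKey)                             // → 'sk_live_from_env'
```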
## 📊 What Gets Tracked

| Metric | Sent when `redactContent: true` |
|---|---|
| Cost, tokens, duration | ✅ |
| Model & provider | ✅ |
| Tool calls (name, duration, success/failure) | ✅ |
| Errors (type, message, HTTP status) | ✅ |
| Full execution timeline (spans) | ✅ |
| Prompt text | ❌ omitted |
| Completion text | ❌ omitted |
When `redactContent: true` is set, prompt and completion text never leave your infrastructure. You retain full observability over costs, performance, and errors.
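To make the guarantee concrete, redaction can be pictured as stripping the content fields from a span before it is sent, while metadata passes through untouched. This is an illustrative sketch; the field names and `redactSpan` helper are invented for the example and are not the SDK's wire format.

```js
// Illustrative only: drop prompt/completion text, keep metadata.
function redactSpan(span) {
  const { prompt, completion, ...metadata } = span
  return metadata
}

const span = {
  model: 'gpt-4o-mini',
  provider: 'openai',
  tokens: 128,
  durationMs: 412,
  prompt: 'What is the capital of France?',
  completion: 'Paris.',
}

console.log(redactSpan(span))
// → { model: 'gpt-4o-mini', provider: 'openai', tokens: 128, durationMs: 412 }
```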
## 📖 API Reference

### init()

Call once at the top of your app, before creating any clients. Returns a `Visibe` instance.

```js
import { init } from '@visibe.ai/node'

const visibe = init({ apiKey: 'sk_live_abc123' })
```

### track()
Groups multiple LLM calls into a single named trace.
```js
await visibe.track(client, 'my-conversation', async () => {
  await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
  await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
})
// Both calls appear as spans under one trace.
```

### runWithSession()
Like `track()`, but works across all instrumented clients without specifying one.
```js
await visibe.runWithSession('research-task', async () => {
  // Any instrumented client used here is grouped into one trace.
})
```

### startSession()
Opens a persistent trace that accumulates spans across multiple async calls (e.g., a multi-turn chat conversation spanning many HTTP requests). Use this when `runWithSession()` isn't enough because the trace needs to outlive a single function call.
```js
// Open the session once (e.g., on first message)
const session = await visibe.startSession('Chat Session', { sessionId: userId })

// Each turn: wrap your agent call in session.run()
const reply = await session.run(() => runAgent(message))

// When the conversation ends, close the session
await session.end()
```

`session.run(fn)` — Executes `fn` inside the session's trace context. Every instrumented LLM/tool call inside `fn` is recorded as a span under the same trace, regardless of how many HTTP requests it spans.

`session.end()` — Closes the trace and sends the final aggregated metrics (total cost, token counts, duration, LLM call count) to the dashboard. Safe to call multiple times — subsequent calls are no-ops.
#### Example: Express chatbot with per-conversation traces
```js
import { init } from '@visibe.ai/node'

const visibe = init({ apiKey: process.env.VISIBE_API_KEY })
visibe.instrument(openaiClient)

const sessions = new Map() // sessionId → VisibeSession

app.post('/api/chat', async (req, res) => {
  const { message, sessionId } = req.body

  let session = sessions.get(sessionId)
  if (!session) {
    session = await visibe.startSession('Chat Session', { sessionId })
    sessions.set(sessionId, session)
  }

  const reply = await session.run(() => runAgent(message))

  // Close the trace when the conversation is done
  if (isConversationOver(message)) {
    await session.end()
    sessions.delete(sessionId)
    res.json({ reply, closeSession: true })
  } else {
    res.json({ reply })
  }
})
```

### middleware()
Creates one trace per HTTP request. Works with Express, Fastify, and any `(req, res, next)`-compatible framework.
```js
import express from 'express'

const app = express()
app.use(visibe.middleware())

// Custom trace name:
app.use(visibe.middleware({ name: (req) => `${req.method} ${req.url}` }))
```

Concurrent requests are fully isolated via `AsyncLocalStorage`.
### instrument() / uninstrument()
Manually instrument a specific client instance instead of relying on auto-instrumentation.
```js
visibe.instrument(client, { name: 'my-agent' })
visibe.uninstrument(client)
```

### shutdown()
Flushes buffered spans before process exit. Not needed for typical servers — the SDK handles SIGTERM / SIGINT automatically. Only needed for short-lived scripts or test suites.
```js
import { shutdown } from '@visibe.ai/node'

await shutdown()
```

## 🛡️ Safety Guarantees
- No crashes — every SDK operation is wrapped in try/catch
- No latency — all backend calls are fire-and-forget
- No leaks — the internal timer is `unref()`'d, so it won't block process exit
- No key, no problem — the SDK is silently a no-op when no API key is set
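The fire-and-forget pattern above typically means spans sit in an in-memory buffer that a background timer drains, which is also why `shutdown()` exists: to drain whatever is left before exit. A hypothetical sketch of the idea, assuming a `SpanBuffer` class invented for this example (not the SDK's implementation):

```js
// Hypothetical span buffer: a background timer drains it periodically,
// and flush() drains whatever remains (e.g. at shutdown).
class SpanBuffer {
  constructor(send, intervalMs = 5000) {
    this.send = send
    this.buffer = []
    // unref() lets the process exit even if the timer is still scheduled
    this.timer = setInterval(() => this.flush(), intervalMs).unref()
  }
  add(span) {
    this.buffer.push(span) // fire-and-forget: the caller never waits on the network
  }
  flush() {
    if (this.buffer.length === 0) return
    const batch = this.buffer.splice(0)
    this.send(batch)
  }
  close() {
    clearInterval(this.timer)
    this.flush()
  }
}

const sent = []
const buf = new SpanBuffer((batch) => sent.push(...batch), 60_000)
buf.add({ name: 'llm-call', durationMs: 10 })
buf.close() // final flush, like shutdown()
console.log(sent.length) // → 1
```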
No data is sold or shared with third parties. Content is used solely to display traces in your dashboard.
## 🔗 Resources
- visibe.ai — Product website
- app.visibe.ai — Dashboard
- npm Package
## 📃 License
MIT — see LICENSE for details.