# visibe.ai SDK for Node.js
Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using LangChain, LangGraph, Vercel AI, Anthropic, AWS Bedrock, or direct OpenAI calls.
## 🚀 Quick Start

```sh
npm install @visibe.ai/node
```

Get your API key at app.visibe.ai → Settings → API Keys, then add one line to your app:
```js
// CJS syntax
const { init } = require('@visibe.ai/node')
// ESM syntax
import { init } from '@visibe.ai/node'

init({ apiKey: 'sk_live_your_key_here' })
```

That's it. Every OpenAI, Anthropic, LangChain, LangGraph, Vercel AI, and Bedrock call is automatically traced from this point on — no wrappers, no config changes.
## 🧩 Supported Frameworks

- OpenAI
- Anthropic
- LangChain
- LangGraph
- Vercel AI
- AWS Bedrock

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
## ⚙️ Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | Your Visibe API key. Falls back to the `VISIBE_API_KEY` env var. |
| `redactContent` | `boolean` | `false` | Omit prompt and completion text from all traces. Only metadata is sent (tokens, cost, duration, model, errors). |
| `sessionId` | `string` | — | Tag all traces with a session ID. |
| `frameworks` | `string[]` | All detected | Limit auto-instrumentation to specific frameworks. |
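The defaults in the table can be sketched as plain option resolution. This is a hypothetical illustration, not SDK code — `resolveOptions` does not exist in the package; it only mirrors the documented behavior (env-var fallback for `apiKey`, `false` for `redactContent`, "all detected" when `frameworks` is omitted):

```js
// Illustrative only — resolveOptions is NOT part of @visibe.ai/node.
function resolveOptions(opts = {}) {
  return {
    apiKey: opts.apiKey ?? process.env.VISIBE_API_KEY, // env-var fallback
    redactContent: opts.redactContent ?? false,        // off by default
    sessionId: opts.sessionId,                         // optional tag
    frameworks: opts.frameworks,                       // undefined → all detected
  }
}

console.log(resolveOptions({ redactContent: true }).redactContent) // true
console.log(resolveOptions({}).redactContent)                      // false
```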
## 📊 What Gets Tracked

| Metric | Sent when `redactContent: true` |
|---|---|
| Cost, tokens, duration | ✅ |
| Model & provider | ✅ |
| Tool calls (name, duration, success/failure) | ✅ |
| Errors (type, message, HTTP status) | ✅ |
| Full execution timeline (spans) | ✅ |
| Prompt text | ❌ omitted |
| Completion text | ❌ omitted |
When `redactContent: true` is set, prompt and completion text never leave your infrastructure. You retain full observability over costs, performance, and errors.
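The effect of redaction can be sketched as dropping the text fields from a span while keeping its metadata. The field names below are assumptions for illustration — the SDK's actual wire format is internal:

```js
// Illustrative only — field names are assumed, not the SDK's real schema.
function redactSpan(span) {
  // Drop prompt/completion text; everything else (metadata) survives.
  const { prompt, completion, ...metadata } = span
  return metadata
}

const span = {
  model: 'gpt-4o-mini',
  tokens: 128,
  durationMs: 840,
  prompt: 'full prompt text',
  completion: 'full completion text',
}
console.log(redactSpan(span)) // { model: 'gpt-4o-mini', tokens: 128, durationMs: 840 }
```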
## 📖 API Reference

### init()

Call once at the top of your app, before creating any clients. Returns a Visibe instance.

```js
import { init } from '@visibe.ai/node'

const visibe = init({ apiKey: 'sk_live_abc123' })
```

### track()

Groups multiple LLM calls into a single named trace.
```js
await visibe.track(client, 'my-conversation', async () => {
  await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
  await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
})
// Both calls appear as spans under one trace.
```

### runWithSession()

Like track(), but works across all already-instrumented clients without specifying one.
```js
await visibe.runWithSession('research-task', async () => {
  // Any instrumented client used here is grouped into one trace.
})
```

### middleware()

Creates one trace per HTTP request. Works with Express, Fastify, and any (req, res, next)-compatible framework.
```js
import express from 'express'

const app = express()
app.use(visibe.middleware())

// Custom trace name:
app.use(visibe.middleware({ name: (req) => `${req.method} ${req.url}` }))
```

Concurrent requests are fully isolated via AsyncLocalStorage.
### instrument() / uninstrument()

Manually instrument a specific client instance instead of relying on auto-instrumentation.
```js
visibe.instrument(client, { name: 'my-agent' })
visibe.uninstrument(client)
```

### shutdown()

Flushes buffered spans before process exit. Not needed for typical servers — the SDK handles SIGTERM / SIGINT automatically. Only needed for short-lived scripts or test suites.
```js
import { shutdown } from '@visibe.ai/node'

await shutdown()
```

## 🛡️ Safety Guarantees
- No crashes — every SDK operation is wrapped in try/catch
- No latency — all backend calls are fire-and-forget
- No leaks — the internal timer is `unref()`'d, so it won't block process exit
- No key, no problem — the SDK is silently a no-op when no API key is set
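The "no leaks" guarantee relies on a standard Node.js mechanism worth seeing in isolation: an `unref()`'d timer does not hold the event loop open, so a process can exit even while the timer is pending. A minimal sketch (not SDK code):

```js
// An unref()'d interval won't keep the process alive.
const timer = setInterval(() => {}, 60_000)
timer.unref() // without this line, the process would stay alive for the interval

console.log(timer.hasRef()) // false — the timer won't block exit
```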
No data is sold or shared with third parties. Content is used solely to display traces in your dashboard.
## 🔗 Resources
- visibe.ai — Product website
- app.visibe.ai — Dashboard
- npm Package
## 📃 License
MIT — see LICENSE for details.