
Visibe SDK for Node.js

Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using LangChain, LangGraph, Vercel AI, Anthropic, AWS Bedrock, or direct OpenAI calls.


📦 Getting Started

1. Create an account

Sign up at app.visibe.ai and create a project.

2. Get an API key

In your project, go to Settings → API Keys and generate a new key. It will look like sk_live_....

3. Install the SDK

```sh
npm install @visibe.ai/node
```

4. Set your API key

```sh
export VISIBE_API_KEY=sk_live_your_api_key_here
```

Or in a .env file:

```ini
VISIBE_API_KEY=sk_live_your_api_key_here
```

5. Instrument your app

```ts
import { init } from '@visibe.ai/node'

init()
```

That's it. Every OpenAI, Anthropic, LangChain, LangGraph, Vercel AI, and Bedrock client created after this call is automatically traced — no other code changes needed.


🧩 Integrations

init() automatically instruments every supported framework; each can also be instrumented manually with instrument() (see Manual Instrumentation below):

- OpenAI
- Anthropic
- LangChain
- LangGraph
- Vercel AI
- AWS Bedrock

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
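Because these providers expose the same chat-completions surface, pointing the OpenAI client at their endpoint is enough for the same auto-instrumentation to apply. A minimal sketch using Groq — the baseURL and model name below are illustrative assumptions, not part of this SDK:

```typescript
import { init } from '@visibe.ai/node'
import OpenAI from 'openai'

init()

// Any OpenAI-compatible endpoint works; only baseURL and apiKey change.
const groq = new OpenAI({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY,
})

const response = await groq.chat.completions.create({
  model: 'llama-3.1-8b-instant',
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Traced the same way as a direct OpenAI call.
```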

OpenAI

```ts
import { init } from '@visibe.ai/node'
import OpenAI from 'openai'

init()

const client = new OpenAI()
const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Automatically traced — cost, tokens, duration, and content captured.
```

Streaming is also supported:

```ts
const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Count to 5' }],
  stream: true,
})
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}
// Token usage and cost captured when the stream is exhausted.
```

Anthropic

```ts
import { init } from '@visibe.ai/node'
import Anthropic from '@anthropic-ai/sdk'

init()

const client = new Anthropic()
const response = await client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 100,
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Automatically traced.
```

LangChain

```ts
import { init } from '@visibe.ai/node'

init()

// Load LangChain AFTER init() so the instrumentation is already active
// (dynamic import() defers the module load; bare require() is not
// available in an ESM file)
const { ChatOpenAI } = await import('@langchain/openai')
const { PromptTemplate } = await import('@langchain/core/prompts')
const { StringOutputParser } = await import('@langchain/core/output_parsers')
const { RunnableSequence } = await import('@langchain/core/runnables')

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate('Summarize: {text}'),
  new ChatOpenAI({ model: 'gpt-4o-mini' }),
  new StringOutputParser(),
])

const result = await chain.invoke({ text: 'AI observability matters.' })
// Full chain traced — LLM calls, token counts, and duration captured.
```

You can also use the LangChainCallback directly for explicit control:

```ts
import { Visibe } from '@visibe.ai/node'
import { LangChainCallback } from '@visibe.ai/node/integrations/langchain'
import { ChatOpenAI } from '@langchain/openai'
import { HumanMessage } from '@langchain/core/messages'
import { randomUUID } from 'node:crypto'

const visibe = new Visibe()
const traceId = randomUUID()
const callback = new LangChainCallback({ visibe, traceId, agentName: 'my-agent' })

const model = new ChatOpenAI({ model: 'gpt-4o-mini', callbacks: [callback] })
await model.invoke([new HumanMessage('Hello!')])
```

LangGraph

```ts
import { init } from '@visibe.ai/node'

init()  // must come BEFORE graph compilation

// Load LangGraph AFTER init() so the instrumentation is already active
const { StateGraph, END } = await import('@langchain/langgraph')
const { ChatOpenAI } = await import('@langchain/openai')
const { HumanMessage } = await import('@langchain/core/messages')

const model = new ChatOpenAI({ model: 'gpt-4o-mini' })

const graph = new StateGraph({
  channels: { messages: { value: (x, y) => x.concat(y), default: () => [] } },
})
  .addNode('research', async (state) => ({
    messages: [await model.invoke([new HumanMessage('Research this topic')])],
  }))
  .addNode('summarise', async (state) => ({
    messages: [await model.invoke([new HumanMessage('Summarise the research')])],
  }))
  .addEdge('__start__', 'research')
  .addEdge('research', 'summarise')
  .addEdge('summarise', END)
  .compile()

await graph.invoke({ messages: [] })
// Each node's LLM calls traced, total cost and token counts rolled up per graph run.
```

Vercel AI

```ts
import { init } from '@visibe.ai/node'

init()  // must come BEFORE the 'ai' package is loaded

// Load AFTER init() so patchVercelAI() has replaced the exports
const { generateText } = await import('ai')
const { openai } = await import('@ai-sdk/openai')

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about observability.',
})
// Automatically traced — provider, model, tokens, and cost captured.
```

streamText and generateObject are also automatically patched.
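A streaming sketch with streamText, assuming the same load-after-init() setup as above (the prompt is illustrative):

```typescript
import { init } from '@visibe.ai/node'

init()

const { streamText } = await import('ai')
const { openai } = await import('@ai-sdk/openai')

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'Stream a haiku about observability.',
})

// textStream is an async iterable of text deltas.
for await (const delta of result.textStream) {
  process.stdout.write(delta)
}
// Usage and cost captured once the stream completes.
```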

AWS Bedrock

```ts
import { init } from '@visibe.ai/node'
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime'

init()

const client = new BedrockRuntimeClient({ region: 'us-east-1' })
const response = await client.send(new ConverseCommand({
  modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
  messages: [{ role: 'user', content: [{ text: 'Hello!' }] }],
}))
// Automatically traced. Works with all models available via Bedrock —
// Claude, Nova, Llama, Mistral, and more.
```

Supports ConverseCommand, ConverseStreamCommand, InvokeModelCommand, and InvokeModelWithResponseStreamCommand.
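A streaming sketch with ConverseStreamCommand; this is standard AWS SDK v3 usage, and the tracing itself needs no extra code:

```typescript
import { init } from '@visibe.ai/node'
import { BedrockRuntimeClient, ConverseStreamCommand } from '@aws-sdk/client-bedrock-runtime'

init()

const client = new BedrockRuntimeClient({ region: 'us-east-1' })
const response = await client.send(new ConverseStreamCommand({
  modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
  messages: [{ role: 'user', content: [{ text: 'Count to 5' }] }],
}))

// The response body is an async iterable of events.
for await (const event of response.stream ?? []) {
  const text = event.contentBlockDelta?.delta?.text
  if (text) process.stdout.write(text)
}
// As with ConverseCommand, tokens and cost are captured when the stream ends.
```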


⚙️ Configuration

```ts
import { init } from '@visibe.ai/node'

init({
  apiKey:       'sk_live_abc123',          // or set VISIBE_API_KEY env var
  frameworks:   ['openai', 'langgraph'],   // limit to specific frameworks
  contentLimit: 500,                       // max chars for LLM content in traces
  debug:        true,                      // enable debug logging
})
```

Options

| Option | Type | Description | Default |
|---|---|---|---|
| `apiKey` | `string` | Your Visibe API key | `VISIBE_API_KEY` env var |
| `apiUrl` | `string` | Override API endpoint | `https://api.visibe.ai` |
| `frameworks` | `string[]` | Limit auto-instrumentation to specific frameworks | All detected |
| `contentLimit` | `number` | Max chars for LLM/tool content in spans | `1000` |
| `debug` | `boolean` | Enable debug logging | `false` |
| `sessionId` | `string` | Tag all traces with a session ID | (none) |
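For example, to tag every trace from one worker process with a shared session ID so related traces can be filtered together (the ID format here is illustrative):

```typescript
import { init } from '@visibe.ai/node'
import { randomUUID } from 'node:crypto'

init({
  // All traces from this process carry the same session ID.
  sessionId: `worker-${randomUUID()}`,
  contentLimit: 500,
})
```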

Environment Variables

| Variable | Description | Default |
|---|---|---|
| `VISIBE_API_KEY` | Your API key (required) | (none) |
| `VISIBE_API_URL` | Override API endpoint | `https://api.visibe.ai` |
| `VISIBE_CONTENT_LIMIT` | Max chars for LLM/tool content in spans | `1000` |
| `VISIBE_DEBUG` | Enable debug logging (`1` to enable) | `0` |

📊 What Gets Tracked

| Metric | Description |
|---|---|
| Cost | Total spend + per-call cost breakdown using current model pricing |
| Tokens | Input/output tokens per LLM call |
| Duration | Total time + time per step |
| Tools | Which tools were used, their duration, and success/failure |
| Errors | When and where things failed, with error type and message |
| Spans | Full execution timeline with LLM calls, tool calls, and agent events |

🔧 Manual Instrumentation

For cases where you need explicit control — instrumenting a specific client, grouping multiple calls into a named trace, or using Visibe without init().

Instrument a specific client

```ts
import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'

const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()

visibe.instrument(client, { name: 'my-agent' })

await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Each call creates its own trace named 'my-agent'.
```

Group multiple calls into one trace

```ts
import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'

const visibe = new Visibe()
const client = new OpenAI()

await visibe.track(client, 'my-conversation', async () => {
  await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
  await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
})
// Both calls sent as one grouped trace with combined cost and token totals.
```

Remove instrumentation

```ts
visibe.uninstrument(client)
```

Graceful shutdown

The SDK registers SIGTERM and SIGINT handlers automatically. For long-running scripts where you want to ensure all spans are flushed before exit:

```ts
import { shutdown } from '@visibe.ai/node'

await shutdown()
```
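In a short-lived script (a cron job, a one-off CLI), one pattern is to flush in a finally block so spans are delivered even when the work throws; a sketch:

```typescript
import { init, shutdown } from '@visibe.ai/node'
import OpenAI from 'openai'

init()
const client = new OpenAI()

try {
  await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Summarize today' }],
  })
} finally {
  // Ensure buffered spans are flushed before the process exits.
  await shutdown()
}
```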


📃 License

MIT — see LICENSE for details.