ThinkHive SDK v3.1 - AI agent observability supporting 25 trace formats including LangSmith, Langfuse, Opik, Braintrust, Datadog, MLflow, and more

Package Exports

  • @thinkhive/sdk
  • @thinkhive/sdk/instrumentation/langchain
  • @thinkhive/sdk/instrumentation/openai
  • @thinkhive/sdk/integrations/customer-context
  • @thinkhive/sdk/integrations/ticket-linking


ThinkHive SDK

The official JavaScript/TypeScript SDK for ThinkHive - AI Agent Observability Platform.

Features

  • 25 Trace Format Support: Automatic detection and normalization for LangSmith, Langfuse, Helicone, CrewAI, Opik, Braintrust, HoneyHive, Datadog, MLflow, AgentOps, Portkey, TruLens, Lunary, LangWatch, OpenLIT, Maxim AI, Galileo, PostHog, Keywords AI, Agenta, and more
  • Trace Analysis: Analyze AI agent traces with detailed explainability
  • RAG Evaluation: 8 quality metrics for RAG systems (groundedness, faithfulness, etc.)
  • Hallucination Detection: Flags 9 distinct hallucination types
  • Business Impact: Industry-specific ROI calculations
  • Auto-Instrumentation: Works with LangChain, OpenAI, Anthropic, and more
  • OpenTelemetry: Built on OTLP for seamless integration
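To illustrate what "automatic detection and normalization" means in practice, here is a minimal sketch of mapping two hypothetical input shapes onto one common schema. The field names and format shapes below are simplified assumptions for illustration, not the SDK's actual internals or the real LangSmith/Langfuse schemas.

```typescript
// Illustrative sketch only: detect a trace's source format from its shape
// and normalize it into a single schema. Field mappings are assumptions.

interface NormalizedTrace {
  userMessage: string;
  agentResponse: string;
  source: string;
}

function normalizeTrace(raw: Record<string, unknown>): NormalizedTrace {
  if ('inputs' in raw && 'outputs' in raw) {
    // LangSmith-style run (simplified): { inputs: { input }, outputs: { output } }
    const inputs = raw.inputs as { input: string };
    const outputs = raw.outputs as { output: string };
    return { userMessage: inputs.input, agentResponse: outputs.output, source: 'langsmith' };
  }
  if ('input' in raw && 'output' in raw) {
    // Langfuse-style trace (simplified): { input, output }
    return {
      userMessage: String(raw.input),
      agentResponse: String(raw.output),
      source: 'langfuse',
    };
  }
  throw new Error('Unrecognized trace format');
}
```

The real SDK handles 25 formats with richer fields (spans, tool calls, metadata), but the shape-based dispatch above is the general idea: callers never need to declare which tool produced a trace.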

Installation

npm install @thinkhive/sdk

Quick Start

Basic Usage

import { ThinkHive } from '@thinkhive/sdk';

// Initialize client
const client = new ThinkHive({
  apiKey: 'your_api_key',
  baseUrl: 'https://api.thinkhive.ai'
});

// Send a trace
const result = await client.trace({
  userMessage: 'What is the weather in San Francisco?',
  agentResponse: 'The weather in San Francisco is currently 65°F and sunny.',
  agentId: 'weather-agent'
});

console.log(`Trace ID: ${result.traceId}`);
if (result.analysis) {
  console.log(`Outcome: ${result.analysis.outcome.verdict}`);
  console.log(`Impact Score: ${result.analysis.businessImpact.impactScore}`);
}

With Business Context

const result = await client.trace({
  userMessage: 'I want to cancel my order #12345',
  agentResponse: 'I understand you want to cancel order #12345...',
  agentId: 'support-agent',
  businessContext: {
    customerId: 'cust_abc123',
    transactionValue: 150.00,
    priority: 'high',
    industry: 'ecommerce'
  }
});

// Access ROI metrics
if (result.analysis?.businessImpact?.roi) {
  const roi = result.analysis.businessImpact.roi;
  console.log(`Estimated Revenue Loss: $${roi.estimatedRevenueLoss}`);
  console.log(`Churn Probability: ${roi.churnProbability}%`);
}

Explainer API

// Full trace analysis with RAG evaluation
const analysis = await client.explainer.analyze({
  userMessage: 'What is your return policy?',
  agentResponse: 'Items can be returned within 30 days...',
  retrievedContexts: ['Return Policy: 30 day returns...'],
  outcome: 'success'
}, {
  tier: 'full_llm',
  includeRagEvaluation: true,
  includeHallucinationDetection: true
});

console.log(`Summary: ${analysis.summary}`);
console.log(`Groundedness: ${analysis.ragEvaluation?.groundedness}`);

// Batch analysis
const batchResult = await client.explainer.analyzeBatch([
  { userMessage: '...', agentResponse: '...' },
  { userMessage: '...', agentResponse: '...' }
], { tier: 'fast_llm' });

// Semantic search
const searchResults = await client.explainer.search({
  query: 'refund complaints',
  filters: { outcome: 'failure' },
  limit: 10
});

Quality Metrics

// Get RAG scores
const ragScores = await client.quality.getRagScores('trace-123');
console.log(`Groundedness: ${ragScores.groundedness}`);
console.log(`Faithfulness: ${ragScores.faithfulness}`);

// Get hallucination report
const report = await client.quality.getHallucinationReport('trace-123');
if (report.hasHallucinations) {
  for (const detection of report.detectedTypes) {
    console.log(`- ${detection.type}: ${detection.description}`);
  }
}

// Evaluate RAG for custom input
const evaluation = await client.quality.evaluateRag({
  query: 'What is the return policy?',
  response: 'Items can be returned within 30 days.',
  contexts: [{ content: 'Return Policy: 30 day returns...' }]
});

ROI Analytics

// Get ROI summary
const summary = await client.analytics.getRoiSummary();
console.log(`Revenue Saved: $${summary.totalRevenueSaved}`);

// Get per-agent ROI
const agentRoi = await client.analytics.getRoiByAgent('support-agent');
console.log(`Success Rate: ${agentRoi.successRate}%`);

// Get correlation analysis
const correlations = await client.analytics.getCorrelations();
for (const corr of correlations.correlations) {
  console.log(`${corr.type}: ${corr.actionableInsight}`);
}

Providing Feedback

// After receiving user feedback
await client.feedback({
  traceId: result.traceId,
  rating: 5,
  wasHelpful: true,
  comment: 'Very accurate response!'
});

// When response was incorrect
await client.feedback({
  traceId: result.traceId,
  rating: 2,
  wasHelpful: false,
  hadIssues: ['incorrect_info', 'too_long'],
  correctedResponse: 'The correct answer is...'
});

Auto-Instrumentation

import { init, autoInstrument } from '@thinkhive/sdk';

// Initialize SDK
init({
  apiKey: 'your_api_key',
  serviceName: 'my-ai-agent',
  autoInstrument: true,
  frameworks: ['langchain', 'openai']
});

// Or manually instrument
autoInstrument(client, {
  frameworks: ['langchain', 'openai'],
  capturePrompts: true,
  captureResponses: true,
  businessContext: { industry: 'saas' }
});

// Now all LangChain and OpenAI calls are automatically traced!

Analysis Tiers

Tier         Description                            Latency   Cost
rule_based   Pattern matching, keyword extraction   ~50ms     Free
fast_llm     Quick LLM analysis (GPT-3.5)           ~500ms    Low
full_llm     Complete analysis (GPT-4o)             ~3s       Standard
deep         Multi-pass with validation             ~15s      Premium
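One way to put the table above to work is to pick a tier from a latency budget. The helper below is a sketch using the approximate latencies listed; the thresholds are illustrative assumptions, not SDK behavior — tune them for your workload.

```typescript
// Choose an analysis tier given a per-request latency budget, using the
// approximate latencies from the tiers table. Thresholds are illustrative.

type AnalysisTier = 'rule_based' | 'fast_llm' | 'full_llm' | 'deep';

function tierForLatencyBudget(budgetMs: number): AnalysisTier {
  if (budgetMs < 500) return 'rule_based';  // ~50ms, free
  if (budgetMs < 3000) return 'fast_llm';   // ~500ms, low cost
  if (budgetMs < 15000) return 'full_llm';  // ~3s, standard cost
  return 'deep';                            // ~15s, premium multi-pass
}
```

The result can be passed as the `tier` option to `client.explainer.analyze()`, as shown in the Explainer API section.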

Environment Variables

Variable                 Description
THINKHIVE_API_KEY        Your ThinkHive API key
THINKHIVE_ENDPOINT       Custom API endpoint (optional)
THINKHIVE_SERVICE_NAME   Service name for traces (optional)
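A sketch of assembling client options from these variables, assuming the constructor options shown in Quick Start (`apiKey`, `baseUrl`). The `optionsFromEnv` helper and the `serviceName` option name are assumptions for illustration; check the API reference for the actual option names.

```typescript
// Build client options from the environment variables documented above.
// The helper takes the env map explicitly so it is easy to test.

interface ThinkHiveOptions {
  apiKey: string;
  baseUrl: string;
  serviceName?: string;
}

function optionsFromEnv(env: Record<string, string | undefined>): ThinkHiveOptions {
  const apiKey = env.THINKHIVE_API_KEY;
  if (!apiKey) throw new Error('THINKHIVE_API_KEY is required');
  return {
    apiKey,
    // Fall back to the default endpoint when THINKHIVE_ENDPOINT is unset.
    baseUrl: env.THINKHIVE_ENDPOINT ?? 'https://api.thinkhive.ai',
    serviceName: env.THINKHIVE_SERVICE_NAME,
  };
}

// Usage (in Node): new ThinkHive(optionsFromEnv(process.env))
```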

API Reference

See API Documentation for complete type definitions.

License

MIT License - see LICENSE for details.