## Package Exports

- `@thinkhive/sdk`
- `@thinkhive/sdk/instrumentation/langchain`
- `@thinkhive/sdk/instrumentation/openai`
- `@thinkhive/sdk/integrations/customer-context`
- `@thinkhive/sdk/integrations/ticket-linking`
# ThinkHive SDK v4.0.0

The official JavaScript/TypeScript SDK for ThinkHive, the AI Agent Observability Platform.
## Features
- OpenTelemetry-Based Tracing: Built on OTLP for seamless integration with existing observability tools
- Run-Centric Architecture: Atomic unit of work tracking with claims, calibration, and linking
- Facts vs Inferences: Claims API for separating verified facts from inferences
- Deterministic Ticket Linking: 7 methods for linking runs to support tickets
- Calibrated Predictions: Brier scores for prediction accuracy
- Auto-Instrumentation: Works with LangChain, OpenAI, Anthropic, and more
- Multi-Format Support: Normalizes traces from 25+ observability platforms
## Installation

```bash
npm install @thinkhive/sdk
```

## Quick Start
### Basic Initialization

```typescript
import { init, runs, shutdown } from '@thinkhive/sdk';

// Initialize the SDK
init({
  apiKey: 'thk_your_api_key',
  serviceName: 'my-ai-agent',
  autoInstrument: true,
  frameworks: ['langchain', 'openai'],
});

// Create a run (atomic unit of work)
const run = await runs.create({
  agentId: 'weather-agent',
  conversationMessages: [
    { role: 'user', content: 'What is the weather in San Francisco?' },
    { role: 'assistant', content: 'The weather in San Francisco is currently 65°F and sunny.' }
  ],
  outcome: 'success',
});
console.log(`Run ID: ${run.id}`);

// Shutdown when done
await shutdown();
```

### Manual Tracing
```typescript
import { init, traceLLM, traceRetrieval, traceTool } from '@thinkhive/sdk';

init({ apiKey: 'thk_your_api_key', serviceName: 'my-agent' });

// Trace an LLM call
const response = await traceLLM({
  name: 'generate-response',
  modelName: 'gpt-4',
  provider: 'openai',
  input: { prompt: 'Hello!' }
}, async () => {
  // Your LLM call here
  return await openai.chat.completions.create({...});
});

// Trace a retrieval operation
const docs = await traceRetrieval({
  name: 'search-knowledge-base',
  query: 'refund policy',
  topK: 5
}, async () => {
  return await vectorStore.similaritySearch('refund policy', 5);
});

// Trace a tool call
const result = await traceTool({
  name: 'lookup-order',
  toolName: 'order_lookup',
  parameters: { orderId: '12345' }
}, async () => {
  return await lookupOrder('12345');
});
```

## Analyzer API (User-Selected Analysis)
```typescript
import { analyzer } from '@thinkhive/sdk';

// Estimate cost before running analysis
const estimate = await analyzer.estimateCost({
  traceIds: ['trace-1', 'trace-2', 'trace-3'],
  tier: 'standard',
});
console.log(`Estimated cost: $${estimate.estimatedCost}`);

// Analyze specific traces
const analysis = await analyzer.analyze({
  traceIds: ['trace-1', 'trace-2'],
  tier: 'standard',
  includeRootCause: true,
  includeLayers: true,
});

// Analyze traces by time window with smart sampling
const windowAnalysis = await analyzer.analyzeWindow({
  agentId: 'support-agent',
  startDate: new Date('2024-01-01'),
  endDate: new Date('2024-01-31'),
  filters: { outcomes: ['failure'], minSeverity: 'medium' },
  sampling: { strategy: 'smart', samplePercent: 10 },
});

// Get aggregated insights
const summary = await analyzer.summarize({
  agentId: 'support-agent',
  startDate: new Date('2024-01-01'),
  endDate: new Date('2024-01-31'),
});
```

## Issues API (Clustered Failure Patterns)
```typescript
import { issues } from '@thinkhive/sdk';

// List issues for an agent
const issueList = await issues.list('support-agent', {
  status: 'open',
  limit: 10,
});

// Get a specific issue
const issue = await issues.get('issue-123');

// Get fixes for an issue
const fixes = await issues.getFixes('issue-123');
```

## API Key Management
```typescript
import { apiKeys, hasPermission, canAccessAgent } from '@thinkhive/sdk';

// Create a scoped API key
const result = await apiKeys.create({
  name: 'CI Pipeline Key',
  permissions: {
    read: true,
    write: true,
    delete: false
  },
  scopeType: 'agent', // Restrict to specific agents
  allowedAgentIds: ['agent-prod-001'],
  environment: 'production',
  expiresAt: new Date(Date.now() + 90 * 24 * 60 * 60 * 1000) // 90 days
});
console.log(`Key created: ${result.name} (${result.keyPrefix}...)`);

// Check permissions
if (hasPermission(result, 'write')) {
  // Can write data
}

// Check agent access
if (canAccessAgent(result, 'agent-123')) {
  // Can access this agent
}
```

## Claims API (Facts vs Inferences)
```typescript
import { claims, isFact, isInference, getHighConfidenceClaims } from '@thinkhive/sdk';

// List claims for a run
const claimList = await claims.list(runId);

// Filter by type
const facts = claimList.filter(isFact);
const inferences = claimList.filter(isInference);

// Get high-confidence claims
const confident = getHighConfidenceClaims(claimList, 0.9);
```

## Calibration API (Prediction Accuracy)
```typescript
import { calibration, calculateBrierScore, isWellCalibrated } from '@thinkhive/sdk';

// Get calibration status for a prediction type
const status = await calibration.status(agentId, 'churn_risk');

// Get all calibration metrics
const metrics = await calibration.allMetrics(agentId);

// Calculate Brier score for predictions
const brierScore = calculateBrierScore(predictions);

// Check if well calibrated
if (isWellCalibrated(brierScore)) {
  console.log('Agent predictions are well calibrated');
}
```

## Business Metrics API
```typescript
import {
  businessMetrics,
  isMetricReady,
  needsMoreTraces,
  getStatusMessage
} from '@thinkhive/sdk';

// Get current metric value with status
const metric = await businessMetrics.current('agent-123', 'Deflection Rate');
console.log(`${metric.metricName}: ${metric.valueFormatted}`);
if (metric.status === 'insufficient_data') {
  console.log(`Need ${metric.minTraceThreshold - metric.traceCount} more traces`);
}

// Get historical data for graphing
const history = await businessMetrics.history('agent-123', 'Deflection Rate', {
  startDate: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
  endDate: new Date(),
  granularity: 'daily',
});
console.log(`${history.dataPoints.length} data points`);
console.log(`Change: ${history.summary.changePercent}%`);

// Record external metric values (from CRM, surveys, etc.)
await businessMetrics.record('agent-123', {
  metricName: 'CSAT/NPS',
  value: 4.5,
  unit: 'score',
  periodStart: '2024-01-01T00:00:00Z',
  periodEnd: '2024-01-07T23:59:59Z',
  source: 'survey_system',
  sourceDetails: { surveyId: 'survey_456', responseCount: 150 },
});
```

### Metric Status Types
| Status | Description |
|---|---|
| `ready` | Metric calculated and ready to display |
| `insufficient_data` | Need more traces before calculation |
| `awaiting_external` | External data source not connected |
| `stale` | Data is older than expected |
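Client code can branch on these statuses before rendering a metric. A minimal sketch; the `MetricLike` shape below mirrors the fields used in the example above and is illustrative, not the SDK's exported type:

```typescript
// Illustrative union type mirroring the status table above.
type MetricStatus = 'ready' | 'insufficient_data' | 'awaiting_external' | 'stale';

interface MetricLike {
  status: MetricStatus;
  valueFormatted?: string;
  traceCount?: number;
  minTraceThreshold?: number;
}

// Produce a human-readable message for each status.
function describeMetric(metric: MetricLike): string {
  switch (metric.status) {
    case 'ready':
      return `Value: ${metric.valueFormatted}`;
    case 'insufficient_data':
      return `Need ${(metric.minTraceThreshold ?? 0) - (metric.traceCount ?? 0)} more traces`;
    case 'awaiting_external':
      return 'Connect the external data source to enable this metric';
    case 'stale':
      return 'Data is older than expected; check the ingestion pipeline';
  }
}
```

An exhaustive `switch` over the status union lets the compiler flag any status added in a future SDK version.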
## Ticket Linking (Zendesk Integration)
```typescript
import {
  linking,
  generateZendeskMarker,
  linkRunToZendeskTicket
} from '@thinkhive/sdk';

// Generate a marker to embed in a ticket
const marker = generateZendeskMarker(runId);
// Returns: <!-- thinkhive:run:abc123 -->

// Link a run to a ticket
await linkRunToZendeskTicket(runId, ticketId);

// Get the best linking method
import { getBestLinkMethod } from '@thinkhive/sdk';
const method = getBestLinkMethod(runData);
// Returns: 'conversation_id' | 'subject_hash' | 'marker' | etc.
```
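To show how the marker method works end to end, here is a self-contained sketch of generating and recovering such a marker. The `parseZendeskMarker` helper and its regex are illustrative assumptions, not the SDK's internals:

```typescript
// Build a marker matching the format shown above.
function makeMarker(runId: string): string {
  return `<!-- thinkhive:run:${runId} -->`;
}

// Recover a run ID from ticket HTML that may contain a marker.
// Assumption: run IDs are URL-safe (letters, digits, '-', '_').
function parseZendeskMarker(html: string): string | null {
  const match = html.match(/<!--\s*thinkhive:run:([A-Za-z0-9_-]+)\s*-->/);
  return match ? match[1] : null;
}
```

Because the marker is an HTML comment, it stays invisible in rendered ticket bodies but survives round-trips through the ticketing system, which is what makes it a deterministic linking method.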
## Auto-Instrumentation

```typescript
import { init } from '@thinkhive/sdk';

// Initialize with auto-instrumentation
init({
  apiKey: 'thk_your_api_key',
  serviceName: 'my-ai-agent',
  autoInstrument: true,
  frameworks: ['langchain', 'openai', 'anthropic']
});

// Now all LangChain, OpenAI, and Anthropic calls are automatically traced!
```

## Analysis Tiers
| Tier | Description | Use Case |
|---|---|---|
| `fast` | Quick pattern-based analysis | High-volume, low-latency needs |
| `standard` | LLM-powered analysis | Default for most use cases |
| `deep` | Multi-pass with validation | Critical traces, root cause analysis |
## Environment Variables

| Variable | Description |
|---|---|
| `THINKHIVE_API_KEY` | Your ThinkHive API key |
| `THINKHIVE_ENDPOINT` | Custom API endpoint (default: `https://app.thinkhive.ai`) |
| `THINKHIVE_SERVICE_NAME` | Service name for traces (optional) |
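For illustration, config resolution against these variables could look like the sketch below. The precedence (explicit option, then environment variable, then default) is an assumption about the SDK's behavior, not documented API; only the variable names and the default endpoint come from the table above:

```typescript
interface SdkConfig {
  apiKey: string;
  endpoint: string;
  serviceName?: string;
}

// Resolve config from explicit options and an env map.
// Passing env as a parameter keeps the function pure and testable.
function resolveConfig(
  explicit: Partial<SdkConfig>,
  env: Record<string, string | undefined>
): SdkConfig {
  const apiKey = explicit.apiKey ?? env.THINKHIVE_API_KEY;
  if (!apiKey) {
    throw new Error('apiKey is required (pass it to init() or set THINKHIVE_API_KEY)');
  }
  return {
    apiKey,
    endpoint: explicit.endpoint ?? env.THINKHIVE_ENDPOINT ?? 'https://app.thinkhive.ai',
    serviceName: explicit.serviceName ?? env.THINKHIVE_SERVICE_NAME,
  };
}
```

In application code you would call `resolveConfig(options, process.env)`.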
## Architecture

### Key Concepts
**Run-Centric Model**: The atomic unit of work is a "Run" (not a trace). A run captures:
- Conversation messages
- Retrieved contexts
- Tool calls
- Outcome and metadata
**Facts vs Inferences**: The Claims API separates:
- Facts: Verified information from retrieval or tool calls
- Inferences: LLM-generated conclusions
- Computed: Derived values from rules
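The three claim types above can be sketched as a local TypeScript union with simple filters. This is illustrative; the SDK's actual `Claim` shape and its exported `isFact`/`isInference` helpers may differ:

```typescript
// Illustrative claim model mirroring the three types described above.
type ClaimType = 'fact' | 'inference' | 'computed';

interface Claim {
  type: ClaimType;
  statement: string;
  confidence: number; // 0..1
}

// Type-predicate filters, in the spirit of the SDK's isFact/isInference.
const isFactClaim = (c: Claim): c is Claim & { type: 'fact' } => c.type === 'fact';
const isInferenceClaim = (c: Claim): c is Claim & { type: 'inference' } => c.type === 'inference';

// Keep only claims at or above a confidence threshold.
const highConfidence = (list: Claim[], threshold: number): Claim[] =>
  list.filter((c) => c.confidence >= threshold);
```

The distinction matters downstream: facts can be asserted to users verbatim, while inferences should be hedged or verified first.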
**Calibrated Predictions**: Track prediction accuracy using:
- Brier scores for overall calibration
- ECE (Expected Calibration Error) for bucketed analysis
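Both measures have compact definitions: the Brier score is the mean squared error between predicted probabilities and observed binary outcomes (0 is perfect), and ECE buckets predictions by confidence and averages the gap between mean confidence and observed accuracy. A self-contained sketch; the input shape of the SDK's `calculateBrierScore` is an assumption:

```typescript
interface Prediction {
  probability: number; // predicted probability of the event (0..1)
  outcome: 0 | 1;      // what actually happened
}

// Brier score: mean of (probability - outcome)^2. Lower is better.
function brierScore(preds: Prediction[]): number {
  const sum = preds.reduce((s, p) => s + (p.probability - p.outcome) ** 2, 0);
  return sum / preds.length;
}

// Expected Calibration Error over equal-width confidence buckets.
function expectedCalibrationError(preds: Prediction[], buckets = 10): number {
  let ece = 0;
  for (let b = 0; b < buckets; b++) {
    const lo = b / buckets;
    const hi = (b + 1) / buckets;
    // Last bucket is closed on the right so probability === 1 is included.
    const inBucket = preds.filter(
      (p) => p.probability >= lo && (p.probability < hi || (b === buckets - 1 && p.probability === 1))
    );
    if (inBucket.length === 0) continue;
    const meanConf = inBucket.reduce((s, p) => s + p.probability, 0) / inBucket.length;
    const accuracy = inBucket.reduce((s, p) => s + p.outcome, 0) / inBucket.length;
    ece += (inBucket.length / preds.length) * Math.abs(meanConf - accuracy);
  }
  return ece;
}
```

For example, an agent that always says "70% likely" for events that occur half the time gets a poor ECE even if its Brier score looks tolerable, which is why both numbers are tracked.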
### API Structure

| API | Description |
|---|---|
| `runs` | Create and manage runs (atomic work units) |
| `claims` | Manage facts/inferences for runs |
| `calibration` | Track prediction accuracy |
| `analyzer` | User-selected trace analysis |
| `issues` | Clustered failure patterns |
| `linking` | Connect runs to support tickets |
| `customerContext` | Time-series customer snapshots |
| `apiKeys` | API key management |
| `businessMetrics` | Industry-driven metrics with historical tracking |
| `roiAnalytics` | Business ROI and financial impact analysis |
| `qualityMetrics` | RAG evaluation and hallucination detection |
### Evaluation APIs

| API | Description |
|---|---|
| `humanReview` | Human-in-the-loop review queues |
| `nondeterminism` | Multi-sample reliability testing |
| `evalHealth` | Evaluation metric health monitoring |
| `deterministicGraders` | Rule-based evaluation |
| `conversationEval` | Multi-turn conversation evaluation |
| `transcriptPatterns` | Pattern detection in transcripts |
## Upgrading from v3

See MIGRATION.md for the full v3 → v4 migration guide. Key changes:

- `apiVersion` removed from `init()` options; routing is handled automatically per module
- `calibration.recordOutcome()` and `calibration.reliabilityDiagram()` removed; use `calibration.status()` and `calibration.allMetrics()` instead
- Default endpoint changed from staging to `https://app.thinkhive.ai`
- OTLP init now requires `apiKey` (agent ID alone is not sufficient)
## API Reference

See the API documentation for complete type definitions.

## License

MIT License. See LICENSE for details.