@execlave/sdk
Official JavaScript/TypeScript SDK for the Execlave AI agent governance platform.
Framework integrations: use the LangChain callback handler (`@execlave/sdk/integrations/langchain`) for automatic tracing and policy enforcement, or see the full list of integrations. You will need an API key.
Installation
```shell
npm install @execlave/sdk
```

Quick Start
```typescript
import { Execlave, AgentPausedError } from '@execlave/sdk';

// Initialize the SDK
const ag = new Execlave({
  apiKey: 'exe_prod_xxx', // or set EXECLAVE_API_KEY env var
  environment: 'production',
});

// Register your agent (idempotent — call on startup)
const agent = await ag.registerAgent({
  agentId: 'my-chatbot',
  name: 'Customer Support Bot',
  type: 'chatbot',
  platform: 'custom',
});

// Trace an LLM call
const trace = ag.startTrace({ agentId: 'my-chatbot', sessionId: 'sess_123' });
trace.setInput(userQuestion);
const answer = await llm.call(userQuestion);
trace.setOutput(answer).setModel('gpt-4-turbo').setTokens(150, 300).setCost(0.012);
trace.finish(); // submits to buffer; flushed in background

// Or use the wrap() helper for automatic tracing
const tracedCall = ag.wrap(
  async (question: string) => {
    return await llm.call(question);
  },
  { agentId: 'my-chatbot' },
);
const result = await tracedCall('Hello!');

// Prompt management
const version = await agent.deployPrompt({
  promptTemplate: 'You are a helpful assistant. Answer: {question}',
  systemMessage: 'Be concise.',
  modelName: 'gpt-4-turbo',
  changeSummary: 'Improved conciseness',
});

// Kill-switch aware
try {
  const trace = ag.startTrace();
  // ...
} catch (err) {
  if (err instanceof AgentPausedError) {
    return 'Service temporarily unavailable.';
  }
  throw err;
}

// Graceful shutdown (flushes remaining traces)
await ag.shutdown();
```

Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | `EXECLAVE_API_KEY` env | API key (`exe_prod_xxx`) |
| `baseUrl` | string | `https://api.execlave.com` | Execlave API URL |
| `environment` | string | `production` | Deployment environment |
| `asyncMode` | boolean | `true` | Buffer traces for background flush |
| `batchSize` | number | `100` | Max traces per flush batch |
| `flushIntervalMs` | number | `10000` | Background flush interval (ms) |
| `debug` | boolean | `false` | Enable debug logging |
| `enableControlChannel` | boolean | `true` | Poll agent status for kill-switch |
| `pollIntervalMs` | number | `15000` | Status poll interval (ms) |
API Reference
Execlave
- `ping()` — Check API connectivity
- `registerAgent(opts)` — Register an AI agent
- `startTrace(opts)` — Start a manual trace
- `wrap(fn, opts)` — Wrap a function with automatic tracing
- `checkAgentStatus(agentId?)` — Get agent status
- `flush()` — Flush buffered traces
- `shutdown()` — Flush and shut down
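The `wrap(fn, opts)` helper follows the standard higher-order-function tracing pattern: record the input, await the call, record the output or error, then rethrow. A minimal self-contained sketch of that pattern (the `record` callback stands in for trace submission; this is not the SDK's source):

```typescript
// Illustrates the wrap() pattern: a generic async wrapper that records
// input, output and status around any call. Names here are hypothetical.
type WrapOpts = { agentId: string };
type TraceRecord = { agentId: string; input: unknown; output?: unknown; status: 'ok' | 'error' };

function wrapSketch<A, R>(
  fn: (arg: A) => Promise<R>,
  opts: WrapOpts,
  record: (r: TraceRecord) => void,
): (arg: A) => Promise<R> {
  return async (arg: A) => {
    try {
      const out = await fn(arg);
      record({ agentId: opts.agentId, input: arg, output: out, status: 'ok' });
      return out;
    } catch (err) {
      record({ agentId: opts.agentId, input: arg, status: 'error' });
      throw err; // tracing never swallows the original error
    }
  };
}

const records: TraceRecord[] = [];
const traced = wrapSketch(
  async (question: string) => `echo: ${question}`,
  { agentId: 'my-chatbot' },
  (r) => records.push(r),
);
// traced('Hello!') resolves to 'echo: Hello!' and pushes one 'ok' record
```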
Agent
- `deployPrompt(opts)` — Deploy a new prompt version
- `getCurrentPrompt()` — Get the deployed prompt
- `listPromptVersions()` — List all versions
- `refreshStatus()` — Refresh agent status
- `isPaused` — Whether the agent is kill-switched
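Prompt templates use `{placeholder}` syntax (as in the Quick Start's `Answer: {question}`). A sketch of how such a template could be rendered at call time; `renderPrompt` is a hypothetical helper, not part of the SDK's documented surface:

```typescript
// Fills {name} placeholders in a template from a variables map,
// leaving unknown placeholders untouched. Illustrative helper only.
function renderPrompt(template: string, vars: { [k: string]: string }): string {
  return template.replace(/\{(\w+)\}/g, (match, key: string) =>
    key in vars ? vars[key] : match,
  );
}

const prompt = renderPrompt(
  'You are a helpful assistant. Answer: {question}',
  { question: 'What is your refund policy?' },
);
// prompt === 'You are a helpful assistant. Answer: What is your refund policy?'
```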
Trace
- `setInput(data)` — Set input data (chainable)
- `setOutput(data)` — Set output data (chainable)
- `setModel(name)` — Set model name (chainable)
- `setTokens(prompt, completion)` — Set token counts (chainable)
- `setCost(usd)` — Set cost in USD (chainable)
- `addMetadata(meta)` — Add metadata (chainable)
- `finish(status?, error?, errorType?)` — Submit the trace
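The chainable setters follow the fluent-builder pattern: every setter returns `this`, so calls can be strung together as in the Quick Start. A minimal self-contained sketch of that pattern (not the SDK's actual class):

```typescript
// Fluent-builder sketch: each setter mutates internal state and
// returns `this`, enabling method chaining. Illustrative only.
class TraceSketch {
  data: { [k: string]: unknown } = {};
  setInput(input: unknown): this { this.data.input = input; return this; }
  setOutput(output: unknown): this { this.data.output = output; return this; }
  setModel(name: string): this { this.data.model = name; return this; }
  setTokens(prompt: number, completion: number): this {
    this.data.promptTokens = prompt;
    this.data.completionTokens = completion;
    return this;
  }
  setCost(usd: number): this { this.data.costUsd = usd; return this; }
}

const t = new TraceSketch()
  .setInput('Hello!')
  .setModel('gpt-4-turbo')
  .setTokens(150, 300)
  .setCost(0.012);
```

Returning `this` (rather than a new object) keeps chaining cheap and lets a later `finish()` submit the accumulated state in one shot.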
Zero Dependencies
This SDK has zero runtime dependencies. It uses Node.js built-in http/https modules for network requests.
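To show what a zero-dependency transport looks like, here is a sketch of POSTing a JSON trace batch using only Node's built-in `http` module. The local echo server stands in for the Execlave API; this is an illustration of the approach, not the SDK's transport code:

```typescript
// POST a JSON payload with Node built-ins only — no fetch polyfills,
// no axios. The in-process server substitutes for the real API.
import * as http from 'node:http';

let receivedBody = '';

const server = http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ received: JSON.parse(body).traces.length }));
  });
});

server.listen(0, () => {
  const { port } = server.address() as { port: number };
  const payload = JSON.stringify({ traces: [{ agentId: 'my-chatbot' }] });
  const req = http.request(
    { port, method: 'POST', headers: { 'Content-Type': 'application/json' } },
    (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => {
        receivedBody = body; // '{"received":1}'
        server.close();
      });
    },
  );
  req.end(payload);
});
```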
Legal
By using this SDK, you agree to the Execlave Terms of Service.
License
MIT — see LICENSE for details.