Package Exports
- @axonflow/sdk
Readme
AxonFlow SDK for TypeScript
Add invisible AI governance to your applications in 3 lines of code. No UI changes. No user training. Just drop-in enterprise protection.
Installation
npm install @axonflow/sdk
Quick Start
Gateway Mode (Recommended)
Gateway Mode provides the most reliable integration by explicitly separating policy checks, LLM calls, and audit logging:
import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';
// Initialize clients
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const axonflow = new AxonFlow({
licenseKey: process.env.AXONFLOW_LICENSE_KEY,
endpoint: process.env.AXONFLOW_ENDPOINT || 'http://localhost:8080'
});
const prompt = 'What is the capital of France?';
// Step 1: Pre-check policies
const ctx = await axonflow.getPolicyApprovedContext({
userToken: 'user-123',
query: prompt
});
if (!ctx.approved) {
throw new Error(`Blocked: ${ctx.blockReason}`);
}
// Step 2: Make your own LLM call
const startTime = Date.now();
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: prompt }]
});
const latencyMs = Date.now() - startTime;
// Step 3: Audit the call
await axonflow.auditLLMCall({
contextId: ctx.contextId,
responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
provider: 'openai',
model: 'gpt-4',
tokenUsage: {
promptTokens: response.usage?.prompt_tokens || 0,
completionTokens: response.usage?.completion_tokens || 0,
totalTokens: response.usage?.total_tokens || 0
},
latencyMs
});
console.log('Response:', response.choices[0].message.content);
Proxy Mode (Simpler Alternative)
For simpler integrations, Proxy Mode handles policy checking and auditing in a single call:
import { AxonFlow } from '@axonflow/sdk';
const axonflow = new AxonFlow({
licenseKey: process.env.AXONFLOW_LICENSE_KEY,
endpoint: 'http://localhost:8080'
});
// Single call - policies checked, query processed, audit logged
const response = await axonflow.executeQuery({
userToken: 'user-123',
query: 'What is the capital of France?',
requestType: 'chat',
context: {
provider: 'openai',
model: 'gpt-4'
}
});
if (response.success) {
console.log('Response:', response.data);
}
Self-Hosted Mode (No License Required)
Connect to a self-hosted AxonFlow instance running via docker-compose:
import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// Self-hosted (localhost) - no license key needed!
const axonflow = new AxonFlow({
endpoint: 'http://localhost:8080'
// That's it - no authentication required for localhost
});
// Use Gateway Mode for self-hosted
const prompt = 'Test with self-hosted AxonFlow';
const ctx = await axonflow.getPolicyApprovedContext({
userToken: 'user-123',
query: prompt
});
if (!ctx.approved) {
throw new Error(`Blocked: ${ctx.blockReason}`);
}
const startTime = Date.now();
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: prompt }]
});
// Don't forget to audit!
await axonflow.auditLLMCall({
contextId: ctx.contextId,
responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
provider: 'openai',
model: 'gpt-4',
tokenUsage: {
promptTokens: response.usage?.prompt_tokens || 0,
completionTokens: response.usage?.completion_tokens || 0,
totalTokens: response.usage?.total_tokens || 0
},
latencyMs: Date.now() - startTime
});
console.log(response.choices[0].message.content);
Self-hosted deployment:
# Clone and start AxonFlow
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow
export OPENAI_API_KEY=sk-your-key-here
docker-compose up
# SDK connects to http://localhost:8080 - no license needed!
Features:
- ✅ Full AxonFlow features without license
- ✅ Perfect for local development and testing
- ✅ Same API as production
- ✅ Automatically detects localhost and skips authentication
Legacy API Key Auth (Deprecated)
⚠️ Deprecated: apiKey authentication is deprecated. Please migrate to license-based authentication using licenseKey.
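During a gradual migration, a small helper can prefer the license key and fall back to the legacy API key. This is a sketch of our own (pickAuth is not part of the SDK), assuming both environment variables may exist side by side:

```typescript
// Hypothetical migration helper (not part of @axonflow/sdk): prefer
// licenseKey, fall back to the deprecated apiKey if that's all that is set.
type AuthConfig = { licenseKey: string } | { apiKey: string };

function pickAuth(env: Record<string, string | undefined>): AuthConfig {
  if (env.AXONFLOW_LICENSE_KEY) {
    return { licenseKey: env.AXONFLOW_LICENSE_KEY };
  }
  if (env.AXONFLOW_API_KEY) {
    return { apiKey: env.AXONFLOW_API_KEY };
  }
  throw new Error('Set AXONFLOW_LICENSE_KEY (or legacy AXONFLOW_API_KEY)');
}

// Usage: const axonflow = new AxonFlow(pickAuth(process.env));
```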
// Legacy method (still supported for backward compatibility)
const axonflow = new AxonFlow({ apiKey: process.env.AXONFLOW_API_KEY });
Proxy Mode (executeQuery)
Proxy Mode routes all requests through AxonFlow's /api/request endpoint, providing a simpler integration pattern with automatic policy enforcement:
Basic Query Execution
import { AxonFlow, PolicyViolationError } from '@axonflow/sdk';
const axonflow = new AxonFlow({
licenseKey: process.env.AXONFLOW_LICENSE_KEY
});
// Execute a chat query with policy enforcement
const response = await axonflow.executeQuery({
userToken: 'user-123',
query: 'Explain quantum computing in simple terms',
requestType: 'chat',
context: {
provider: 'openai',
model: 'gpt-4'
}
});
if (response.success) {
console.log('Response:', response.data);
console.log('Policies evaluated:', response.policyInfo?.policiesEvaluated);
}
Handling Policy Violations
try {
await axonflow.executeQuery({
userToken: 'user-123',
query: 'Process this SSN: 123-45-6789',
requestType: 'chat'
});
} catch (error) {
if (error instanceof PolicyViolationError) {
console.log('Request blocked:', error.blockReason);
console.log('Violating policies:', error.policies);
}
}
SQL Query Governance
// SQL queries get additional injection detection
const sqlResponse = await axonflow.executeQuery({
userToken: 'analyst-user',
query: "SELECT name, email FROM customers WHERE status = 'active' LIMIT 100",
requestType: 'sql'
});
Health Check
// Check if AxonFlow agent is healthy
const health = await axonflow.healthCheck();
if (health.status === 'healthy') {
console.log('Agent version:', health.version);
console.log('Uptime:', health.uptime);
} else {
console.warn('Agent status:', health.status);
}
Request Types
| Request Type | Description |
|---|---|
| `chat` | General chat/LLM queries |
| `sql` | SQL queries (with injection detection) |
| `mcp-query` | MCP connector queries |
| `multi-agent-plan` | Generate multi-agent plans |
| `execute-plan` | Execute a generated plan |
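For compile-time safety, the request types above can be captured as a string-literal union. The type below is our own sketch, not something exported by @axonflow/sdk (check the package's typings for an official equivalent):

```typescript
// The five request types from the table above, as a string-literal union.
// Illustrative only - not exported by @axonflow/sdk.
type RequestType = 'chat' | 'sql' | 'mcp-query' | 'multi-agent-plan' | 'execute-plan';

const REQUEST_TYPES: readonly RequestType[] = [
  'chat',
  'sql',
  'mcp-query',
  'multi-agent-plan',
  'execute-plan',
] as const;

// Narrowing helper: validates untrusted input (e.g. a query parameter)
// before passing it to executeQuery.
function isRequestType(value: string): value is RequestType {
  return (REQUEST_TYPES as readonly string[]).includes(value);
}
```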
Gateway Mode (Direct LLM Calls)
Gateway Mode is for advanced users who want to make direct LLM calls while still getting policy enforcement:
// Step 1: Pre-check policies
const ctx = await axonflow.getPolicyApprovedContext({
userToken: 'user-jwt',
query: 'Analyze customer data',
dataSources: ['postgres']
});
if (!ctx.approved) {
throw new Error(`Blocked: ${ctx.blockReason}`);
}
// Step 2: Make direct LLM call with approved data
const startTime = Date.now();
const llmResponse = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: JSON.stringify(ctx.approvedData) }]
});
// Step 3: Audit the call
await axonflow.auditLLMCall({
contextId: ctx.contextId,
responseSummary: llmResponse.choices[0].message.content?.substring(0, 100) || '',
provider: 'openai',
model: 'gpt-4',
tokenUsage: {
promptTokens: llmResponse.usage?.prompt_tokens || 0,
completionTokens: llmResponse.usage?.completion_tokens || 0,
totalTokens: llmResponse.usage?.total_tokens || 0
},
latencyMs: Date.now() - startTime
});
React Example
import { AxonFlow } from '@axonflow/sdk';
import { useState } from 'react';
const axonflow = new AxonFlow({
licenseKey: process.env.REACT_APP_AXONFLOW_LICENSE_KEY,
endpoint: process.env.REACT_APP_AXONFLOW_ENDPOINT || 'http://localhost:8080'
});
function ChatComponent() {
const [response, setResponse] = useState('');
const [error, setError] = useState<string | null>(null);
const handleSubmit = async (prompt: string) => {
setError(null);
try {
// Use Proxy Mode for simple integrations
// Note: In production, get userToken from your auth context
const result = await axonflow.executeQuery({
userToken: 'user-123', // Replace with actual user token
query: prompt,
requestType: 'chat'
});
if (result.success) {
setResponse(result.data);
}
} catch (err) {
setError(err instanceof Error ? err.message : 'An error occurred');
}
};
return (
// Your existing UI - no changes needed
<div>...</div>
);
}
Next.js API Route
// pages/api/chat.ts
import { AxonFlow, PolicyViolationError } from '@axonflow/sdk';
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const axonflow = new AxonFlow({
licenseKey: process.env.AXONFLOW_LICENSE_KEY,
endpoint: process.env.AXONFLOW_ENDPOINT || 'http://localhost:8080'
});
export default async function handler(req, res) {
const { prompt, userToken } = req.body;
try {
// Step 1: Pre-check policies
const ctx = await axonflow.getPolicyApprovedContext({
userToken: userToken || 'anonymous',
query: prompt
});
if (!ctx.approved) {
return res.status(403).json({ error: ctx.blockReason });
}
// Step 2: Make the LLM call
const startTime = Date.now();
const completion = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: prompt }]
});
const latencyMs = Date.now() - startTime;
// Step 3: Audit the call
await axonflow.auditLLMCall({
contextId: ctx.contextId,
responseSummary: completion.choices[0].message.content?.substring(0, 100) || '',
provider: 'openai',
model: 'gpt-4',
tokenUsage: {
promptTokens: completion.usage?.prompt_tokens || 0,
completionTokens: completion.usage?.completion_tokens || 0,
totalTokens: completion.usage?.total_tokens || 0
},
latencyMs
});
res.json({ success: true, response: completion.choices[0].message.content });
} catch (error) {
if (error instanceof PolicyViolationError) {
return res.status(403).json({ error: error.blockReason });
}
const message = error instanceof Error ? error.message : 'Unknown error';
res.status(500).json({ error: message });
}
}
Configuration Options
const axonflow = new AxonFlow({
apiKey: 'your-api-key', // Use client_id from AxonFlow (deprecated - prefer licenseKey)
// Optional settings
mode: 'production', // or 'sandbox' for testing
endpoint: 'https://staging-eu.getaxonflow.com', // Default public endpoint
tenant: 'your-tenant-id', // For multi-tenant setups (use client_id)
debug: true, // Enable debug logging
// Retry configuration
retry: {
enabled: true,
maxAttempts: 3,
delay: 1000
},
// Cache configuration
cache: {
enabled: true,
ttl: 60000 // 1 minute
}
});
VPC Private Endpoint (Low-Latency)
For customers running within AWS VPC, use the private endpoint for sub-10ms latency:
const axonflow = new AxonFlow({
apiKey: 'your-client-id',
endpoint: 'https://vpc-private-endpoint.getaxonflow.com:8443', // VPC private endpoint
tenant: 'your-client-id',
mode: 'production'
});
Performance:
- Public endpoint: ~100ms (internet routing)
- VPC private endpoint: <10ms P99 (intra-VPC routing)
Note: VPC endpoints require AWS VPC peering setup with AxonFlow infrastructure.
Sandbox Mode (Testing)
// Use sandbox mode for testing without affecting production
const axonflow = AxonFlow.sandbox('demo-key');
// Test with PII detection (will be blocked)
try {
const response = await axonflow.executeQuery({
userToken: 'test-user',
query: 'My SSN is 123-45-6789',
requestType: 'chat'
});
} catch (error) {
// Expected: PolicyViolationError - PII detected
const message = error instanceof Error ? error.message : String(error);
console.log('Correctly blocked:', message);
}
What Gets Protected?
AxonFlow automatically:
- Blocks prompts containing sensitive data (PII, credentials, etc.)
- Redacts personal information from responses
- Enforces rate limits and usage quotas
- Prevents prompt injection attacks
- Logs all requests for compliance audit trails
- Monitors costs and usage patterns
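As a rough illustration of the redaction step: AxonFlow's actual detection and redaction rules run server-side and are far more sophisticated, but a minimal regex sketch conveys the idea of masking PII before it leaves your system:

```typescript
// Illustrative only: mask US SSN-like patterns the way a PII redactor might.
// AxonFlow's real redaction logic is server-side and not reproduced here.
function redactSSN(text: string): string {
  return text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, '***-**-****');
}
```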
Error Handling
import { AxonFlow, PolicyViolationError, AuthenticationError, APIError } from '@axonflow/sdk';
try {
const response = await axonflow.executeQuery({
userToken: 'user-123',
query: prompt,
requestType: 'chat'
});
} catch (error) {
if (error instanceof PolicyViolationError) {
// Request violated a policy
console.log('Policy violation:', error.blockReason);
console.log('Policies:', error.policies);
} else if (error instanceof AuthenticationError) {
// Authentication failed
console.error('Auth error:', error.message);
} else if (error instanceof APIError) {
// API error (status, statusText, body)
console.error(`API error ${error.status}:`, error.body);
} else {
// Other errors
console.error('Error:', error);
}
}
Production Best Practices
Environment Variables: Never hardcode API keys
const axonflow = new AxonFlow({ apiKey: process.env.AXONFLOW_API_KEY });
Fail Open: In production, AxonFlow fails open if unreachable
// If AxonFlow is down, the original call proceeds
// This ensures your app stays operational
Tenant Isolation: Use tenant IDs for multi-tenant apps
const axonflow = new AxonFlow({ apiKey: 'your-key', tenant: getCurrentTenantId() });
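The fail-open behavior can also be made explicit in application code when you want to log an outage while still serving the request. The `withFailOpen` helper below is a sketch of our own, not part of the SDK; note it fails open only on transport errors, while an actual policy denial (`approved: false`) is still enforced:

```typescript
// Hypothetical helper (not part of @axonflow/sdk): if the policy check
// itself fails (network error, agent down), log it and fail open so the
// application keeps working. A policy denial is preserved as-is.
interface PolicyContext {
  approved: boolean;
  contextId?: string;
  blockReason?: string;
}

async function withFailOpen(
  check: () => Promise<PolicyContext>
): Promise<PolicyContext> {
  try {
    return await check();
  } catch (err) {
    console.warn('AxonFlow unreachable, failing open:', err);
    return { approved: true }; // no contextId: skip auditLLMCall on this path
  }
}
```

Usage: `const ctx = await withFailOpen(() => axonflow.getPolicyApprovedContext({ userToken, query: prompt }));`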
Support
- Documentation: https://docs.axonflow.com
- Email: support@axonflow.com
- GitHub: https://github.com/axonflow/sdk-typescript
MCP Connector Marketplace
Integrate with external data sources using AxonFlow's MCP (Model Context Protocol) connectors:
List Available Connectors
const connectors = await axonflow.listConnectors();
connectors.forEach(conn => {
console.log(`Connector: ${conn.name} (${conn.type})`);
console.log(` Description: ${conn.description}`);
console.log(` Installed: ${conn.installed}`);
console.log(` Capabilities: ${conn.capabilities.join(', ')}`);
});
Install a Connector
await axonflow.installConnector({
connector_id: 'amadeus-travel',
name: 'amadeus-prod',
tenant_id: 'your-tenant-id',
options: {
environment: 'production'
},
credentials: {
api_key: process.env.AMADEUS_API_KEY,
api_secret: process.env.AMADEUS_API_SECRET
}
});
console.log('Connector installed successfully!');
Query a Connector
// Query the Amadeus connector for flight information
const resp = await axonflow.queryConnector(
'amadeus-prod',
'Find flights from Paris to Amsterdam on Dec 15',
{
origin: 'CDG',
destination: 'AMS',
date: '2025-12-15'
}
);
if (resp.success) {
console.log('Flight data:', resp.data);
} else {
console.error('Query failed:', resp.error);
}
Production Connectors (November 2025)
AxonFlow now supports 7 production-ready connectors:
Salesforce CRM Connector
Query Salesforce data using SOQL:
// Query Salesforce contacts
const contacts = await axonflow.queryConnector(
'salesforce-crm',
'Find all contacts for account Acme Corp',
{
soql: "SELECT Id, Name, Email, Phone FROM Contact WHERE AccountId = '001xx000003DHP0'"
}
);
console.log(`Found ${contacts.data.length} contacts`);
Authentication: OAuth 2.0 password grant (configured in AxonFlow dashboard)
Snowflake Data Warehouse Connector
Execute analytics queries on Snowflake:
// Query Snowflake for sales analytics
const analytics = await axonflow.queryConnector(
'snowflake-warehouse',
'Get monthly revenue for last 12 months',
{
sql: `SELECT DATE_TRUNC('month', order_date) as month,
COUNT(*) as orders,
SUM(amount) as revenue
FROM orders
WHERE order_date >= DATEADD(month, -12, CURRENT_DATE())
GROUP BY month
ORDER BY month`
}
);
console.log('Revenue data:', analytics.data);
Authentication: Key-pair JWT authentication (configured in AxonFlow dashboard)
Slack Connector
Send notifications and alerts to Slack channels:
// Send Slack notification
const result = await axonflow.queryConnector(
'slack-workspace',
'Send deployment notification to #engineering channel',
{
channel: '#engineering',
text: '🚀 Deployment complete! All systems operational.',
blocks: [
{
type: 'section',
text: {
type: 'mrkdwn',
text: '*Deployment Status*\n✅ All systems operational'
}
}
]
}
);
console.log('Message sent:', result.success);
Authentication: OAuth 2.0 bot token (configured in AxonFlow dashboard)
Available Connectors
| Connector | Type | Use Case |
|---|---|---|
| PostgreSQL | Database | Relational data access |
| Redis | Cache | Distributed rate limiting |
| Slack | Communication | Team notifications |
| Salesforce | CRM | Customer data, SOQL queries |
| Snowflake | Data Warehouse | Analytics, reporting |
| Amadeus GDS | Travel | Flight/hotel booking |
| Cassandra | NoSQL | Distributed database |
For complete connector documentation, see https://docs.getaxonflow.com/mcp
Multi-Agent Planning (MAP)
Generate and execute complex multi-step plans using AI agent orchestration:
Generate a Plan
// Generate a travel planning workflow
const plan = await axonflow.generatePlan(
'Plan a 3-day trip to Paris with moderate budget',
'travel' // Domain hint (optional)
);
console.log(`Generated plan ${plan.planId} with ${plan.steps.length} steps`);
console.log(`Complexity: ${plan.complexity}, Parallel: ${plan.parallel}`);
plan.steps.forEach((step, i) => {
console.log(` Step ${i + 1}: ${step.name} (${step.type})`);
console.log(` Description: ${step.description}`);
console.log(` Agent: ${step.agent}`);
if (step.dependsOn.length > 0) {
console.log(` Depends on: ${step.dependsOn.join(', ')}`);
}
});
Execute a Plan
// Execute the generated plan
const execResp = await axonflow.executePlan(plan.planId);
console.log(`Plan Status: ${execResp.status}`);
console.log(`Duration: ${execResp.duration}`);
if (execResp.status === 'completed') {
console.log(`Result:\n${execResp.result}`);
// Access individual step results
Object.entries(execResp.stepResults || {}).forEach(([stepId, result]) => {
console.log(` ${stepId}:`, result);
});
} else if (execResp.status === 'failed') {
console.error(`Error: ${execResp.error}`);
}
Check Plan Status
// For long-running plans, poll status periodically until execution finishes
let status = await axonflow.getPlanStatus(plan.planId);
while (status.status === 'running') {
console.log('Plan is still executing...');
await new Promise((resolve) => setTimeout(resolve, 2000));
status = await axonflow.getPlanStatus(plan.planId);
}
console.log(`Plan Status: ${status.status}`);
Complete Example: Trip Planning with MAP
import { AxonFlow } from '@axonflow/sdk';
async function planTrip() {
// Initialize client with license key
const axonflow = new AxonFlow({
licenseKey: process.env.AXONFLOW_LICENSE_KEY,
debug: true
});
// 1. Generate multi-agent plan
const plan = await axonflow.generatePlan(
'Plan a 3-day trip to Paris for 2 people with moderate budget',
'travel'
);
console.log(`✅ Generated plan with ${plan.steps.length} steps (parallel: ${plan.parallel})`);
// 2. Execute the plan
console.log('\n🚀 Executing plan...');
const execResp = await axonflow.executePlan(plan.planId);
// 3. Display results
if (execResp.status === 'completed') {
console.log(`\n✅ Plan completed in ${execResp.duration}`);
console.log(`\n🚀 Complete Itinerary:\n${execResp.result}`);
} else {
console.error(`\n❌ Plan failed: ${execResp.error}`);
}
}
planTrip().catch(console.error);
Migration Guide
Migrating from API Key to License Key
If you're currently using apiKey authentication, migrate to license-based authentication:
Before:
const axonflow = new AxonFlow({
apiKey: process.env.AXONFLOW_API_KEY
});
After:
const axonflow = new AxonFlow({
licenseKey: process.env.AXONFLOW_LICENSE_KEY
});
How to get a license key:
- Contact AxonFlow support at dev@getaxonflow.com
- License keys are provided as part of your AxonFlow subscription
- Store keys securely in environment variables or secrets management systems
Backward Compatibility:
- The SDK maintains full backward compatibility with apiKey
- You can migrate at your own pace
License
MIT