UCL SDK
Find the best-matching tools from UCL using semantic search. Get the top 5 ranked tools with confidence scores and LLM-ready prompts for AI agent integration.
Quick Start
Step 1: Get Your Credentials
Before using the UCL SDK, you need to obtain your credentials from the UCL platform:
- Sign up/Login to the UCL platform at app.ucl.dev
- Navigate to Settings → API Credentials
- Copy your credentials:
- Space ID: Your unique space identifier (looks like: 0b394afd-e5ba-4b7c-85af-a2d6639e7c04)
- Auth Token: Your authentication token (looks like: 7c8bc9182a07e19b262495dd1e27fa21bd5029a9)
Tip: Store these credentials securely in your environment variables or configuration files.
Step 2: Installation
npm install @fastn-ai/ucl-sdk
Step 3: Basic Setup
Create your first UCL integration:
import { UCL, UCLConfig } from '@fastn-ai/ucl-sdk';

// Basic configuration - just your credentials
const config: UCLConfig = {
  authToken: process.env.UCL_AUTH_TOKEN!, // From Step 1
  spaceId: process.env.UCL_SPACE_ID!      // From Step 1
  // Everything else uses defaults
};
async function findTools() {
  // Initialize UCL
  const ucl = new UCL(config);
  await ucl.initialize();

  // Find tools for a user message
  const analysis = await ucl.findTools(
    [{ role: "user", content: "Send an email to john@example.com" }]
  );

  // Check if tools were found
  if (analysis.requiresTool && analysis.topMatches) {
    console.log(`Found ${analysis.topMatches.length} matching tools!`);

    // Show the top matches
    analysis.topMatches.forEach((match, index) => {
      const emoji = match.confidence === 'high' ? '🟢' :
                    match.confidence === 'medium' ? '🟡' : '🔴';
      console.log(`${index + 1}. ${emoji} ${match.tool.function.name} (${match.confidence})`);
    });

    // Best tool recommendation
    console.log(`\nBest match: ${analysis.tool?.function.name}`);
  } else {
    console.log("No matching tools found");
  }
}

findTools();

LLM Integration
The UCL SDK automatically generates LLM-ready prompts for your found tools, making AI agent integration simple:
const analysis = await ucl.findTools(messages);

if (analysis.requiresTool && analysis.llmPrompt) {
  // Ready-to-use prompt for your LLM
  console.log("LLM Prompt:", analysis.llmPrompt);

  // Send to your favorite LLM (OpenAI, Claude, etc.)
  const llmResponse = await your_llm_client.chat({
    messages: [{ role: "user", content: analysis.llmPrompt }]
  });

  // The LLM will return structured tool selection and parameters
  const toolSelection = JSON.parse(llmResponse);
  console.log("Selected tool:", toolSelection.selectedTool);
  console.log("Parameters:", toolSelection.parameters);
}

What You Get with LLM Integration:
- Tool Selection: LLM chooses the best tool from top 5 matches
- Parameter Extraction: Automatically extracts parameters from conversation
- Missing Parameter Detection: Identifies what additional info is needed
- Structured Data: Get data ready for tool execution
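The structured reply can be validated before anything is executed. Below is a minimal sketch: `selectedTool` and `parameters` match the example above, while `missingParameters` is an assumed field name illustrating the missing-parameter detection (check your actual prompt output for the real shape):

```typescript
// Illustrative shape for the parsed LLM reply. `selectedTool` and
// `parameters` appear in the example above; `missingParameters` is an
// assumed name for the missing-parameter list, not a documented field.
interface ToolSelection {
  selectedTool: string;
  parameters: Record<string, unknown>;
  missingParameters?: string[];
}

// A tool call is safe to execute only when nothing is missing.
function isReadyToExecute(selection: ToolSelection): boolean {
  return (selection.missingParameters ?? []).length === 0;
}

const parsed: ToolSelection = {
  selectedTool: "send_email",
  parameters: { to: "john@example.com" },
  missingParameters: ["subject"]
};
console.log(isReadyToExecute(parsed)); // false: still need a subject
```

When `isReadyToExecute` returns false, ask the user a follow-up question for each missing parameter before calling the tool.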
Complete Example: Conversation Agent
We've created a complete conversation agent example that shows how to:
- Handle user conversations naturally
- Ask follow-up questions for missing parameters
- Execute tools when ready
- Maintain conversation history
- Provide user-friendly responses
// See https://github.com/fastnai/ucl-sdk/tree/main/examples for the full implementation
const { UCL } = require('@fastn-ai/ucl-sdk');

// Simple conversation agent that executes tools
class ConversationAgent {
  constructor() {
    this.ucl = new UCL({
      authToken: process.env.UCL_AUTH_TOKEN,
      spaceId: process.env.UCL_SPACE_ID
    });
    this.conversationHistory = []; // keep messages so findTools sees full context
  }

  async processMessage(userMessage) {
    this.conversationHistory.push({ role: "user", content: userMessage });
    const analysis = await this.ucl.findTools([...this.conversationHistory]);

    if (analysis.requiresTool && analysis.tool) {
      // Check for missing parameters and ask follow-up questions
      // Execute tool when all parameters are available
      return this.executeToolWithParameters(analysis.tool, userMessage);
    }
    return "How can I help you today?";
  }
}

Try it yourself:
# Clone and run the example
git clone https://github.com/fastnai/ucl-sdk.git
cd ucl-sdk/examples
node conversation-agent.js

Configuration Guide
Basic Configuration (Recommended)
const config: UCLConfig = {
  authToken: process.env.UCL_AUTH_TOKEN!,
  spaceId: process.env.UCL_SPACE_ID!
  // Uses defaults:
  // - environment: "DRAFT" (for testing)
  // - embedding: Local Xenova model (no external API calls)
};

Advanced Configuration
const config: UCLConfig = {
  authToken: process.env.UCL_AUTH_TOKEN!,
  spaceId: process.env.UCL_SPACE_ID!,
  environment: "LIVE", // Use "DRAFT" for testing
  baseUrl: "https://custom-api.com", // Custom API endpoint

  // Advanced embedding configuration
  embeddingConfig: {
    provider: "openai", // "xenova" (local) or "openai" (cloud)
    openaiApiKey: process.env.OPENAI_API_KEY, // Required for OpenAI
    modelSize: "large", // "small" | "base" | "large" (xenova only)
    preloadModel: true, // Faster startup
    maxBatchSize: 32 // Batch processing size
  }
};

Usage Patterns
Pattern 1: Simple Tool Discovery
async function simpleDiscovery(userMessage: string) {
  const analysis = await ucl.findTools(
    [{ role: "user", content: userMessage }],
    "tenant-123"
  );
  return analysis.requiresTool ? analysis.tool : null;
}

Pattern 2: Conversation-Aware Discovery
async function contextAwareDiscovery(messages: Message[]) {
  const analysis = await ucl.findTools(messages);

  // Filter by confidence level
  const highConfidenceTools = analysis.topMatches?.filter(
    match => match.confidence === 'high'
  );

  return highConfidenceTools && highConfidenceTools.length > 0
    ? highConfidenceTools[0]
    : analysis.topMatches?.[0];
}

Pattern 3: Multi-Step Tool Selection
async function toolSelection(messages: Message[]) {
  const analysis = await ucl.findTools(messages);

  if (!analysis.requiresTool) {
    return { action: "continue_conversation" };
  }

  if (analysis.topMatches?.length === 1) {
    return { action: "auto_select", tool: analysis.tool };
  }

  // Multiple options - use LLM to decide
  if (analysis.llmPrompt) {
    const llmDecision = await callYourLLM(analysis.llmPrompt);
    return { action: "llm_selected", decision: llmDecision };
  }

  return { action: "ask_user", options: analysis.topMatches };
}

Understanding Results
ToolAnalysisResult Structure
{
  requiresTool: true,              // Tools were found
  tool: {                          // Best matching tool
    function: {
      name: "send_email",
      description: "Send an email message",
      parameters: { /* tool parameters */ }
    }
  },
  topMatches: [                    // Top 5 ranked tools
    {
      tool: { /* tool object */ },
      score: 0.89,                 // Similarity score (0-1)
      confidence: "high",          // "high" | "medium" | "low"
      rank: 1                      // Ranking position
    }
  ],
  llmPrompt: "Based on the following...", // Ready-to-use LLM prompt
  toolSelectionContext: "User Intent: Send email\nTop Tools: send_email (high), send_message (medium)",
  analysisMetrics: {
    processingTime: 45,            // Time taken (ms)
    totalToolsAnalyzed: 127,       // Number of tools analyzed
    similarityThreshold: 0.3       // Threshold used
  }
}

Confidence Levels
- High: Score > 0.7, likely correct tool
- Medium: Score 0.3-0.7, probably correct but verify
- Low: Score < 0.3, uncertain match, consider alternatives
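These cutoffs can be mirrored in a small helper if you want to bucket raw scores yourself when post-filtering topMatches. This is a local sketch of the documented thresholds, not an SDK export:

```typescript
type Confidence = 'high' | 'medium' | 'low';

// Maps a 0-1 similarity score to the documented confidence buckets.
// Not an SDK export - a local helper mirroring the table above.
function confidenceForScore(score: number): Confidence {
  if (score > 0.7) return 'high';
  if (score >= 0.3) return 'medium';
  return 'low';
}

console.log(confidenceForScore(0.89)); // high
console.log(confidenceForScore(0.5));  // medium
console.log(confidenceForScore(0.1));  // low
```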
Environment Setup
Using Environment Variables (.env)
# .env file
UCL_AUTH_TOKEN=your-auth-token-here
UCL_SPACE_ID=your-space-id-here
UCL_ENVIRONMENT=DRAFT

// In your code
import dotenv from 'dotenv';
dotenv.config();
const config: UCLConfig = {
  authToken: process.env.UCL_AUTH_TOKEN!,
  spaceId: process.env.UCL_SPACE_ID!,
  environment: process.env.UCL_ENVIRONMENT as 'DRAFT' | 'LIVE'
};

TypeScript Configuration
Add to your tsconfig.json:
{
  "compilerOptions": {
    "moduleResolution": "node",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true
  }
}

Production Deployment
Performance Optimization
const productionConfig: UCLConfig = {
  authToken: process.env.UCL_AUTH_TOKEN!,
  spaceId: process.env.UCL_SPACE_ID!,
  environment: "LIVE",
  embeddingConfig: {
    provider: "xenova",
    modelSize: "base",   // Balance speed vs accuracy
    preloadModel: true,  // Faster responses
    maxBatchSize: 64     // Higher throughput
  }
};

// Initialize once at app startup
const ucl = new UCL(productionConfig);
await ucl.initialize();

// Clear cache periodically to manage memory
setInterval(() => {
  ucl.clearCache();
}, 30 * 60 * 1000); // Every 30 minutes

Error Handling
import { UCL, ConfigurationError } from '@fastn-ai/ucl-sdk';

async function toolDiscovery(messages: Message[]) {
  try {
    const analysis = await ucl.findTools(messages);
    return analysis;
  } catch (error) {
    if (error instanceof ConfigurationError) {
      // Configuration issues (invalid credentials, etc.)
      console.error("UCL Configuration Error:", error.message);
      throw new Error("Service configuration error");
    } else {
      // Network, API, or other errors
      console.error("UCL Service Error:", error.message);
      // Fallback behavior
      return {
        requiresTool: false,
        message: "Tool discovery temporarily unavailable"
      };
    }
  }
}

API Reference
Core Methods
| Method | Description | Returns |
|---|---|---|
| `initialize()` | Load tools and connectors | `Promise<void>` |
| `findTools(messages)` | Find matching tools with LLM prompts | `Promise<ToolAnalysisResult>` |
| `getAvailableTools()` | Get all tools | `Tool[]` |
| `getAvailableConnectors()` | Get all connectors | `Connector[]` |
| `refreshTools()` | Reload tools from API | `Promise<void>` |
| `clearCache()` | Clear analysis cache | `void` |
| `isInitialized()` | Check initialization status | `boolean` |
Type Definitions
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

interface ToolAnalysisResult {
  requiresTool: boolean;
  tool?: Tool;
  topMatches?: ToolMatch[];
  llmPrompt?: string;            // NEW: Ready-to-use LLM prompt
  toolSelectionContext?: string; // NEW: Context for LLM
  requiresConnection?: boolean;
  connector?: Connector;
  message?: string;
  analysisMetrics?: {
    processingTime: number;
    totalToolsAnalyzed: number;
    similarityThreshold: number;
  };
}

interface ToolMatch {
  tool: Tool;
  score: number;                 // 0-1 similarity score
  confidence: 'high' | 'medium' | 'low';
  rank: number;
  relevanceMetrics: {
    nameScore: number;
    descriptionScore: number;
    combinedScore: number;
  };
}

Common Questions
Q: Which embedding provider should I use?
A: Use xenova (local) for privacy and no API costs, or openai for better accuracy with API costs.
Q: What's the difference between DRAFT and LIVE environments?
A: DRAFT is for testing/development with sandbox data. LIVE is for production with real connectors and tools.
Q: How many tools can I analyze at once?
A: The SDK can handle hundreds of tools efficiently. Performance scales with your embedding model choice.
Q: Can I customize the similarity threshold?
A: The SDK automatically adjusts thresholds for good results, but you can filter results by confidence level.
Q: How do I handle rate limits?
A: The SDK includes retry logic. For high-volume usage, consider implementing your own queueing system.
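If the built-in retry logic is not enough for your traffic, a generic exponential-backoff wrapper can sit around any SDK call. This is an illustrative sketch with assumed defaults (attempt count, delays), not part of the SDK:

```typescript
// Generic exponential-backoff wrapper you can put around ucl.findTools()
// if the SDK's built-in retries are not enough. The attempt count and
// delays are illustrative defaults, not SDK settings.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      await new Promise(res => setTimeout(res, baseDelayMs * Math.pow(2, attempt)));
    }
  }
  throw lastError;
}
```

Usage (assuming an initialized `ucl` instance): `const analysis = await withRetry(() => ucl.findTools(messages));`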
License
MIT License - see the LICENSE file for details.
Support
- Website: ucl.dev
- Platform: app.ucl.dev
- Documentation: docs.fastn.ai/ucl-unified-context-layer/about-ucl
- Issues: GitHub Issues