Higgsfield SDK for Node.js and TypeScript
Official SDK for interacting with Higgsfield AI's video and image generation APIs.
Installation
```bash
npm install @higgsfield/client
```

Quick Start
```typescript
import { HiggsfieldClient } from '@higgsfield/client';
import {
  InputImage, InputAudio, inputMotion,
  SoulQuality, SoulSize, BatchSize,
  DoPModel, SpeakVideoQuality, SpeakDuration,
  webhook, strength, seed
} from '@higgsfield/client/helpers';

// Initialize the client
const client = new HiggsfieldClient({
  apiKey: 'YOUR_API_KEY',
  apiSecret: 'YOUR_API_SECRET'
});
```

Authentication
The SDK supports multiple authentication methods:
Option 1: Pass credentials directly
```typescript
const client = new HiggsfieldClient({
  apiKey: 'YOUR_API_KEY',
  apiSecret: 'YOUR_API_SECRET'
});
```

Option 2: Use environment variables
Set the following environment variables:
```bash
export HF_API_KEY="YOUR_API_KEY"
export HF_SECRET="YOUR_API_SECRET"
```

Then initialize without credentials:

```typescript
const client = new HiggsfieldClient();
```

API Endpoints
Image-to-Video Generation (DoP Model)
Generate 5-second videos from static images using the DoP (Director of Photography) model with optional motion presets.
Basic Usage (without motion)
```typescript
// Generate video from image (no motion applied)
const jobSet = await client.generate('/v1/image2video/dop', {
  model: DoPModel.TURBO, // Options: DoPModel.LITE, DoPModel.STANDARD, DoPModel.TURBO
  prompt: 'Cinematic camera movement around the subject',
  input_images: [InputImage.fromUrl('https://example.com/image.jpg')]
});

// The generate method polls for completion automatically by default,
// so results are available directly on jobSet.jobs
if (jobSet.isCompleted) {
  console.log('Video URL:', jobSet.jobs[0].results?.raw.url);
}
```

Using Predefined Motions
First, fetch available motions:
```typescript
// Get available motions (returns Motion[])
const motions: Motion[] = await client.getMotions();

// Motion type structure:
// {
//   id: string;
//   name: string;
//   description?: string;
//   preview_url?: string;
//   start_end_frame?: boolean;
// }

// Find a specific motion
const zoomMotion = motions.find(m => m.name === 'Zoom In');
console.log('Motion preview:', zoomMotion.preview_url);
console.log('Supports start/end frame:', zoomMotion.start_end_frame);
```

Then use a motion in your generation:
```typescript
// Generate video with a specific motion
const jobSet = await client.generate('/v1/image2video/dop', {
  model: DoPModel.TURBO,
  prompt: 'Apply zoom motion to the subject',
  input_images: [InputImage.fromUrl('https://example.com/image.jpg')],
  motions: [
    inputMotion(zoomMotion.id, 0.8) // Motion UUID from getMotions() with strength (0.0 to 1.0)
  ]
});

// Check completion status
if (jobSet.isCompleted) {
  console.log('Video generated successfully');
  console.log('Video URL:', jobSet.jobs[0].results?.raw.url);
}
```

Advanced Example with Upload
```typescript
import fs from 'fs';

// Read a local image
const imageBuffer = fs.readFileSync('path/to/your/image.jpg');

// Upload the image to the Higgsfield CDN
const imageUrl = await client.uploadImage(imageBuffer, 'jpeg');

// Generate video with multiple motions and a webhook
const jobSet = await client.generate('/v1/image2video/dop', {
  model: DoPModel.STANDARD, // Highest quality model
  prompt: 'Cinematic dolly zoom with dramatic lighting',
  input_images: [InputImage.fromUrl(imageUrl)],
  motions: [
    inputMotion('motion-uuid-1', 0.7),
    inputMotion('motion-uuid-2', 0.5)
  ], // Up to 2 motions are allowed
  seed: seed(42), // For reproducible results
  enhance_prompt: true // AI-enhanced prompt
}, {
  webhook: webhook('https://your-webhook-url.com/callback', 'your-webhook-secret')
});

// Handle results (polling happens automatically)
for (const job of jobSet.jobs) {
  if (job.status === 'completed') {
    console.log('Video URL:', job.results?.raw.url);
    console.log('Preview URL:', job.results?.min.url);
  } else if (job.status === 'failed') {
    console.error('Job failed');
  }
}
```

Speech-to-Video Generation (Speak v2)
Generate videos with talking avatars from audio input. Note: Only WAV audio files are supported.
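Because the endpoint rejects non-WAV audio, it can help to sanity-check the file before uploading it with the SDK's generic `upload()` method. The sketch below is illustrative only: `looksLikeWav` is a hypothetical helper (not part of the SDK), and the `'audio/wav'` content type string is an assumption.

```typescript
// Hypothetical pre-flight check (not part of the SDK): verify the buffer
// carries a RIFF/WAVE header before uploading, since only WAV is accepted.
function looksLikeWav(buf: Buffer): boolean {
  return buf.length >= 12 &&
    buf.toString('ascii', 0, 4) === 'RIFF' &&
    buf.toString('ascii', 8, 12) === 'WAVE';
}

// Usage sketch, assuming a configured client and that 'audio/wav' is the
// content type the CDN expects:
// const audioBuffer = fs.readFileSync('path/to/speech.wav');
// if (!looksLikeWav(audioBuffer)) throw new Error('Speak v2 requires WAV audio');
// const audioUrl = await client.upload(audioBuffer, 'audio/wav');
// then pass it as: input_audio: InputAudio.fromUrl(audioUrl)
```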
Basic Usage
```typescript
// Generate a talking avatar video from audio and an image
// Note: audio must be in WAV format
const jobSet = await client.generate('/v1/speak/higgsfield', {
  input_image: InputImage.fromUrl('https://example.com/avatar.jpg'),
  input_audio: InputAudio.fromUrl('https://example.com/speech.wav'), // Only WAV files are supported
  prompt: 'Professional presentation style',
  quality: SpeakVideoQuality.MID, // Options: SpeakVideoQuality.MID or SpeakVideoQuality.HIGH
  duration: SpeakDuration.SHORT, // Options: SpeakDuration.SHORT (5s), MEDIUM (10s), or LONG (15s)
  seed: seed() // Random seed for varied results
});

// Check results
if (jobSet.isCompleted) {
  console.log('Video URL:', jobSet.jobs[0].results?.raw.url);
}
```

Text-to-Image Generation (Soul)
Generate artistic images from text descriptions using the Soul model.
Basic Usage
```typescript
// Generate an image from text
const jobSet = await client.generate('/v1/text2image/soul', {
  prompt: 'A majestic mountain landscape at sunset, oil painting style',
  width_and_height: SoulSize.SQUARE_1536x1536, // See SoulSize for all 13 available sizes
  quality: SoulQuality.SD, // Options: SoulQuality.SD (720p) or SoulQuality.HD (1080p)
  batch_size: BatchSize.SINGLE, // Options: BatchSize.SINGLE (1) or BatchSize.QUAD (4)
  enhance_prompt: true // AI-enhanced prompt optimization
});

// Access the generated image
if (jobSet.isCompleted) {
  console.log('Image URL:', jobSet.jobs[0].results?.raw.url);
}
```

Using Style Presets
First, fetch available styles:
```typescript
// Get available Soul styles (returns SoulStyle[])
const styles: SoulStyle[] = await client.getSoulStyles();

// SoulStyle type structure:
// {
//   id: string;
//   name: string;
//   description: string;
//   preview_url: string;
// }

// Find a specific style
const oilPaintingStyle = styles.find(s => s.name === 'Oil Painting');
```

Then use a style in your generation:
```typescript
// Generate with a specific style
const jobSet = await client.generate('/v1/text2image/soul', {
  prompt: 'Portrait of a wise elderly person',
  style_id: oilPaintingStyle.id, // Style from getSoulStyles()
  style_strength: strength(0.8), // Style intensity (0.0 to 1.0)
  width_and_height: SoulSize.PORTRAIT_1536x2048,
  quality: SoulQuality.HD,
  batch_size: BatchSize.QUAD,
  enhance_prompt: false,
  seed: seed(12345) // For reproducible results
});

// Get all generated images
jobSet.jobs.forEach((job, index) => {
  if (job.status === 'completed') {
    console.log(`Image ${index + 1}:`, job.results?.raw.url);
  }
});
```

Advanced Example with Parameters
```typescript
// Generate with advanced parameters and character consistency
const jobSet = await client.generate('/v1/text2image/soul', {
  prompt: 'Futuristic city with flying cars, cyberpunk aesthetic',
  width_and_height: SoulSize.LANDSCAPE_2048x1152, // Landscape format
  quality: SoulQuality.HD,
  batch_size: BatchSize.QUAD,
  style_id: 'cyberpunk-style-uuid', // From getSoulStyles()
  style_strength: strength(0.9),
  custom_reference_id: 'character-uuid', // Character from custom references
  custom_reference_strength: strength(0.7),
  image_reference: InputImage.fromUrl('https://example.com/reference.jpg'),
  enhance_prompt: true,
  seed: seed(999) // Fixed seed for consistency
}, {
  webhook: webhook('https://your-webhook-url.com/callback', 'your-webhook-secret')
});

// Download the generated images
for (const job of jobSet.jobs) {
  if (job.status === 'completed' && job.results) {
    console.log('Full resolution:', job.results.raw.url);
    console.log('Thumbnail:', job.results.min.url);

    const response = await fetch(job.results.raw.url);
    const buffer = await response.arrayBuffer();
    fs.writeFileSync(`output-${job.id}.jpg`, Buffer.from(buffer));
  }
}
```

Custom Character References (SoulIds)
Create and manage custom character references for consistent character generation across multiple images.
```typescript
import { InputImageType } from '@higgsfield/client';

// List existing SoulIds
const soulIdList = await client.listSoulIds(1, 10); // page 1, 10 items per page
console.log(`Total SoulIds: ${soulIdList.total}`);
soulIdList.items.forEach(soul => {
  console.log(`${soul.name} (${soul.id}): ${soul.status}`);
});

// Create a new SoulId from reference images
const newSoulId = await client.createSoulId({
  name: 'My Character',
  input_images: [
    { type: InputImageType.IMAGE_URL, image_url: 'https://example.com/ref1.jpg' },
    { type: InputImageType.IMAGE_URL, image_url: 'https://example.com/ref2.jpg' },
    { type: InputImageType.IMAGE_URL, image_url: 'https://example.com/ref3.jpg' }
  ]
}, true); // with polling
console.log('Created SoulId:', newSoulId.id);

// Use the SoulId in text-to-image generation
if (newSoulId.isCompleted) {
  const jobSet = await client.generate('/v1/text2image/soul', {
    prompt: 'Portrait in professional attire',
    custom_reference_id: newSoulId.id,
    custom_reference_strength: strength(1),
    width_and_height: SoulSize.PORTRAIT_1536x2048,
    quality: SoulQuality.HD
  });
}
```

API Methods
Core Methods
- `generate(endpoint: string, params: object, options?: { webhook?: WebhookPayload, withPolling?: boolean }): Promise<JobSet>` - Generate content using any Higgsfield API endpoint
- `getMotions(): Promise<Motion[]>` - Get available motions for image-to-video generation
- `getSoulStyles(): Promise<SoulStyle[]>` - Get available Soul styles for text-to-image generation
- `uploadImage(imageBuffer: Buffer, format?: 'jpeg' | 'png' | 'webp'): Promise<string>` - Upload an image and get its URL
- `upload(data: Buffer | Uint8Array, contentType: string): Promise<string>` - Upload any data with a specific content type
- `createSoulId(data: SoulIdCreateData, withPolling?: boolean): Promise<SoulId>` - Create a custom character reference (SoulId) for consistent generation
- `listSoulIds(page?: number, pageSize?: number): Promise<SoulIdListResponse>` - List your SoulIds with pagination
Working with Jobs
Job Status Monitoring
```typescript
// Create a job set without automatic polling
const jobSet = await client.generate('/v1/text2image/soul', {
  prompt: 'Beautiful landscape',
  width_and_height: '1536x1536'
}, {
  withPolling: false // Disable automatic polling
});

// Check status
console.log('JobSet ID:', jobSet.id);

// Manual polling requires the client's internal axios instance and config,
// which are private, so the loop below is for demonstration only.
// In practice, keep withPolling: true (the default) for automatic polling.
while (!jobSet.isCompleted && !jobSet.isFailed && !jobSet.isCanceled) {
  // await jobSet.poll(axiosClient, configObject);
  console.log('Jobs status:', jobSet.jobs.map(j => j.status));
  await new Promise(resolve => setTimeout(resolve, 2000)); // Wait 2 seconds
  break; // Exit early: without poll(), the statuses never change
}

// Access results
for (const job of jobSet.jobs) {
  if (job.results) {
    console.log('Result:', job.results.raw.url);
  }
}
```

Error Handling
The SDK provides comprehensive error handling with specific error types for different scenarios:
```typescript
import {
  AuthenticationError,
  BadInputError,
  ValidationError,
  NotEnoughCreditsError,
  APIError
} from '@higgsfield/client';

try {
  const jobSet = await client.generate('/v1/text2image/soul', {
    prompt: 'A beautiful landscape',
    quality: SoulQuality.HD,
    width_and_height: SoulSize.LANDSCAPE_2048x1152,
    batch_size: BatchSize.SINGLE
  });

  // Check for specific job failures
  for (const job of jobSet.jobs) {
    switch (job.status) {
      case 'completed':
        console.log('Success:', job.results?.raw.url);
        break;
      case 'failed':
        console.error('Generation failed');
        break;
      case 'nsfw':
        console.warn('Content flagged as NSFW');
        break;
      case 'canceled':
        console.warn('Job was canceled');
        break;
    }
  }
} catch (error) {
  // Handle specific error types
  if (error instanceof AuthenticationError) {
    console.error('❌ Authentication failed - check your API credentials');
  } else if (error instanceof NotEnoughCreditsError) {
    console.error('💳 Insufficient credits - please top up your account');
  } else if (error instanceof BadInputError) {
    console.error('📋 Invalid input parameters:');
    console.error('Message:', error.message);
  } else if (error instanceof ValidationError) {
    console.error('⚠️ Validation error:');
    console.error('Message:', error.message);
  } else if (error instanceof APIError) {
    console.error('🌐 API error:');
    console.error('Status:', error.statusCode);
    console.error('Message:', error.message);
    console.error('Response:', error.responseData);
  } else {
    console.error('💥 Unexpected error:', error);
  }
}
```

Helper Function Validation Errors
Helper functions also throw BadInputError for invalid inputs:
```typescript
try {
  // These will throw BadInputError with descriptive messages
  const invalidStrength = strength(1.5); // Error: Strength must be between 0 and 1
  const invalidSeed = seed(-1); // Error: Seed must be an integer between 0 and 1,000,000
  const emptyImage = InputImage.fromUrl(''); // Error: Image URL must be a non-empty string
  const emptyMotion = inputMotion('', 0.5); // Error: Motion ID must be a non-empty string
} catch (error) {
  if (error instanceof BadInputError) {
    console.error('Invalid helper input:', error.message);
  }
}
```

Configuration Options
```typescript
const client = new HiggsfieldClient({
  // Authentication
  apiKey: 'YOUR_API_KEY',
  apiSecret: 'YOUR_API_SECRET',

  // API configuration
  baseURL: 'https://platform.higgsfield.ai', // Default
  timeout: 120000, // 2 minutes by default

  // Retry configuration
  maxRetries: 3, // A maximum of 5 retries is allowed
  retryBackoff: 1000, // Start with 1 second
  retryMaxBackoff: 60000, // Cap at 60 seconds

  // Polling configuration
  pollInterval: 2000, // Check every 2 seconds
  maxPollTime: 300000, // Time out after 5 minutes

  // Custom headers
  headers: {
    'X-Custom-Header': 'value'
  }
});
```

TypeScript Support
The SDK is written in TypeScript and provides full type definitions:
```typescript
import {
  HiggsfieldClient,
  ClientConfig,
  JobStatus,
  JobSet,
  Job,
  GenerateParams,
  SoulStyle,
  Motion
} from '@higgsfield/client';

// All parameters are fully typed
const params: GenerateParams = {
  prompt: 'A beautiful sunset',
  width: 1024,
  height: 1024
};

// JobSet-level convenience flags
if (jobSet.isCompleted) {
  // All jobs completed
}

// Or check an individual job's status with the JobStatus enum
if (jobSet.jobs[0].status === JobStatus.COMPLETED) {
  // Handle the completed job
}
```

Best Practices
Upload large files: For better performance, upload large image/audio files to the CDN first:
```typescript
const imageUrl = await client.uploadImage(localImageBuffer, 'jpeg');
```
Handle rate limits: Implement exponential backoff for retries:
```typescript
const client = new HiggsfieldClient({
  maxRetries: 5,
  retryBackoff: 2000,
  retryMaxBackoff: 30000
});
```
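To see what retry options like these imply, here is a sketch of the resulting delay schedule, assuming standard exponential doubling capped at `retryMaxBackoff`; the SDK's actual growth factor and any jitter may differ, so treat this as illustrative only.

```typescript
// Illustrative only: the delay before each retry, doubling from retryBackoff
// and capped at retryMaxBackoff (milliseconds).
function backoffSchedule(retryBackoff: number, retryMaxBackoff: number, maxRetries: number): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(Math.min(retryBackoff * 2 ** attempt, retryMaxBackoff));
  }
  return delays;
}

// With retryBackoff: 2000, retryMaxBackoff: 30000, maxRetries: 5 this
// yields 2000, 4000, 8000, 16000, 30000 ms between attempts.
```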
Use webhooks for long operations: For production, consider implementing webhooks instead of polling.
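A webhook receiver should check that callbacks really come from Higgsfield. This README does not document the signing scheme, so the helper below is only a sketch: it assumes the secret passed to `webhook(url, secret)` is used to HMAC-SHA256-sign the raw request body, with the hex digest delivered in a signature header; confirm the actual scheme and header name in the official API docs before relying on it.

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// ASSUMED scheme (verify against the API docs): hex-encoded HMAC-SHA256
// of the raw request body, keyed with your webhook secret.
function verifyWebhookSignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(createHmac('sha256', secret).update(rawBody).digest('hex'));
  const received = Buffer.from(signature);
  // Constant-time comparison to avoid leaking information via timing
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

In an HTTP handler you would read the raw body, verify it with a function like this, and only then parse the job payload and update your application state.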
Cache motion and style IDs: Fetch and cache available motions/styles at startup:
```typescript
const motions = await client.getMotions();
const styles = await client.getSoulStyles();

// Cache these for reuse
const motionMap = new Map(motions.map(m => [m.id, m]));
const styleMap = new Map(styles.map(s => [s.id, s]));
```
Clean up resources: Always close the client when done:
```typescript
client.close();
```
Support
- Documentation: https://docs.higgsfield.ai
- API Status: https://status.higgsfield.ai
License
MIT