# @node-llm/core
A provider-agnostic LLM core for Node.js, heavily inspired by the elegant design of ruby-llm.
node-llm focuses on clean abstractions, minimal magic, and a streaming-first design. It provides a unified interface to interact with various LLM providers without being locked into their specific SDKs.
## 🚀 Features
- Provider-Agnostic: Switch between OpenAI, Anthropic, and others with a single line of config.
- Streaming-First: Native `AsyncIterator` support for real-time token delivery.
- Tool Calling: Automatic execution loop for model-requested functions.
- Multi-modal & Smart Files: Built-in support for Vision (images), Audio, and Text files.
- Fluent API: Chainable methods like `.withTool()` for dynamic registration.
- Resilient: Configurable retry logic at the execution layer.
- Type-Safe: Written in TypeScript with full ESM support.
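The "Resilient" bullet refers to re-invoking a failing call after a fixed delay, mirroring the `retry: { attempts, delayMs }` config shown in the Quick Start below. The following is an illustrative sketch only; `withRetry` and `sleep` are hypothetical names, not the package's internals:

```js
// Illustrative delay-based retry wrapper (NOT the library's actual code).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, { attempts = 3, delayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(); // success: return immediately
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) await sleep(delayMs); // wait before next attempt
    }
  }
  throw lastError; // all attempts exhausted
}
```

The library applies this kind of logic at its execution layer, so callers configure it once instead of wrapping every request themselves.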
## 📦 Installation
```bash
npm install @node-llm/core
# or
pnpm add @node-llm/core
```

## 🛠️ Quick Start
### 1. Configure the Provider
```js
import { LLM } from "@node-llm/core";
import "dotenv/config";

LLM.configure({
  provider: "openai", // Uses OPENAI_API_KEY from env
  retry: { attempts: 3, delayMs: 500 }
});
```

### 2. Basic Chat
```js
const chat = LLM.chat("gpt-4o-mini", {
  systemPrompt: "You are a helpful assistant."
});

const response = await chat.ask("What is Node.js?");

// Use as a string directly
console.log(response);

// Or access metadata (RubyLLM style)
console.log(response.content);
console.log(`Model: ${response.model_id}`);
console.log(`Tokens: ${response.input_tokens} in, ${response.output_tokens} out`);
```

### 3. Streaming Responses
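Streamed replies are consumed with `for await` because the stream is a standard async iterable. For intuition, the same consumption pattern works against any async generator; in this dependency-free sketch, `fakeStream` is a stand-in that mimics the chunk shape, not part of the library:

```js
// A stand-in "stream" built from a plain async generator. Each yielded
// chunk carries a `content` property, like the library's stream chunks.
async function* fakeStream(text) {
  for (const token of text.split(" ")) {
    yield { content: token + " " }; // one chunk per token
  }
}

let output = "";
for await (const chunk of fakeStream("streaming is just iteration")) {
  output += chunk.content; // accumulate chunks as they arrive
}
```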
```js
for await (const chunk of chat.stream("Write a poem")) {
  process.stdout.write(chunk.content);
}
```

### 4. Image Generation (Paint)
Generate images and interact with them using a rich API.
```js
const image = await LLM.paint("a sunset over mountains", {
  model: "dall-e-3"
});

// Use as a URL string
console.log(`URL: ${image}`);

// Or use rich methods
await image.save("sunset.png");
console.log(`Format: ${image.mimeType}`);
```

### 5. Token Usage Tracking
Track tokens for individual turns or the entire conversation.
```js
const response = await chat.ask("Hello!");
console.log(response.input_tokens);  // e.g. 10
console.log(response.output_tokens); // e.g. 5

// Access aggregated usage for the whole session
console.log(chat.totalUsage.total_tokens);
```

## 📚 Examples
Check the examples directory for focused scripts organized by provider:
### OpenAI Examples
| Example | Description |
|---|---|
| Basic Chat | Simple completion request |
| Streaming | Real-time token streaming |
| Tool Calling | Automatic tool execution loop |
| Vision | Image analysis |
| List Models | Enumerate available models |
| Paint | Image generation with DALL-E |
| Image Features | Saving and processing generated images |
| Token Usage | Detailed stats for turns and conversations |
To run an example:
```bash
node examples/openai/01-basic-chat.mjs
```

## 🔌 Advanced Usage
### Tool Calling (Function Calling)
Define your tools and let the library handle the execution loop automatically.
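Concretely, an automatic execution loop means: the model answers with tool calls, each tool's handler runs, the results are appended to the conversation, and the model is asked again until it produces a final answer. Below is a simplified sketch of such a loop under those assumptions; `runToolLoop` and the `model` round-trip function are hypothetical, not the package's code:

```js
// Simplified tool-execution loop. `model` is a hypothetical function that
// performs one LLM round-trip over the message history.
async function runToolLoop(model, tools, messages) {
  for (;;) {
    const reply = await model(messages);
    if (!reply.toolCalls?.length) return reply.content; // final answer
    for (const call of reply.toolCalls) {
      // Look up the requested tool by name and run its handler
      const tool = tools.find((t) => t.function.name === call.name);
      const result = await tool.handler(call.args);
      // Feed the result back so the model can use it on the next turn
      messages.push({ role: "tool", name: call.name, content: result });
    }
  }
}
```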
```js
const weatherTool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get the current weather for a location',
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location']
    }
  },
  handler: async ({ location }) => {
    return JSON.stringify({ location, temp: 22, unit: 'celsius' });
  }
};

// Use the fluent API to add tools on the fly
const reply = await chat
  .withTool(weatherTool)
  .ask("What is the weather in London?");
```

### Multi-modal & File Support
Pass local paths or URLs directly. The library handles reading, MIME detection, and encoding.
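For illustration, MIME detection can be as simple as an extension-to-content-type lookup before encoding the file for the provider. The sketch below is only a demonstration of the idea; `detectMime` and its table are hypothetical, not the library's API:

```js
import path from "node:path";

// Minimal extension-to-MIME lookup, illustrating what "MIME detection"
// for attached files can involve.
const MIME_TYPES = {
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".mp3": "audio/mpeg",
  ".ts": "text/plain",
};

function detectMime(filePath) {
  const ext = path.extname(filePath).toLowerCase();
  return MIME_TYPES[ext] ?? "application/octet-stream"; // safe fallback
}
```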
```js
// Vision
await chat.ask("What's in this image?", {
  files: ["./screenshot.png"]
});

// Audio
await chat.ask("Transcribe this", {
  files: ["./meeting.mp3"]
});

// Text/Code Analysis
await chat.ask("Explain this code", {
  files: ["./app.ts"]
});
```

## 📋 Supported Providers
| Provider | Status | Notes |
|---|---|---|
| OpenAI | ✅ Supported | Chat, Streaming, Tools, Vision, Audio, Images (DALL-E) |
| Anthropic | 🏗️ Roadmap | Coming soon |
| Azure OpenAI | 🏗️ Roadmap | Coming soon |
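The one-line provider switch in `LLM.configure` is possible when every adapter exposes the same call surface, so callers never touch a provider-specific SDK. This registry sketch is purely hypothetical; the package's real provider interface may look different:

```js
// Hypothetical provider registry: each adapter implements the same chat()
// signature, so swapping providers is a config change, not a code change.
const providers = {
  openai: {
    chat: async (model, messages) => `openai:${model}:${messages.length}`,
  },
  anthropic: {
    chat: async (model, messages) => `anthropic:${model}:${messages.length}`,
  },
};

function getProvider(name) {
  const provider = providers[name];
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  return provider;
}
```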
## 🧠 Design Philosophy
- Explicit over Implicit: No hidden side effects.
- Minimal Dependencies: Lightweight core with zero bloat.
- Developer Experience: Inspired by Ruby's elegance, built for Node's performance.
- Production Ready: Built-in retries and strict type checking.
## 📄 License
MIT © [node-llm contributors]