
TypeScript LLM client with streaming tool execution. Tools fire mid-stream. Built-in function calling works with any model—no structured outputs or native tool support required.


llmist


Streaming-first multi-provider LLM client in TypeScript with home-made tool calling.

llmist implements its own tool-calling syntax, called "gadgets" - tools execute the moment their block is parsed, not after the full response completes. It works with any model that can follow instructions.
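The mid-stream idea can be sketched in a few lines. Everything below is an illustration, not llmist's internals: the `<<gadget:NAME>>...<<end>>` marker, `runMidStream`, and `GadgetFn` are made-up names, and llmist's actual wire format is not shown here.

```typescript
// Illustrative sketch only: a hypothetical <<gadget:NAME>>...<<end>> block
// syntax is assumed; llmist's real format is internal to the library.
type GadgetFn = (params: Record<string, unknown>) => string;

// Scan the stream as chunks arrive and execute each gadget block the moment
// its closing marker appears, instead of waiting for the stream to finish.
function runMidStream(
  chunks: Iterable<string>,
  gadgets: Record<string, GadgetFn>,
): string[] {
  const blockRe = /<<gadget:(\w+)>>([\s\S]*?)<<end>>/;
  const results: string[] = [];
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    let match: RegExpMatchArray | null;
    while ((match = buffer.match(blockRe)) !== null) {
      const [block, name, json] = match;
      results.push(gadgets[name](JSON.parse(json)));
      // Drop everything up to and including the executed block.
      buffer = buffer.slice(match.index! + block.length);
    }
  }
  return results;
}
```

The key property is that execution happens inside the chunk loop: a tool call buried early in a long response runs while the rest of the tokens are still streaming in.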

Installation

npm install llmist

Quick Start

import { Gadget, LLMist, z } from 'llmist';

// Define a gadget (tool) with Zod schema
class Calculator extends Gadget({
  description: 'Performs arithmetic operations',
  schema: z.object({
    operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
    a: z.number(),
    b: z.number(),
  }),
}) {
  execute(params: this['params']): string {
    const { operation, a, b } = params;
    switch (operation) {
      case 'add': return String(a + b);
      case 'subtract': return String(a - b);
      case 'multiply': return String(a * b);
      case 'divide': return String(a / b);
    }
  }
}

// Run the agent
const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(Calculator)
  .askAndCollect('What is 15 times 23?');

console.log(answer);

Features

  • Streaming-first - Tools execute mid-stream, not after the response completes
  • Multi-provider - OpenAI, Anthropic, Gemini, HuggingFace with unified API
  • Type-safe - Full TypeScript inference from Zod schemas
  • Flexible hooks - Observers, interceptors, and controllers for deep integration
  • Built-in cost tracking - Real-time token counting and cost estimation
  • Multimodal - Vision and audio input support

Providers

Set one of these environment variables:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export HF_TOKEN="hf_..."

Use model aliases for convenience:

.withModel('sonnet')   // Claude 3.5 Sonnet
.withModel('opus')     // Claude Opus 4
.withModel('gpt4o')    // GPT-4o
.withModel('flash')    // Gemini 2.0 Flash
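An alias is presumably just a lookup into a table of full model references. A hypothetical sketch (the names `ALIASES` and `resolveModel`, the provider tags, and the table layout are assumptions, not llmist's actual internals; the display names come from the list above):

```typescript
// Hypothetical alias table -- llmist's real lookup and exact model IDs are
// internal to the library.
interface ModelRef {
  provider: "anthropic" | "openai" | "gemini";
  name: string;
}

const ALIASES: Record<string, ModelRef> = {
  sonnet: { provider: "anthropic", name: "Claude 3.5 Sonnet" },
  opus: { provider: "anthropic", name: "Claude Opus 4" },
  gpt4o: { provider: "openai", name: "GPT-4o" },
  flash: { provider: "gemini", name: "Gemini 2.0 Flash" },
};

function resolveModel(alias: string): ModelRef {
  const ref = ALIASES[alias];
  if (!ref) throw new Error(`Unknown model alias: ${alias}`);
  return ref;
}
```

Full model identifiers also work wherever an alias does, so aliases are a convenience rather than a requirement.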

Documentation

Full documentation at llmist.dev

Examples

See the examples directory for runnable examples covering all features.

License

MIT