# ThinkLang

Type-safe AI for TypeScript. Structured LLM outputs, agentic tool calling, and multi-provider support — in one library.
```ts
import { think, zodSchema } from "thinklang";
import { z } from "zod";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number(),
  explanation: z.string(),
});

const result = await think<z.infer<typeof Sentiment>>({
  prompt: "Analyze the sentiment of this review",
  ...zodSchema(Sentiment),
  context: { review },
});

console.log(result.label); // "positive"
```

Model-agnostic: 9+ providers — Anthropic, OpenAI, Gemini, Groq, DeepSeek, Mistral, Together, OpenRouter, Ollama — or bring your own.
## Why ThinkLang?
### Without ThinkLang (raw OpenAI SDK)

```ts
import OpenAI from "openai";

const client = new OpenAI();
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Analyze the sentiment of this review: " + review }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Sentiment",
      schema: {
        type: "object",
        properties: {
          label: { type: "string", enum: ["positive", "negative", "neutral"] },
          score: { type: "number" },
          explanation: { type: "string" },
        },
        required: ["label", "score", "explanation"],
      },
    },
  },
});
const result = JSON.parse(response.choices[0].message.content!);
```

### With ThinkLang
```ts
import { think, zodSchema } from "thinklang";
import { z } from "zod";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number(),
  explanation: z.string(),
});

const result = await think<z.infer<typeof Sentiment>>({
  prompt: "Analyze the sentiment of this review",
  ...zodSchema(Sentiment),
  context: { review },
});
```

Same result. Less boilerplate. Type-safe. Works with any provider.
## Quick Start

```sh
npm install thinklang
```

```ts
import { think } from "thinklang";

// Set ANTHROPIC_API_KEY (or any provider key) in your environment — it just works
const greeting = await think<string>({
  prompt: "Say hello to the world in a creative way",
  jsonSchema: { type: "string" },
});
console.log(greeting);
```

For Zod schemas, agents, big data processing, and more — see the Library documentation.
## Features

### Agents & Tools

Define tools and run multi-turn agent loops. The agent calls tools until it arrives at a final answer.
```ts
import { agent, defineTool } from "thinklang";
import { z } from "zod";

const searchDocs = defineTool({
  name: "searchDocs",
  description: "Search documentation",
  input: z.object({ query: z.string() }),
  execute: async ({ query }) => await docsIndex.search(query),
});

const result = await agent({ prompt: "Find the answer", tools: [searchDocs], maxTurns: 5 });
```

Also available as a language keyword:
```
tool searchDocs(query: string): string @description("Search documentation") {
  let result = think<string>("Search for relevant documentation") with context: query
  print result
}

let answer = agent<string>("Find the answer to the user's question")
  with tools: searchDocs
  max turns: 5
```

### AI Primitives
`think`, `infer`, and `reason` — three primitives for different tasks.

```ts
const summary = await think({ prompt: "Summarize this article", ...zodSchema(Summary), context: { article } });
const lang = await infer({ value: "Bonjour le monde", hint: "Detect the language", jsonSchema: { type: "string" } });
const plan = await reason({ goal: "Evaluate the portfolio", steps: [...], ...zodSchema(Plan) });
```

Also available as language keywords:
```
let summary = think<Summary>("Summarize this article") with context: article
let lang = infer<string>("Bonjour le monde", "Detect the language")
let plan = reason<Plan> {
  goal: "Evaluate the portfolio"
  steps:
    1. "Assess allocation"
    2. "Identify risks"
}
```

### Structured Types & Validation
Use Zod schemas for type-safe structured output:

```ts
import { z } from "zod";
import { think, zodSchema } from "thinklang";

const Classification = z.object({
  category: z.string().describe("The category of the email"),
  confidence: z.number().describe("Confidence score from 0 to 1"),
});

const result = await think<z.infer<typeof Classification>>({
  prompt: "Classify this email",
  ...zodSchema(Classification),
  context: { email },
});
```

Also available as a language type:
```
type Classification {
  @description("The category of the email")
  category: string
  @description("Confidence score from 0 to 1")
  confidence: float
}
```

### Guards
Validate AI output with declarative constraints and automatic retry.

```ts
const summary = await think({
  prompt: "Summarize this article",
  jsonSchema: { type: "string" },
  context: { article },
  guards: [{ name: "length", constraint: 50, rangeEnd: 200 }],
  retryCount: 3,
  fallback: () => "Could not generate summary",
});
```

Also available as a language keyword:
```
let summary = think<string>("Summarize this article") with context: article
  guard { length: 50..200, contains_none: ["AI", "language model"] }
  on_fail: retry(3) then fallback("Could not generate summary")
```

### Confidence Tracking
`Confident<T>` wraps AI responses with confidence scores and reasoning.

```
let result = think<Confident<Sentiment>>("Analyze this review") with context: review
let safe = result.expect(0.8)          // throws if confidence < 0.8
let fallback = result.or(defaultValue) // returns fallback if not confident
```

### Big Data
Process collections through AI at scale with concurrency control, cost budgeting, and streaming.

```ts
import { mapThink, Dataset, zodSchema } from "thinklang";
import { z } from "zod";

const results = await mapThink({
  items: reviews,
  promptTemplate: (r) => `Classify: "${r}"`,
  ...zodSchema(Sentiment),
  maxConcurrency: 3,
  costBudget: 1.00,
});

const pipeline = await Dataset.from(reviews)
  .map(async (r) => think({ prompt: `Classify: "${r}"`, ...zodSchema(Sentiment) }))
  .filter(async (s) => s.label === "positive")
  .execute({ maxConcurrency: 3 });
```

Also available as language keywords:
```
let sentiments = map_think<Sentiment>(reviews, "Classify this review")
  concurrency: 3
  cost_budget: 1.00

let summary = reduce_think<string>(sentiments, "Summarize all sentiments into a report")
  batch_size: 5
```

### Pattern Matching & Pipeline
```
let response = match sentiment {
  { label: "positive", intensity: >= 8 } => "Very positive!"
  { label: "negative" } => "Negative detected"
  _ => "Neutral or mild"
}

let result = rawText
  |> think<Keywords>("Extract keywords")
  |> think<Report>("Write a report from these keywords")
```

## Also a Language
ThinkLang is also a standalone programming language. Write `.tl` files where AI primitives are first-class keywords, types compile to JSON schemas, and the compiler catches errors before you hit the API.

```sh
npm install -g thinklang
export ANTHROPIC_API_KEY=your-key-here
```

```
type Sentiment {
  @description("positive, negative, or neutral")
  label: string
  score: float
}

let result = think<Sentiment>("Analyze the sentiment of this review")
  with context: review
print result
```

```sh
thinklang run analyze.tl
```

| Command | Description |
|---|---|
| `thinklang run <file.tl>` | Run a ThinkLang program |
| `thinklang compile <file.tl>` | Emit compiled TypeScript |
| `thinklang repl` | Interactive REPL |
| `thinklang test [target]` | Run `.test.tl` test files |
| `thinklang cost-report` | Show cost summary |
See the Language Guide for the full tour.
## Multi-Agent Orchestration

Coordinate multiple specialized agents that delegate tasks to each other.

```ts
import { agent, defineTool } from "thinklang";

const researcher = { name: "researcher", description: "Research topics", tools: [searchTool] };
const writer = { name: "writer", description: "Write content" };

const result = await agent({
  prompt: "Create a climate report",
  tools: [searchTool],
  agents: [researcher, writer],
  maxTurns: 10,
  onAgentCall: (name, input) => console.log(`Delegating to ${name}`),
});
```

## Observability
Optional OpenTelemetry integration — all `think`, `infer`, `reason`, `agent`, and batch calls emit spans automatically when `@opentelemetry/api` is installed.
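The span-per-call pattern can be sketched generically. This is an illustration only, not ThinkLang's internal implementation, and the `Tracer`/`Span` shapes below are simplified local stand-ins for the `@opentelemetry/api` interfaces:

```ts
// Illustration of automatic span emission: wrap every async AI call in a
// span that records its duration. Simplified stand-in interfaces, not the
// real @opentelemetry/api types.
interface Span {
  setAttribute(key: string, value: string | number): void;
  end(): void;
}
interface Tracer {
  startSpan(name: string): Span;
}

// Toy tracer that collects finished spans in memory so they can be inspected.
function makeInMemoryTracer() {
  const finished: Array<{ name: string; attrs: Record<string, string | number> }> = [];
  const tracer: Tracer = {
    startSpan(name) {
      const attrs: Record<string, string | number> = {};
      return {
        setAttribute(key, value) { attrs[key] = value; },
        end() { finished.push({ name, attrs }); },
      };
    },
  };
  return { tracer, finished };
}

// Generic wrapper: routing every call through here is what makes
// instrumentation "automatic" — the span ends even if the call throws.
async function traced<T>(tracer: Tracer, name: string, fn: () => Promise<T>): Promise<T> {
  const span = tracer.startSpan(name);
  const start = Date.now();
  try {
    return await fn();
  } finally {
    span.setAttribute("duration_ms", Date.now() - start);
    span.end();
  }
}
```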
## Prompt Optimization

Record prompt-result pairs and inject few-shot examples automatically:

```ts
import { optimizedThink } from "thinklang";

const result = await optimizedThink({ prompt: "Classify this", jsonSchema: schema });
```

## Multi-Provider
Swap providers with a single environment variable. No code changes needed.
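For example (illustrative placeholder keys; assuming only one provider key is set at a time):

```sh
# Run the same program against different providers by changing one env var.
export OPENAI_API_KEY=sk-...   # selects OpenAI
# unset OPENAI_API_KEY && export GROQ_API_KEY=gsk-...   # selects Groq instead
```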
| Provider | Package | Env Var | Default Model |
|---|---|---|---|
| Anthropic | `@anthropic-ai/sdk` (bundled) | `ANTHROPIC_API_KEY` | `claude-opus-4-6` |
| OpenAI | `openai` (optional) | `OPENAI_API_KEY` | `gpt-4o` |
| Gemini | `@google/generative-ai` (optional) | `GEMINI_API_KEY` | `gemini-2.0-flash` |
| Ollama | (none) | `OLLAMA_BASE_URL` | `llama3` |
| Groq | (none — uses fetch) | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| DeepSeek | (none — uses fetch) | `DEEPSEEK_API_KEY` | `deepseek-chat` |
| Mistral | (none — uses fetch) | `MISTRAL_API_KEY` | `mistral-large-latest` |
| Together | (none — uses fetch) | `TOGETHER_API_KEY` | `meta-llama/Llama-3.3-70B-Instruct-Turbo` |
| OpenRouter | (none — uses fetch) | `OPENROUTER_API_KEY` | `anthropic/claude-sonnet-4` |
Custom providers are supported through the `ModelProvider` interface, `registerProvider()`, or by extending `OpenAICompatibleProvider`.
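As a rough sketch of what a custom provider involves (the interface below is a simplified hypothetical stand-in, not ThinkLang's actual `ModelProvider` signature — check the API Reference before implementing):

```ts
// Hypothetical, simplified provider shape for illustration only.
interface ModelProvider {
  name: string;
  // Takes a prompt plus a JSON schema for the expected output shape,
  // returns the model's raw JSON text.
  complete(prompt: string, jsonSchema: object): Promise<string>;
}

// A stub provider. A real one would call the provider's HTTP API
// (e.g. with fetch) and return the model's response text.
const echoProvider: ModelProvider = {
  name: "echo",
  async complete(prompt, _jsonSchema) {
    return JSON.stringify({ echoed: prompt });
  },
};

// registerProvider(echoProvider); // actual registration — see the API Reference
```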
## IDE Support

The `thinklang-vscode/` directory contains a VS Code extension with syntax highlighting, code snippets, and full LSP integration (diagnostics, hover, completion, go-to-definition, document symbols, signature help).
## Community
- GitHub Issues — bug reports and feature requests
- Contributing Guide — how to set up and submit PRs
- Security Policy — reporting vulnerabilities
## Documentation
Full documentation at thinklang.dev.
- Library Guide — Using ThinkLang from JavaScript/TypeScript
- Why ThinkLang? — Comparison with alternatives
- Language Guide — Getting started with `.tl` files and the CLI
- API Reference — Complete runtime API
- Examples — 10 JS/TS examples + 22 language programs
## License
MIT