Package Exports
- harmony-protocol-js
- harmony-protocol-js/dist/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (harmony-protocol-js) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
Harmony Protocol for Node.js
A complete TypeScript/JavaScript implementation of OpenAI's Harmony response format for structured conversational AI interactions. This library provides 100% API compatibility with the Rust implementation, enabling structured conversations, multi-channel outputs, tool integration, and real-time streaming support.
⚠️ IMPORTANT: This Library Requires an OpenAI-Compatible Model
This library does NOT include an AI model. It provides the conversation formatting and parsing layer that works with models supporting the Harmony protocol. You need:
- OpenAI API access or compatible model endpoint
- A model that understands Harmony formatting (`<|start|>`, `<|message|>`, `<|end|>` tokens)
- Integration with your preferred inference provider (OpenAI, Azure, local servers)
What this library does:
Your App → Harmony Protocol → [Format Tokens] → OpenAI Model → [Response Tokens] → Harmony Protocol → Structured Output
Overview
This library provides a complete TypeScript/JavaScript implementation of the Harmony response format used by OpenAI's open-weight model series (gpt-oss). It enables parsing and rendering of structured conversations with support for:
- Multiple communication channels (analysis, commentary, final)
- Tool calling and function integration
- Reasoning effort control
- Streaming token parsing
- System and developer instructions
Key Features
🚀 Production-Ready Core
- Minimal Dependencies (tiktoken, used for tokenization, is the only runtime dependency)
- Full TypeScript Support with complete type safety and IntelliSense
- Memory Efficient streaming parser for real-time processing
- High Performance tokenization with tiktoken integration
- Thread Safe operations for concurrent usage
- Comprehensive Error Handling with typed exceptions
🎨 Advanced Output Rendering
- Multiple Output Formats (Text, Markdown, HTML, JSON, CSV)
- Channel-Specific Rendering (analysis, commentary, final)
- Custom Formatting Options (labels, truncation, timestamps)
- Streaming UI Support with incremental updates
- Export Capabilities for analytics and documentation
- CSS-Ready HTML with semantic classes
🔧 Flexible Architecture
- Multiple Encoding Support (o200k_base, custom encodings)
- Extensible Tool System with namespace-based organization
- Configurable Channel Routing with filtering options
- Role-based Validation with automatic message sorting
- Custom Tool Integration with JSON schema validation
- Real-Time Streaming with delta content updates
🌐 Complete Protocol Implementation
- 100% API Compatibility with the Rust implementation
- All Special Tokens supported (200006, 200008, 200007, etc.)
- Multi-Channel System for structured reasoning workflows
- Tool Calling Framework with built-in namespaces (Browser, Python, Functions)
- System Content Builder with fluent API design
- Message Validation with conversation state management
📊 Developer Experience
- 5 Comprehensive Examples with real-world usage patterns
- Complete Documentation with step-by-step guides
- Migration Guide from Rust implementation
- Integration Guides for OpenAI, Azure, local models
- Troubleshooting Guide with common solutions
- Performance Best Practices and optimization tips
Quick Start
Installation
# npm
npm install harmony-protocol-js
# yarn
yarn add harmony-protocol-js
# pnpm
pnpm add harmony-protocol-js
Requirements:
- Node.js ≥ 18.0.0
- TypeScript ≥ 5.0.0 (for TypeScript projects)
30-Second Quick Start
import { loadHarmonyEncoding, Message, Conversation, Role } from 'harmony-protocol-js';
// 1. Load encoding
const encoding = loadHarmonyEncoding();
// 2. Create conversation
const conversation = Conversation.fromMessages([
Message.system('You are a helpful assistant.'),
Message.user('What is 2 + 2?')
]);
// 3. Get tokens ready for your OpenAI model
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
// 4. Send tokens to OpenAI, get response tokens back
// const responseTokens = await openai.complete(tokens);
// 5. Parse response back to structured messages
// const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
Usage Examples
1. Basic Multi-Channel Conversation
import {
loadHarmonyEncoding,
Message,
Conversation,
Role,
Channel,
createSystemContent
} from 'harmony-protocol-js';
const encoding = loadHarmonyEncoding();
// Create a system message with multi-channel support
const systemContent = createSystemContent()
.withIdentity('Expert Mathematics Tutor')
.withRequiredChannels(['analysis', 'final'])
.withReasoningEffort('high')
.build();
// Build conversation
const conversation = Conversation.fromMessages([
Message.system(systemContent),
Message.user('Solve: What is the derivative of x² + 3x + 2?')
]);
// Get tokens for your OpenAI model
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
// After getting response from OpenAI:
// const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
2. Streaming Real-Time Parsing
import { StreamableParser, loadHarmonyEncoding, Role } from 'harmony-protocol-js';
const encoding = loadHarmonyEncoding();
const parser = new StreamableParser(encoding, Role.ASSISTANT);
// Connect to your OpenAI streaming endpoint
async function handleStreamingResponse(streamResponse: ReadableStream) {
const reader = streamResponse.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// Process each token as it arrives
parser.processText(new TextDecoder().decode(value));
// Get real-time content delta for UI updates
const delta = parser.getLastContentDelta();
if (delta) {
console.log('New content:', delta);
// Update your UI immediately
updateStreamingUI(delta);
}
}
// Get final parsed messages
const messages = parser.intoMessages();
console.log('Complete conversation:', messages);
}
3. Advanced Tool Integration
import {
createSystemContent,
createToolDescription,
createToolNamespace,
createBrowserToolNamespace,
createPythonToolNamespace,
Message,
Conversation,
Role,
loadHarmonyEncoding
} from 'harmony-protocol-js';
const encoding = loadHarmonyEncoding();
// Create custom tools
const mathTools = [
createToolDescription('calculate', 'Perform mathematical calculations', {
type: 'object',
properties: {
expression: { type: 'string', description: 'Math expression to evaluate' },
precision: { type: 'number', description: 'Decimal places', default: 2 }
},
required: ['expression']
}),
createToolDescription('plot_function', 'Plot mathematical functions', {
type: 'object',
properties: {
function: { type: 'string', description: 'Function to plot (e.g., x^2 + 2x + 1)' },
xRange: { type: 'array', items: { type: 'number' }, description: '[min, max] for x-axis' }
},
required: ['function']
})
];
// Create tool namespaces
const mathNamespace = createToolNamespace('math', 'Mathematical computation tools', mathTools);
const browserNamespace = createBrowserToolNamespace();
const pythonNamespace = createPythonToolNamespace();
// Build comprehensive system content
const systemContent = createSystemContent()
.withIdentity('Advanced AI Assistant with Tool Access')
.withRequiredChannels(['analysis', 'commentary', 'final'])
.withReasoningEffort('high')
.withTools(mathNamespace)
.withTools(browserNamespace)
.withTools(pythonNamespace)
.build();
const conversation = new Conversation([
Message.system(systemContent),
Message.user('Calculate the roots of 2x² + 5x - 3 = 0 and plot the function')
]);
// Model will now have access to all defined tools
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
4. OpenAI Integration Example
import OpenAI from 'openai';
import { loadHarmonyEncoding, Message, Conversation, Role } from 'harmony-protocol-js';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const encoding = loadHarmonyEncoding();
async function completeWithHarmony(userMessage: string) {
// 1. Create Harmony conversation
const conversation = Conversation.fromMessages([
Message.system('You are a helpful assistant. Use the analysis channel for reasoning.'),
Message.user(userMessage)
]);
// 2. Render to tokens
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
// 3. Send to OpenAI (note: this is conceptual - OpenAI doesn't directly accept token arrays)
// In practice, you'd need a compatible model that accepts Harmony-formatted text
const harmonyText = encoding.decode(tokens);
const response = await openai.completions.create({
model: 'your-harmony-compatible-model', // placeholder: text-davinci-003 is retired; use a model that understands Harmony formatting
prompt: harmonyText,
max_tokens: 1000,
stream: false
});
// 4. Parse response back to structured messages
const responseText = response.choices[0].text || '';
const responseTokens = encoding.encode(responseText);
const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
return messages;
}
// Usage
const result = await completeWithHarmony('Explain quantum computing in simple terms');
console.log('Analysis:', result.find(m => m.channel === 'analysis')?.content);
console.log('Final Answer:', result.find(m => m.channel === 'final')?.content);
5. Output Rendering & Formatting
import {
HarmonyRenderer,
StreamingRenderer,
renderToText,
renderToMarkdown,
renderToHTML,
renderFinalOnly,
renderSummary,
renderChannel,
Channel
} from 'harmony-protocol-js';
// Quick rendering functions
const textOutput = renderToText(conversation);
const markdownOutput = renderToMarkdown(conversation);
const htmlOutput = renderToHTML(conversation);
// Advanced rendering with custom options
const renderer = new HarmonyRenderer({
showChannels: true,
showRoles: true,
channelLabels: {
analysis: '🤔 Thinking',
commentary: '💭 Context',
final: '💬 Answer'
},
maxContentLength: 200,
includeTimestamps: true
});
const rendered = renderer.renderConversation(conversation);
console.log('Text:', rendered.text);
console.log('Markdown:', rendered.markdown);
console.log('HTML:', rendered.html);
// Channel-specific rendering
const finalOnly = renderFinalOnly(conversation); // User-facing content only
const analysisContent = renderChannel(conversation, Channel.ANALYSIS);
// Streaming UI support
const streamingRenderer = new StreamingRenderer();
const updates = streamingRenderer.renderIncremental(newConversation);
if (updates.hasChanges) {
updateUI(updates.newContent); // Real-time updates
}
// Conversation analytics
const summary = renderSummary(conversation);
console.log(summary); // "Conversation contains 5 messages across 3 channels..."
// Export formats
const structured = rendered.structured;
const csvExport = structured.messages.map(m =>
`"${m.role}","${m.channel}","${m.content}",${m.originalLength}`
).join('\n');
6. Error Handling & Validation
import {
HarmonyError,
ParseError,
RenderError,
ValidationError
} from 'harmony-protocol-js';
try {
// Validate conversation before processing
const validation = conversation.validate();
if (!validation.isValid) {
throw new ValidationError(`Invalid conversation: ${validation.errors.join(', ')}`);
}
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
} catch (error) {
if (error instanceof ParseError) {
console.error('Parsing failed:', error.message);
} else if (error instanceof RenderError) {
console.error('Rendering failed:', error.message);
} else if (error instanceof ValidationError) {
console.error('Validation failed:', error.message);
} else {
console.error('Unexpected error:', error);
}
}
Streaming Parser
import { StreamableParser, loadHarmonyEncoding, Role } from 'harmony-protocol-js';
const encoding = loadHarmonyEncoding('o200k_base');
const parser = new StreamableParser(encoding, Role.ASSISTANT);
// In practice, responseTokens would come from your OpenAI model's streaming API
const responseTokens = [200006, 1234, 5678]; // These would be from OpenAI
// Process tokens as they arrive from the model
for (const token of responseTokens) {
parser.process(token);
// Get content delta for real-time streaming UI updates
const delta = parser.getLastContentDelta();
if (delta) {
process.stdout.write(delta); // Show new content to user immediately
}
}
// Get final structured messages after streaming is complete
const messages = parser.intoMessages();
console.log(`\nParsed ${messages.length} messages from model output`);
Message Format
The Harmony format structures conversations using special tokens:
<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Reasoning: medium
# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|>
<|start|>user<|message|>What is 2 + 2?<|end|>
<|start|>assistant<|channel|>analysis<|message|>I need to perform a simple arithmetic calculation.<|end|>
<|start|>assistant<|channel|>final<|message|>2 + 2 equals 4.<|end|>
Channel System
The library supports multiple communication channels for organized model outputs:
- analysis: Internal reasoning and analysis
- commentary: Model explanations and meta-commentary
- final: User-facing final responses
Channels can be configured as required, and the system automatically handles analysis dropping when final responses are complete.
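To make the channel mechanics concrete, here is a minimal, self-contained sketch (independent of this library's API) that recovers `{role, channel, content}` triples from Harmony-formatted text and filters to user-facing output. The regular expression and helper names are illustrative assumptions, not part of the package.

```typescript
interface ParsedMessage {
  role: string;
  channel: string | null;
  content: string;
}

// Each message has the shape:
// <|start|>ROLE[<|channel|>CHANNEL]<|message|>CONTENT<|end|>
function parseHarmonyText(text: string): ParsedMessage[] {
  const messages: ParsedMessage[] = [];
  const pattern =
    /<\|start\|>([^<]+?)(?:<\|channel\|>([^<]+?))?<\|message\|>([\s\S]*?)<\|end\|>/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(text)) !== null) {
    messages.push({ role: match[1], channel: match[2] ?? null, content: match[3] });
  }
  return messages;
}

// Keep only user-facing content, mirroring what renderFinalOnly() is described as doing.
function finalOnly(messages: ParsedMessage[]): ParsedMessage[] {
  return messages.filter((m) => m.channel === 'final');
}

const sample =
  '<|start|>assistant<|channel|>analysis<|message|>Simple arithmetic.<|end|>' +
  '<|start|>assistant<|channel|>final<|message|>2 + 2 equals 4.<|end|>';

console.log(finalOnly(parseHarmonyText(sample))[0].content); // "2 + 2 equals 4."
```

The same filtering idea is what "analysis dropping" builds on: once a `final` message exists, the `analysis` messages can be omitted from what is shown or re-sent.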
Tool Integration
Built-in Tool Namespaces
- Browser Tools: Web browsing, search, and content extraction
- Python Tools: Code execution environment
- Function Tools: Custom function definitions
Custom Tools
import { createToolDescription } from 'harmony-protocol-js';
const customTool = createToolDescription(
'weather',
'Gets current weather for a location',
{
type: 'object',
properties: {
location: { type: 'string' },
units: { type: 'string', enum: ['celsius', 'fahrenheit'] }
},
required: ['location']
}
);
Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Message │ │ Encoding │ │ Streaming │
│ Conversation │◄──►│ HarmonyEncoding │◄──►│ StreamableParser│
│ Types │ │ Token Handling │ │ Real-time Parse │
└─────────────────┘ └─────────────────┘ └─────────────────┘
▲ ▲ ▲
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Tool Integration│ │ Tiktoken │ │ TypeScript │
│ Namespaces │ │ Tokenization │ │ Type Safety │
│ Validation │ │ Encoding │ │ Full Typing │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Special Tokens
| Token | ID | Purpose |
|---|---|---|
| `<\|start\|>` | 200006 | Begins a message header |
| `<\|message\|>` | 200008 | Separates the header from the message content |
| `<\|end\|>` | 200007 | Ends a message |
| `<\|channel\|>` | 200005 | Introduces the channel name in the header |
| `<\|call\|>` | 200012 | Marks a tool call (stop token) |
| `<\|return\|>` | 200002 | Marks the end of a final response (stop token) |
| `<\|constrain\|>` | 200003 | Declares the content type of a tool-call payload |
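As a rough illustration of how these boundary tokens work at the stream level, the following self-contained sketch splits a flat token array into per-message content slices using the `<|start|>`, `<|message|>`, and `<|end|>` IDs quoted earlier in this README (200006, 200008, 200007). The library's `StreamableParser` does this incrementally and far more robustly; this is only a conceptual model.

```typescript
const START = 200006;   // <|start|>
const MESSAGE = 200008; // <|message|>
const END = 200007;     // <|end|>

// Split a flat token stream into per-message content-token slices.
// Header tokens (between <|start|> and <|message|>) are skipped.
function splitMessages(tokens: number[]): number[][] {
  const messages: number[][] = [];
  let inContent = false;
  let current: number[] = [];
  for (const t of tokens) {
    if (t === MESSAGE) {
      inContent = true;
      current = [];
    } else if (t === END) {
      if (inContent) messages.push(current);
      inContent = false;
    } else if (t !== START && inContent) {
      current.push(t);
    }
  }
  return messages;
}

// Two messages: 17 and 42 are header tokens, the rest is content.
const stream = [START, 17, MESSAGE, 1, 2, END, START, 42, MESSAGE, 3, END];
console.log(splitMessages(stream)); // [ [ 1, 2 ], [ 3 ] ]
```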
Performance
- Context Window: 1,048,576 tokens (1M)
- Max Action Length: 524,288 tokens (512K)
- Type Safe: Full TypeScript support
- Memory Efficient: Token reuse and streaming parsing
📋 Complete Examples & Tutorials
Running the Examples
# Clone and setup
git clone https://github.com/terraprompt/harmony-protocol-js-js
cd harmony-protocol-js-js
npm install && npm run build
# Run individual examples
npm run example:basic # Core functionality demonstration
npm run example:streaming # Real-time streaming parser
npm run example:tools # Tool integration and custom functions
npm run example:channels # Multi-channel workflow patterns
npm run example:rendering # Output formatting and rendering options
Example Overview
| Example | Purpose | Key Features |
|---|---|---|
| basic-usage.ts | Core library functionality | Message creation, conversation rendering, token handling |
| streaming-parser.ts | Real-time processing | Streaming token processing, delta updates, error handling |
| tool-integration.ts | Tool system | Custom tools, namespaces, browser/Python tools |
| channel-management.ts | Multi-channel workflows | Channel routing, analysis dropping, conversation filtering |
| output-rendering.ts | Output formatting | Text/Markdown/HTML rendering, custom formatting, export options |
Integration Guides
- OpenAI Integration - Using with OpenAI API
- Azure OpenAI Integration - Azure OpenAI Service setup
- Local Models - Ollama, vLLM, and local inference
- Streaming UIs - Building real-time chat interfaces
- Error Handling - Comprehensive error management
Complete API Overview
Core Classes
| Class | Purpose | Key Methods |
|---|---|---|
| `Message` | Individual conversation messages | `system()`, `user()`, `assistant()`, `withChannel()` |
| `Conversation` | Message collections | `fromMessages()`, `validate()`, `getStats()`, `filter()` |
| `HarmonyEncoding` | Token encoding/decoding | `renderConversation()`, `parseMessages()`, `countTokens()` |
| `StreamableParser` | Real-time parsing | `process()`, `getLastContentDelta()`, `intoMessages()` |
Rendering System
| Class/Function | Purpose | Output Formats |
|---|---|---|
| `HarmonyRenderer` | Advanced rendering | Text, Markdown, HTML, Structured |
| `StreamingRenderer` | Incremental updates | Real-time UI support |
| `renderToText()` | Quick text output | Plain text with formatting |
| `renderToMarkdown()` | Documentation format | Markdown with headers |
| `renderToHTML()` | Web interfaces | HTML with CSS classes |
Tool Integration
| Function | Purpose | Built-in Tools |
|---|---|---|
| `createSystemContent()` | System configuration | Fluent API builder |
| `createToolDescription()` | Define tools | JSON schema validation |
| `createBrowserToolNamespace()` | Web browsing | Search, navigate, extract |
| `createPythonToolNamespace()` | Code execution | Execute, install packages |
Types and Enums
| Type | Values | Usage |
|---|---|---|
| `Role` | System, Developer, User, Assistant, Tool | Message attribution |
| `Channel` | Final, Analysis, Commentary | Response organization |
| `ReasoningEffort` | Low, Medium, High | System configuration |
Testing
# Run all tests
npm test
# Build the library
npm run build
# Run linting
npm run lint
# Format code
npm run format
Troubleshooting & FAQ
Common Issues
Q: Import errors when using the library
// ❌ This might fail
import { Message } from 'harmony-protocol-js/dist/message';
// ✅ Use this instead
import { Message } from 'harmony-protocol-js';
Q: Tiktoken encoding errors
# If you see tiktoken errors, try:
npm install tiktoken@latest
# For Node.js compatibility issues:
npm install --save-dev @types/node@latest
Q: TypeScript compilation issues
// Ensure your tsconfig.json includes:
{
"compilerOptions": {
"moduleResolution": "node",
"esModuleInterop": true,
"allowSyntheticDefaultImports": true
}
}
Q: Large conversation performance
// For large conversations, use streaming or filtering
const finalOnly = renderFinalOnly(conversation); // Faster
const streaming = new StreamingRenderer(); // Memory efficient
Best Practices
- Always validate conversations before encoding
- Use appropriate rendering format for your use case
- Handle errors gracefully with typed exceptions
- Cache rendered output when conversations don't change
- Use streaming for real-time interfaces
- Filter channels based on audience needs
Performance Tips
- Reuse `HarmonyEncoding` instances
- Use `StreamableParser` for large responses
- Enable analysis dropping for production
- Cache rendered outputs
- Use `renderFinalOnly()` for user interfaces
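The first tip, reusing encoding instances, amounts to a lazy singleton. A hedged sketch, with `loadEncodingStub` standing in for `loadHarmonyEncoding()` (assumed to be relatively expensive because it initializes tiktoken):

```typescript
let loadCount = 0;

// Stand-in for loadHarmonyEncoding(); the expensive setup would happen here.
function loadEncodingStub(): { name: string } {
  loadCount++;
  return { name: 'o200k_base' };
}

let cached: { name: string } | undefined;

// Every caller shares one instance; the loader runs at most once.
function getEncoding(): { name: string } {
  cached ??= loadEncodingStub();
  return cached;
}

const a = getEncoding();
const b = getEncoding();
console.log(a === b, loadCount); // true 1
```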
Getting Help
- 📖 Complete Documentation - Comprehensive guides
- 🔍 Examples - Working code samples
- 🐛 GitHub Issues - Bug reports and questions
- 📚 API Reference - Detailed method documentation
Contributing
We welcome contributions! Please see our contributing guidelines:
- Fork the repository and create a feature branch
- Add comprehensive tests for new functionality
- Update documentation for any API changes
- Follow TypeScript best practices and existing code style
- Submit a pull request with clear description
Development Setup
git clone https://github.com/terraprompt/harmony-protocol-js-js
cd harmony-protocol-js-js
npm install
npm run build
npm test
Adding Examples
New examples are always welcome! Follow the existing pattern:
- Create a `.ts` file in `examples/`
- Add comprehensive comments
- Include error handling
- Add to `package.json` scripts
- Document in `docs/examples.md`
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Acknowledgments
- OpenAI for creating the Harmony response format
- Rust Implementation for providing the specification
- tiktoken maintainers for tokenization support
- TypeScript Team for excellent tooling
- Open Source Community for contributions and feedback
Disclaimer
This is a reverse-engineered implementation for educational and research purposes. It is not affiliated with or endorsed by OpenAI. The Harmony protocol specification may change as OpenAI continues development.