ohwow
Send me a message for support at ogsus@ohwow.fun
A free and open-source local AI agent runtime. Full-featured out of the box with Ollama for local models. Cloud features (sync, OAuth integrations, cloud task dispatch, webhook relay) available with an ohwow.fun subscription.
Getting Started
Install
```
npm install ohwow -g
```

Requirements
- Node.js 20+
- Ollama for local models
- Optional: Anthropic API key (for Claude models)
- Optional: Playwright browsers (`npx playwright install chromium`) for browser automation
- Optional: a C++ compiler, needed on some platforms to build `better-sqlite3`
Launch
```
ohwow
```

First Launch
A setup wizard appears in your terminal. Point it at your Ollama instance to get started. To connect to ohwow.fun cloud, enter your license key (from the ohwow.fun dashboard, under Settings > License).
Config is saved to ~/.ohwow/config.json. You only do this once.
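The file might look something like this. The field names below are illustrative assumptions, not the documented schema; check your generated `~/.ohwow/config.json` for the real shape:

```json
{
  "ollama": { "host": "http://localhost:11434" },
  "port": 7700,
  "licenseKey": null,
  "channels": { "telegram": { "botToken": null } }
}
```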
What Happens at Startup
Once configured, the runtime:
- Loads config from `~/.ohwow/config.json`
- Spawns the daemon process
- Initializes the local SQLite database
- Starts the execution engine, orchestrator, and model router
- Connects messaging channels (WhatsApp, Telegram) if configured
- Starts the scheduler, proactive engine, and trigger evaluator
- Launches the HTTP server (default port 7700)
- Connects to the ohwow.fun control plane (enterprise)
- Opens a WebSocket for real-time updates
From here, agents execute tasks on your hardware using your own API keys. The dashboard sends the work, your machine does the thinking.
CLI Commands
| Command | What it does |
|---|---|
| `ohwow` | Start the TUI (default) |
| `ohwow --daemon` | Start the daemon in the foreground (for systemd/launchd/Docker) |
| `ohwow stop` | Stop the daemon |
| `ohow status` | Check daemon status (PID and port) |
| `ohwow logs` | Tail the daemon logs |
| `ohwow restart` | Restart the daemon |
TUI
The terminal UI opens into a chat interface with tab navigation. Use arrow keys or tab to switch between:
- Dashboard — overview of your workspace
- Agents — manage agent configs, memory, and capabilities
- Tasks — view and manage running/completed tasks
- Approvals — review pending items before execution
- Activity — live feed of everything happening
- Automations — webhook-based automation triggers
- Contacts — CRM with leads, customers, partners
- Settings — config, connections, license
Everything you see in the web dashboard is also here, running locally.
Web UI
The runtime serves a built-in React app at http://localhost:7700. Same capabilities as the TUI. Useful if you prefer a graphical interface or want to share access with your team on the local network. Override the port with the OHWOW_PORT env var.
Orchestrator
The orchestrator is a conversational assistant built into the runtime with 40+ tools. Open the Chat tab in the TUI, or use the web UI.
You can talk to it naturally:
| What you say | What happens |
|---|---|
| "Run the content writer on this week's blog post" | Dispatches a task to that agent immediately |
| "What failed today?" | Lists recent failed tasks with details |
| "Schedule outreach every weekday at 9am" | Creates a cron schedule for the agent |
| "Send a WhatsApp to the team: launching Friday" | Sends the message through your connected WhatsApp |
| "Plan out researching 5 new leads this week" | Creates a multi-step plan with agent assignments, waits for your approval |
| "Show me the business pulse" | Returns task stats, contact pipeline, costs, and streaks |
| "Create a project for the website redesign" | Creates a project with a Kanban board |
| "Move that task to review" | Moves a task between board columns |
The orchestrator covers: agents, tasks, projects, CRM (contacts, pipeline, events), scheduling, messaging (WhatsApp + Telegram), A2A connections, goal planning, deep research, analytics, and workflows. It can also switch your TUI tabs if you ask ("go to approvals").
Features
Agent Memory
After each task, key facts, skills, and feedback are extracted and stored locally. Memories are periodically consolidated to keep context sharp. On future tasks, relevant memories are retrieved via RAG and compiled into the agent's context. Agents improve the more they work. View any agent's memory from the Agents tab.
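The retrieval step can be sketched naively with term overlap. This is only a stand-in for illustration: the runtime uses RAG (embedding-based retrieval), and the memory record shape here is an assumption.

```javascript
// Naive sketch of memory retrieval: score stored memories against the new
// task text by shared terms and keep the top k. A real RAG pipeline would
// rank by embedding similarity instead of word overlap.
function retrieveMemories(memories, taskText, k = 3) {
  const terms = new Set(taskText.toLowerCase().split(/\W+/).filter(Boolean));
  return memories
    .map(m => ({
      m,
      score: m.text.toLowerCase().split(/\W+/).filter(w => terms.has(w)).length,
    }))
    .filter(x => x.score > 0)          // drop irrelevant memories
    .sort((a, b) => b.score - a.score) // most relevant first
    .slice(0, k)
    .map(x => x.m);
}

const memories = [
  { text: "Client prefers short blog posts" },
  { text: "Invoice template lives in Drive" },
  { text: "Blog posts get reviewed on Fridays" },
];
console.log(retrieveMemories(memories, "write the weekly blog post").map(m => m.text));
```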
Browser Automation
Agents can browse the web using Playwright. Navigation, clicking, form filling, screenshots, and content extraction. The browser launches on first use and runs headless by default. Set OHWOW_BROWSER_HEADLESS=false to watch it work.
WhatsApp and Telegram
Connect WhatsApp through a QR code scan in Settings (no Meta business API needed). Connect Telegram with a bot token. Once connected, incoming messages route to the orchestrator automatically. Your agents can reply, take action, or flag things for your attention. You control which chats are allowed.
Voice
Full voice pipeline with local and cloud options:
- Speech-to-Text: Voicebox (Whisper via local FastAPI server), WhisperLocal (via Ollama), or WhisperAPI (OpenAI cloud fallback)
- Text-to-Speech: VoiceboxTTS (local), Piper (local), or OpenAI TTS (cloud fallback)
A2A Protocol
Connect to external agents using the A2A protocol over JSON-RPC 2.0. Each agent publishes a card at /.well-known/agent-card.json describing its capabilities. Trust levels (read_only, execute, autonomous, admin) control what external agents can do. Scopes cover tasks, agents, results, and file access. Managed from the A2A tab or through the orchestrator.
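A trust-level check can be sketched as an ordered comparison. The card fields beyond the documented path, and the assumption that the four levels form a strict hierarchy in this order, are illustrative rather than taken from ohwow's implementation:

```javascript
// Hypothetical agent card and trust-level gate for an A2A connection.
const TRUST_LEVELS = ["read_only", "execute", "autonomous", "admin"];

// Example of what /.well-known/agent-card.json might contain (assumed shape).
const agentCard = {
  name: "external-researcher",
  protocol: "a2a/jsonrpc-2.0",
  capabilities: ["tasks.read", "results.read"],
};

// An external agent may perform an action only if its granted level is at
// least the level the action requires.
function allows(grantedLevel, requiredLevel) {
  return TRUST_LEVELS.indexOf(grantedLevel) >= TRUST_LEVELS.indexOf(requiredLevel);
}

console.log(agentCard.name, allows("execute", "read_only"));  // execute covers read_only
console.log(agentCard.name, allows("read_only", "execute"));  // read_only cannot execute
```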
Scheduling and Proactive Engine
Set agents or workflows to run on cron schedules. Create schedules through conversation ("schedule the analyst every Monday at 8am") or from the Schedules tab.
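A request like "every weekday at 9am" corresponds to the cron expression `0 9 * * 1-5`. As a sketch of how such an expression is evaluated, here is a minimal matcher; real cron parsers also handle lists and step values, and this is not ohwow's scheduler code:

```javascript
// Minimal five-field cron matcher: minute, hour, day-of-month, month,
// day-of-week. Supports "*", single values, and ranges ("1-5") only.
function cronMatches(expr, date) {
  const fields = expr.trim().split(/\s+/);
  const values = [
    date.getMinutes(),
    date.getHours(),
    date.getDate(),
    date.getMonth() + 1, // cron months are 1-12
    date.getDay(),       // 0 = Sunday
  ];
  return fields.every((f, i) => {
    if (f === "*") return true;
    if (f.includes("-")) {
      const [lo, hi] = f.split("-").map(Number);
      return values[i] >= lo && values[i] <= hi;
    }
    return Number(f) === values[i];
  });
}

// Monday 2025-01-06 at 09:00 matches "every weekday at 9am":
console.log(cronMatches("0 9 * * 1-5", new Date(2025, 0, 6, 9, 0)));  // true
// Sunday 2025-01-05 does not:
console.log(cronMatches("0 9 * * 1-5", new Date(2025, 0, 5, 9, 0)));  // false
```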
The proactive engine runs every 30 minutes and checks for overdue tasks, aging approvals, and idle agents. It generates nudges (suggestions, not auto-executions) so nothing falls through the cracks.
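The sweep described above can be sketched like this. The thresholds and record shapes are assumptions; the key point is that the output is a list of nudges, not actions:

```javascript
// Sketch of a proactive sweep: flag overdue tasks and approvals that have
// been pending too long. Timestamps are epoch milliseconds.
function sweep(now, tasks, approvals) {
  const DAY = 24 * 60 * 60 * 1000;
  const nudges = [];
  for (const t of tasks) {
    if (t.dueAt && t.dueAt < now && t.status !== "done") {
      nudges.push({ kind: "overdue_task", id: t.id });
    }
  }
  for (const a of approvals) {
    if (a.status === "pending" && now - a.createdAt > DAY) {
      nudges.push({ kind: "aging_approval", id: a.id });
    }
  }
  return nudges; // suggestions for the user, never auto-executed
}

const now = Date.now();
console.log(sweep(
  now,
  [{ id: "t1", dueAt: now - 1000, status: "running" }],
  [{ id: "a1", status: "pending", createdAt: now - 2 * 24 * 3600 * 1000 }],
));
```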
Goal Planning and Approvals
For complex goals, the orchestrator breaks them into multi-step plans with agent assignments and dependencies. Plans start as drafts. You review the steps, approve or reject, and track execution from the Plans tab. Rejected tasks can retry with your feedback included.
Projects and CRM
Organize tasks into projects with Kanban boards (backlog, todo, in progress, review, done). The built-in CRM tracks contacts (leads, customers, partners), logs events (calls, emails, meetings), and gives you pipeline analytics. All stored locally.
Automations and Triggers
Webhook-based automations that fire on external events. Configure field mapping to extract data from incoming payloads and route it to agents or workflows. Managed from the Automations tab.
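Field mapping can be sketched as extracting dot-paths from the incoming payload. The mapping syntax below is a common convention, assumed here for illustration, not ohwow's actual format:

```javascript
// Hypothetical webhook field mapping: each target field names a dot-path
// into the incoming JSON payload.
function applyFieldMapping(payload, mapping) {
  const get = (obj, path) =>
    path.split(".").reduce((o, key) => (o == null ? undefined : o[key]), obj);
  const out = {};
  for (const [field, path] of Object.entries(mapping)) {
    out[field] = get(payload, path);
  }
  return out;
}

// Example: route a (hypothetical) issue-tracker webhook to an agent task.
const payload = { issue: { title: "Checkout bug", author: { login: "sam" } } };
const mapping = { taskTitle: "issue.title", reporter: "issue.author.login" };
console.log(applyFieldMapping(payload, mapping));
// { taskTitle: 'Checkout bug', reporter: 'sam' }
```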
Workflows
DAG-based multi-agent execution graphs. Define sequences of agent tasks with conditions and branches. Workflows can be triggered manually, on a schedule, or via automation triggers.
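Resolving such a graph into an execution order is a topological sort. As a sketch (Kahn's algorithm; the node names and edge format are illustrative only):

```javascript
// Resolve a DAG of agent tasks into a valid execution order.
// nodes: array of names; edges: [from, to] pairs meaning "from before to".
function executionOrder(nodes, edges) {
  const indegree = Object.fromEntries(nodes.map(n => [n, 0]));
  const next = Object.fromEntries(nodes.map(n => [n, []]));
  for (const [from, to] of edges) {
    indegree[to]++;
    next[from].push(to);
  }
  const queue = nodes.filter(n => indegree[n] === 0); // no unmet dependencies
  const order = [];
  while (queue.length) {
    const n = queue.shift();
    order.push(n);
    for (const m of next[n]) if (--indegree[m] === 0) queue.push(m);
  }
  if (order.length !== nodes.length) throw new Error("cycle detected");
  return order;
}

console.log(executionOrder(
  ["research", "draft", "review", "publish"],
  [["research", "draft"], ["draft", "review"], ["review", "publish"]],
));
// [ 'research', 'draft', 'review', 'publish' ]
```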
MCP Servers
Integrate external tools via the Model Context Protocol. Supports stdio (subprocess) and HTTP (Streamable HTTP) transports. Authentication via OAuth 2.1, bearer tokens, or API keys. MCP tools are auto-adapted to the Anthropic SDK format and can be assigned per-agent or globally.
Code Sandbox
Agents can execute JavaScript in an isolated sandbox (Node.js vm module). No filesystem, network, or process access. 5-second default timeout, 30-second maximum. Safe globals only (Math, JSON, Date, Array, Object, Map, Set, etc.).
Web Search
Agents with web search enabled can search the web during task execution, powered by Anthropic's built-in search tool.
Local Models with Ollama
If you run Ollama locally, the runtime routes lightweight tasks (orchestration, memory extraction) to your local model instead of Claude. The model catalog includes 25+ models across 5 memory tiers with device-aware recommendations. Complex work (planning, agent tasks, browser automation) still goes to Claude. If Ollama goes down, everything falls back to Anthropic automatically.
Offline Mode
If ohwow.fun becomes unreachable, the runtime continues with cached agent configs. Tasks still execute, results still store locally. When connectivity returns, everything syncs back up.
Connected Mode (Cloud)
Connect to ohwow.fun to add cloud features on top of the free local runtime.
Cloud features (require connection):
- Cloud sync: agent configs sync from your dashboard
- Cloud task dispatch: receive tasks dispatched from the web
- OAuth integrations: Gmail, Slack, and other cloud-based integrations
- Webhook relay: cloud-proxied webhooks for external services
- Heartbeats and health monitoring
What stays local (always):
- Prompts and system instructions
- Agent outputs and full conversations
- Long-term agent memory
- CRM contacts and activity history
- WhatsApp and Telegram message history
- Browser session data and screenshots
How it works:
Activate with a license key from the ohwow.fun dashboard (Settings > License). The runtime connects to the control plane via long-polling, receives task dispatches, and sends back status updates. Heartbeats confirm the device is online. Each license is locked to a single device. The cloud dashboard gives you a web interface for managing agents, reviewing tasks, and monitoring your workspace without touching the terminal.
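The long-poll loop can be sketched with the transport stubbed out. Endpoint names and message shapes are not documented here, so only the loop structure below reflects the description; everything else is assumed:

```javascript
// Sketch of the control-plane poll loop: each fetch blocks server-side
// until a task dispatch (or a timeout returning null), then the task is
// handled and the loop polls again.
async function pollLoop(fetchDispatch, handleTask, { maxIterations = Infinity } = {}) {
  let i = 0;
  while (i++ < maxIterations) {
    const dispatch = await fetchDispatch(); // long-poll; null on timeout
    if (dispatch) await handleTask(dispatch);
  }
}

// Demo with a stubbed transport in place of the real control plane:
const queue = [{ id: "t1" }, null, { id: "t2" }];
const handled = [];
pollLoop(() => Promise.resolve(queue.shift() ?? null), t => handled.push(t), { maxIterations: 3 })
  .then(() => console.log(handled.map(t => t.id))); // [ 't1', 't2' ]
```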
All local features (agents, scheduling, WhatsApp, Telegram, A2A, browser automation, voice, CRM, automations, approvals) work without a cloud connection.
Headless / Daemon Mode
For servers, containers, or always-on deployments:
```
ohwow --daemon
```

Runs the daemon in the foreground (no TUI). Suitable for systemd, launchd, or Docker. Set OHWOW_HEADLESS=1 as an alternative. The web UI still runs normally. Configure through environment variables.
Configuration
Config lives at ~/.ohwow/config.json. Key environment variables:
| Variable | Purpose |
|---|---|
| `OHWOW_PORT` | HTTP server port (default 7700) |
| `OHWOW_HEADLESS` | Set to `1` for headless mode |
| `OHWOW_BROWSER_HEADLESS` | Set to `false` to show the browser |
| `ANTHROPIC_API_KEY` | Anthropic API key for Claude models |
Supported Models
| Model | Provider |
|---|---|
| Claude Opus 4.6 | Anthropic |
| Claude Sonnet 4.6 | Anthropic |
| Claude Haiku 4.5 | Anthropic |
| Any Ollama model | Local |
License
BSL 1.1 (Business Source License). Free to use, including production. You can't use it to build a competing product. Converts to Apache 2.0 on March 2, 2030. See LICENSE for details.