JSPM


Runtime agent server for the ohwow platform

Package Exports

  • ohwow
  • ohwow/dist/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, file an issue with the original package (ohwow) asking it to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

ohwow

A local AI agent runtime. Free to use with Ollama for local models. Enterprise features (cloud dashboard sync, WhatsApp, Telegram, scheduling, proactive engine) unlock with an ohwow.fun subscription.

Getting Started

Install

npm install ohwow -g

Requirements

  • Node.js 20+
  • Ollama (for free tier / local models)
  • Optional: Anthropic API key (for Claude models)
  • Optional: Playwright browsers (npx playwright install chromium) for browser automation
  • Optional: C++ compiler may be needed on some platforms for better-sqlite3
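A quick way to sanity-check these prerequisites before installing (the version flags below are the standard ones for each tool; the Ollama check only matters if you plan to use the free tier):

```shell
# Verify Node.js 20+ is available
node --version          # should print v20.x or later

# Verify the Ollama CLI is installed (free tier / local models)
ollama --version

# Optional: install the Chromium build Playwright needs for browser automation
npx playwright install chromium
```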

Launch

ohwow

On first launch, a setup wizard appears in your terminal. For the free tier, just point it at your Ollama instance. For enterprise features, enter your license key (from the ohwow.fun dashboard, under Settings > License) and your Anthropic API key. These are saved locally so you only do this once.

After setup, the runtime opens into a TUI (terminal UI) with tabs for your dashboard, agents, tasks, approvals, activity, schedules, plans, and a chat interface. Use arrow keys or tab to navigate. Everything you see in the web dashboard is also here, running locally.

What Happens at Startup

Once configured, the runtime:

  1. Initializes a local database
  2. Connects to ohwow.fun and syncs your agent configurations
  3. Starts the orchestrator, scheduler, and proactive engine
  4. Connects messaging channels (WhatsApp, Telegram) if you've set them up
  5. Launches a local web UI
  6. Begins polling for tasks dispatched from the dashboard

From here, agents execute tasks on your hardware using your own API key. The dashboard sends the work; your machine does the thinking.

Using the Orchestrator

The orchestrator is a conversational assistant built into the runtime with 40+ tools. Open the Chat tab in the TUI, or use the web UI from your browser.

You can talk to it naturally. Some examples of what it can do:

  • "Run the content writer on this week's blog post" → Dispatches a task to that agent immediately
  • "What failed today?" → Lists recent failed tasks with details
  • "Schedule outreach every weekday at 9am" → Creates a cron schedule for the agent
  • "Send a WhatsApp to the team: launching Friday" → Sends the message through your connected WhatsApp
  • "Plan out researching 5 new leads this week" → Creates a multi-step plan with agent assignments and waits for your approval
  • "Show me the business pulse" → Returns task stats, contact pipeline, costs, and streaks
  • "Create a project for the website redesign" → Creates a project with a Kanban board
  • "Move that task to review" → Moves a task between board columns

The orchestrator covers: agents, tasks, projects, CRM (contacts, pipeline, events), scheduling, messaging (WhatsApp + Telegram), A2A connections, goal planning, deep research, analytics, and workflows. It can also switch your TUI tabs if you ask ("go to approvals").

Features

Agent Memory

After each task, key facts, skills, and feedback are extracted and stored locally. These memories are compiled into the agent's context on future tasks. Agents improve the more they work. You can view any agent's memory from the Agents tab.

Browser Automation

Agents can browse the web using Playwright. Navigation, clicking, form filling, screenshots, and content extraction. The browser launches on first use and runs headless by default. Set OHWOW_BROWSER_HEADLESS=false to watch it work.
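For example, to watch the browser instead of running it headless (`OHWOW_BROWSER_HEADLESS` is the variable named above):

```shell
# Run the browser in headed mode so you can watch agents work
export OHWOW_BROWSER_HEADLESS=false
ohwow
```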

WhatsApp and Telegram

Connect WhatsApp through a QR code scan in Settings (no Meta business API needed). Connect Telegram with a bot token. Once connected, incoming messages route to the orchestrator automatically. Your agents can reply, take action, or flag things for your attention. You control which chats are allowed.

Agent-to-Agent (A2A)

Connect to external agents using the A2A protocol. Each agent publishes a card describing its capabilities. You set trust levels to control what external agents can do. Managed from the A2A tab or through the orchestrator.

Scheduling

Set agents or workflows to run on cron schedules. Create schedules through conversation ("schedule the analyst every Monday at 8am") or from the Schedules tab. Toggle them on and off as needed.
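Assuming the scheduler accepts standard five-field cron syntax (minute, hour, day-of-month, month, day-of-week), the conversational examples above correspond to expressions like:

```shell
# minute hour day-of-month month day-of-week
# 0 9 * * 1-5   → every weekday at 9:00am   ("schedule outreach every weekday at 9am")
# 0 8 * * 1     → every Monday at 8:00am    ("schedule the analyst every Monday at 8am")
# 30 17 * * 5   → every Friday at 5:30pm
```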

Goal Planning

For complex goals, the orchestrator can break them into multi-step plans with agent assignments and dependencies. Plans start as drafts. You review the steps, approve or reject, and track execution from the Plans tab.

Approval Workflows

Some tasks pause for your sign-off before executing. The Approvals tab shows pending items. Approve to proceed, or reject with feedback. Rejected tasks can retry with your notes included.

Projects and CRM

Organize tasks into projects with Kanban boards (backlog, todo, in progress, review, done). The built-in CRM tracks contacts (leads, customers, partners), logs events (calls, emails, meetings), and gives you pipeline analytics. All stored locally.

Local Models with Ollama

If you run Ollama locally, the runtime can route lightweight tasks to your local model instead of Claude. Complex work still goes to Claude. If Ollama goes down, everything falls back automatically.
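To make a local model available for this routing, pull one with Ollama's standard CLI (the model name below is just an example; any Ollama model works):

```shell
# Download a small local model for lightweight tasks
ollama pull llama3.2

# Confirm it's installed and ready
ollama list
```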

Web Search

Agents with web search enabled can search the web during task execution, powered by Anthropic's built-in search tool.

Offline Mode

If ohwow.fun becomes unreachable, the runtime continues with cached agent configs. Tasks still execute, and results are still stored locally. When connectivity returns, everything syncs back up.

What Stays Local

The runtime syncs agent configurations from ohwow.fun and reports back only operational metadata: task titles, status, token counts, and costs. Everything else stays on your machine:

  • Prompts and system instructions
  • Agent outputs and full conversations
  • Long-term agent memory
  • CRM contacts and activity history
  • WhatsApp and Telegram message history
  • Browser session data and screenshots

This is the core of the Enterprise plan. Your business data never leaves your infrastructure.

Web UI

The runtime also serves a web UI accessible from any browser on your network. Same capabilities as the TUI. Useful if you prefer a graphical interface or want to share access with your team locally.

Headless Mode

For servers, containers, or always-on deployments where you don't need a terminal interface:

ohwow --headless

In headless mode, configure through environment variables. The web UI still runs normally. See the configuration docs for available options.
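A minimal headless setup might look like the sketch below. `ANTHROPIC_API_KEY` is the standard variable used by the Anthropic SDK, and `OHWOW_BROWSER_HEADLESS` is documented above; `OHWOW_LICENSE_KEY` is a hypothetical placeholder name, so check the configuration docs for the actual variables.

```shell
# Hedged sketch; variable names other than ANTHROPIC_API_KEY and
# OHWOW_BROWSER_HEADLESS are placeholders, not confirmed names
export ANTHROPIC_API_KEY=sk-ant-...    # standard Anthropic SDK variable
export OHWOW_LICENSE_KEY=...           # placeholder for your ohwow.fun license key
export OHWOW_BROWSER_HEADLESS=true     # headless is the default anyway

ohwow --headless
```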

Supported Models

Model               Provider
Claude Opus 4.6     Anthropic
Claude Sonnet 4.5   Anthropic
Claude Haiku 4      Anthropic
Any Ollama model    Local

License

BSL 1.1 (Business Source License). Free to use, including production. You can't use it to build a competing product. Converts to Apache 2.0 on March 2, 2030. See LICENSE for details.