# WorkerMill CLI
AI coding agent with multi-expert orchestration. Works with any LLM provider.
The lightweight, zero-setup version of WorkerMill — the open-source orchestration platform for AI coding agents. Same multi-expert engine, directly in your terminal. No server, no Docker, no account.
Works with Ollama (fully local), Anthropic, OpenAI, Google.
## Quick Start
```shell
npx workermill
```

The first run launches a setup wizard — pick providers for workers, planner, and reviewer independently. Ollama is auto-detected (including WSL). Config is saved to `~/.workermill/cli.json`.
## Install
```shell
# Run without installing
npx workermill

# Or install globally
npm install -g workermill
workermill

# Check your setup
wm doctor
```

## Usage
```shell
# Interactive chat
workermill

# Skip permission prompts
workermill --trust

# Read-only research mode
workermill --plan

# Resume last conversation
workermill --resume

# Override provider/model
workermill --provider anthropic --model claude-sonnet-4-6

# Cap output tokens
workermill --max-tokens 4096

# Then use /build inside the CLI for multi-expert orchestration
# /build spec.md
# /build REST API with auth, React dashboard, Docker
```

## Features
- Multi-expert orchestration — `/build` decomposes tasks into stories, each assigned to a specialist persona
- Role-based model routing — Different models for workers, planner, and reviewer (e.g., Ollama for workers, Gemini for planning, Claude for review)
- 13 built-in tools — bash, read_file, write_file, edit_file, patch, glob, grep, ls, fetch, git, web_search, todo, sub_agent
- WORKERMILL.md — Project instructions file read by all agents. Also supports CLAUDE.md, .cursorrules
- MCP servers — Connect external tools via Model Context Protocol
- Hooks — Pre/post tool execution hooks for linting, formatting, etc.
- Custom commands — Drop `.md` files in `.workermill/commands/` for custom slash commands
- Persistent learnings — `::learning::` markers saved across sessions
- @mentions — `@file.ts` inlines code, `@dir/` inlines a tree, `@https://url` fetches content, `@image.png` sends multimodal
- Code review — Tech lead reads actual code diffs, with configurable revision cycles
- Bash guardrails — Blocks destructive commands and writes outside the project directory
- Permissions — Tab to cycle: Allow → Deny → Always allow → Trust all
- Session management — Persistent conversations with resume
- Cost tracking — Live in status bar with per-model pricing
- Auto-update — Notifies when a newer version is available
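As a sketch of the custom-commands feature: a markdown file dropped in `.workermill/commands/` becomes a slash command. The filename and body below are hypothetical — the README only specifies the location, not the file format:

```markdown
<!-- .workermill/commands/audit.md (hypothetical example; defines /audit) -->
Review the current uncommitted changes for security issues and missing tests.
Summarize findings as a bullet list before proposing any edits.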
## Commands
| Command | Description |
|---|---|
| `/build <task>` | Multi-expert orchestration — plans, executes, reviews |
| `/as <persona> <task>` | Run a task with a specific expert (e.g. `/as security_engineer review auth`) |
| `/retry` | Re-run the last build task |
| `/personas` | List all available experts, view/create custom personas |
| `/init` | Generate WORKERMILL.md for this project |
| `/settings` | View/change settings (review, ollama, etc.) |
| `/permissions` | Manage tool permissions (trust/ask/allow/deny) |
| `/undo` | Revert last build's changes (git stash or reset) |
| `/diff` | Preview uncommitted changes |
| `/model` | Show or switch model (`/model provider/model`) |
| `/plan` | Toggle read-only plan mode |
| `/trust` | Auto-approve all tools for this session |
| `/hooks` | View configured pre/post tool hooks |
| `/skills` | Custom slash commands from `.workermill/commands/` |
| `/chrome` | Open/close headless Chrome browser |
| `/voice` | Voice input — listens until silence |
| `/schedule` | Scheduled recurring tasks |
| `/update` | Check for updates |
| `/release-notes` | Show changelog |
| `/cost` | Session cost and token usage |
| `/status` | Session info |
| `/log` | Show recent CLI log entries |
| `/git` | Git branch and status |
| `/sessions` | List/switch sessions |
| `/editor` | Open $EDITOR for longer input |
| `/clear` | Reset conversation |
| `/quit` | Exit |
Shortcuts: `!command` runs shell directly, `ESC` cancels, `ESC ESC` rolls back the last exchange, `Shift+Tab` cycles permission mode, `Ctrl+C Ctrl+C` exits.
## Multi-Expert Orchestration
`/build` triggers multi-expert mode:
- Plans — Explores the codebase, designs stories as scope labels with dependencies and persona assignments. Workers receive the full original spec — the planner scopes, not rewrites.
- Executes — Each story is assigned to a specialist persona. Workers see `## Ticket Requirements — THIS IS YOUR SPEC` with your full task, plus their file scope.
- Reviews — Tech lead reviews actual code with a 3-tier decision: `approved`, `revision_needed`, or `rejected`. Bias toward approval — cosmetic issues don't block. Quality score (1-10) is informational.
- Revises — If revision is needed, only affected stories re-run with per-story feedback from the reviewer.
- Commits — Stages changes and commits (with your approval).
For single-expert tasks, use `/as <persona> <task>` — runs one expert with the full tool set and their specialized prompt.
Use `/retry` to re-plan the same task — the planner sees existing code and fills gaps.
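A spec passed to `/build spec.md` can be plain markdown. The file below is purely illustrative (its contents echo the usage example above); WorkerMill does not require any particular structure:

```markdown
<!-- spec.md — illustrative task spec for /build -->
Build a REST API with JWT auth and a React dashboard.
- POST /login and GET /me endpoints
- Dashboard shows the current user and token expiry
- Dockerfile for local deployment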
## Configuration
### Files
| File | Purpose |
|---|---|
| `WORKERMILL.md` | Project instructions — read by all agents (committed to repo) |
| `~/.workermill/cli.json` | Global config (providers, routing, review, hooks, MCP) |
| `~/.workermill/sessions/` | Conversation sessions |
| `~/.workermill/logs/` | Debug logs (per-project) |
| `~/.workermill/learnings/` | Persistent learnings (per-project) |
| `.workermill/config.json` | Per-project config overrides |
| `.workermill/commands/*.md` | Custom slash commands |
| `.workermill/personas/*.md` | Custom persona overrides |
### Example Config
```json
{
  "providers": {
    "ollama": {
      "model": "qwen3-coder:30b",
      "host": "http://localhost:11434",
      "contextLength": 65536
    },
    "anthropic": {
      "model": "claude-sonnet-4-6",
      "apiKey": "{env:ANTHROPIC_API_KEY}"
    },
    "google": {
      "model": "gemini-3.1-pro-preview",
      "apiKey": "{env:GOOGLE_API_KEY}"
    }
  },
  "default": "ollama",
  "routing": {
    "planner": "google",
    "tech_lead": "anthropic"
  },
  "review": {
    "enabled": true,
    "maxRevisions": 3
  },
  "hooks": {
    "post": [
      { "command": "npx eslint --fix", "tools": ["write_file", "edit_file"] }
    ]
  },
  "mcp": {
    "my-server": { "command": "npx", "args": ["-y", "my-mcp-server"] }
  }
}
```

### Settings
Change settings at runtime with `/settings`:
| Setting | Default | Command |
|---|---|---|
| Ollama host | auto-detected | `/settings ollama.host <url>` |
| Ollama context | 65536 | `/settings ollama.context <n>` |
| Review enabled | true | `/settings review.enabled true/false` |
| Max revisions | 3 | `/settings review.maxRevisions <n>` |
| Auto-revise | false | `/settings review.autoRevise true/false` |
## 12 Expert Personas
| Persona | Role |
|---|---|
| `architect` | System design and architecture |
| `backend_developer` | APIs, databases, server logic |
| `frontend_developer` | React, UI components, styling |
| `devops_engineer` | Docker, CI/CD, infrastructure |
| `qa_engineer` | Testing, quality gates |
| `security_engineer` | Auth, vulnerabilities, hardening |
| `data_ml_engineer` | Data pipelines, ML integration |
| `mobile_developer` | Mobile apps and responsive design |
| `tech_writer` | Documentation and API docs |
| `tech_lead` | Code review (used automatically) |
| `planner` | Task decomposition (used automatically) |
| `critic` | Plan quality review (used automatically) |
Use `/personas` to list all available personas. Use `/as <persona> <task>` to run a task with a specific expert.
Custom personas: add `.workermill/personas/my_persona.md` to your project or `~/.workermill/personas/` globally. Project personas override built-ins with the same name.
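A custom persona file is plain markdown. The name and structure below are assumptions — the README specifies only the location, so treat this as a sketch:

```markdown
<!-- .workermill/personas/accessibility_engineer.md (hypothetical) -->
You are an accessibility engineer. Audit UI changes against WCAG 2.1 AA,
prefer semantic HTML, and flag missing ARIA attributes or keyboard traps.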
## Requirements
- Node.js 20+
- An LLM provider (Ollama for local, or an API key for cloud providers)
## License
MIT