╭───────────────────────────────────╮
│ │
│ ██╗ █████╗ ███╗ ███╗ │
│ ██║ ██╔══██╗ ████╗ ████║ │
│ ██║ ███████║ ██╔████╔██║ │
│ ██ ██║ ██╔══██║ ██║╚██╔╝██║ │
│ ╚████╔╝ ██║ ██║ ██║ ╚═╝ ██║ │
│ ╚═══╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ │
│ │
│ developer-first AI CLI │
│ │
╰───────────────────────────────────╯

Jam CLI
The developer-first AI assistant for the terminal.
Ask questions · Trace call graphs · Review diffs · Generate patches · Run agentic tasks · Connect to Jira
All from your command line, powered by Ollama, OpenAI, Anthropic, Groq, or any compatible provider.
Getting Started · Commands · Configuration · Contributing · Security
Why Jam?
Most AI coding tools are built around a single vendor's model, require a browser or IDE plugin, and send your code to a remote server you don't control.
Jam is different by design:
- It runs entirely on your machine by default — your code never leaves your filesystem
- It is not tied to any single model or provider — you choose the engine; Jam is the harness
- It behaves like a proper Unix tool — pipeable, composable, and scriptable
- It treats code modification as a transaction — validate first, preview always, confirm before applying
- It is built to be contributed to — clean TypeScript, well-tested, architecture documented below
Highlights
| | Feature | Description |
|---|---|---|
| ⚡ | Streaming output | Responses begin rendering on the first token |
| 💬 | Interactive chat | Multi-turn sessions with history and resume |
| 📂 | Repo-aware | Explain files, search code, review diffs with full workspace context |
| 🔍 | Call graph tracing | jam trace maps any symbol's callers, callees, imports, and upstream chain with Mermaid diagrams |
| 🩹 | Patch workflow | Generate unified diffs, validate, preview, and apply with confirmation |
| 🤖 | Structured agent | jam run uses a typed plan-then-execute loop with read-before-write safety and shrinkage guards |
| 🎫 | Jira integration | Browse assigned issues, auto-create branches, generate implementation plans |
| ✅ | Verification pipeline | jam verify scans for secrets, runs checks, and assesses risk before you ship |
| 📝 | Smart commits | Auto-detects your project's commit convention (Conventional Commits, JIRA prefixes, etc.) |
| 🔌 | 5 providers | Ollama, OpenAI, Anthropic, Groq, Embedded — adapter pattern for adding any LLM |
| 🧠 | Auto-detection | --model claude-sonnet-4-20250514 auto-selects the right provider — no --provider needed |
| 💾 | Response caching | Identical prompts return cached results instantly — saves API calls and money |
| 📦 | Embedded inference | [Experimental] Tiny GGUF model runs directly in-process via node-llama-cpp |
| ⚙️ | Layered config | Global → repo → CLI flags; multiple named profiles |
| 🔐 | Secure secrets | OS keychain via keytar, env var fallback |
| 🐚 | Shell completions | Bash and Zsh |
| 🏠 | Privacy-first | Runs locally — your code never leaves your machine |
Design Philosophy
The best developer tools disappear into your workflow. They don't ask you to change how you work — they work the way you already do.
You own the model. Jam's ProviderAdapter is a clean interface — swap the AI engine with a config change, not a rewrite. No vendor lock-in, no model loyalty.
Your code stays private. The default is localhost. Nothing leaves your machine unless you explicitly point Jam at a remote provider. This isn't just a feature — it's the architecture.
Changes are transactions, not actions. jam patch validates with git apply --check before anything is touched, shows a full preview, and waits for explicit confirmation. No "undo" needed — changes never happen without your approval.
Unix composability. jam ask reads stdin, writes stdout, supports --json. It's a pipe stage, not a walled garden.
Security is configuration, not hope. Tool permissions (toolPolicy), allowed operations (toolAllowlist), and log redaction (redactPatterns) are declarative config — committable to .jamrc so your whole team inherits the same guardrails.
Who Is Jam For?
| Situation | Why Jam fits |
|---|---|
| You work in a security-sensitive codebase | Local-only by default — nothing leaves your machine |
| You want to use different models for different tasks | Named profiles + provider adapter — switch with --profile |
| You live in the terminal and resent leaving it | Every command is designed for the shell, not a browser tab |
| You're on a corporate network that blocks AI services | Point baseUrl at an internal Ollama instance and you're done |
| You want an AI tool that fits into CI/CD scripts | --json output, stdin support, non-zero exit codes on errors |
| You want to contribute to an AI tool without fighting vendor APIs | The hard parts (streaming, tool-calling, config) are already built cleanly |
Quick Start
Provider Support
| Provider | Status | Notes |
|---|---|---|
| Ollama | ✅ Default | Local inference via ollama serve |
| OpenAI | ✅ Supported | Requires OPENAI_API_KEY |
| Anthropic | ✅ Supported | Requires ANTHROPIC_API_KEY; Claude models with tool calling |
| Groq | ✅ Supported | Requires GROQ_API_KEY |
| Embedded | ⚗️ Experimental | In-process via node-llama-cpp, no server needed |
Provider auto-detection: You don't need to specify --provider if you pass a model name. Jam infers the provider automatically:
jam ask "Hello" --model claude-sonnet-4-20250514 # → anthropic
jam ask "Hello" --model gpt-4o # → openai
jam ask "Hello" --model llama-3.1-8b-instant # → groqPrerequisites
- Node.js 20+
- One of the following model backends:
  - Ollama running locally (`ollama serve`) + a pulled model (`ollama pull llama3.2`)
  - Embedded mode — no server needed! Uses `node-llama-cpp` to run a tiny GGUF model in-process. Install with `npm install node-llama-cpp` (auto-downloads a ~250 MB model on first run)
Install
# Try instantly — no install required
npx @sunilp-org/jam-cli doctor
# Global install from npm
npm install -g @sunilp-org/jam-cli
# Homebrew (macOS / Linux)
brew tap sunilp/tap
brew install jam-cli
# Or run from source
git clone https://github.com/sunilp/jam-cli.git
cd jam-cli
npm install
npm run build
npm link # makes `jam` available globally

First Run
jam init # interactive setup — detects providers, creates .jamrc + JAM.md
jam doctor # checks Node version, config, provider connectivity, ripgrep
jam auth login # validates connection to the configured provider

Commands
jam init
Interactive onboarding wizard. Detects available providers (Ollama running? API keys set?), lets you choose, creates .jamrc and JAM.md, and verifies connectivity.
jam init # interactive provider selection + config creation
jam init --yes # auto-select the best available provider

jam ask
One-shot question. Streams the response to stdout.
jam ask "What is the difference between TCP and UDP?"
# From stdin
echo "Explain recursion in one paragraph" | jam ask
# From a file
jam ask --file prompt.txt
# JSON output (full response + token usage)
jam ask "What is 2+2?" --json
# Override model (provider auto-detected)
jam ask "Hello" --model claude-sonnet-4-20250514
# Use a named profile
jam ask "Hello" --profile workOptions:
| Flag | Description |
|---|---|
| `--file <path>` | Read prompt from file |
| `--system <prompt>` | Override system prompt |
| `--json` | Machine-readable JSON output |
| `--model <id>` | Override model for this request |
| `--provider <name>` | Override provider |
| `--base-url <url>` | Override provider base URL |
| `--profile <name>` | Use a named config profile |
| `--no-tools` | Disable read-only tool use (file discovery) |
| `--no-color` | Strip ANSI colors from output (global flag) |
| `-q, --quiet` | Suppress all non-essential output (spinners, status lines, decorations) |
jam chat
Interactive multi-turn chat REPL (Ink/React TUI).
jam chat # new session
jam chat --name "auth refactor" # named session
jam chat --resume <sessionId> # resume a previous session

Keyboard shortcuts inside chat:
| Key | Action |
|---|---|
| `Enter` | Submit message |
| `Ctrl-C` (once) | Interrupt current generation |
| `Ctrl-C` (twice) | Exit chat |
Sessions are saved automatically to ~/.local/share/jam/sessions/ (macOS: ~/Library/Application Support/jam/sessions/).
jam trace
Trace the complete call graph of any function, class, or symbol across the codebase. Shows definition, callers, callees, import chain, upstream caller hierarchy, and a Mermaid diagram — all by default.
jam trace createProvider # full trace with AI analysis + Mermaid diagram
jam trace JamError --depth 1 # shallow upstream chain
jam trace loadConfig --no-ai # skip AI, just show the graph
jam trace buildCallGraph --json # structured JSON output

Example output:
Call Graph
────────────────────────────────────────────────────────────
createProvider(profile: Profile) → Promise<ProviderAdapter>
Defined: src/providers/factory.ts:30 [function]
Imported by:
│ src/commands/ask.ts:3
│ src/commands/commit.ts:5
│ src/commands/diff.ts:4
...
Called from:
├─ src/commands/ask.ts:138 createProvider(profile)
├─ src/commands/commit.ts:413 createProvider(profile)
├─ src/commands/diff.ts:56 createProvider(profile)
└─ src/commands/run.ts:72 createProvider(profile)
Calls into:
├─ OllamaAdapter({) [(workspace)]
├─ OpenAIAdapter({) [(workspace)]
├─ AnthropicAdapter({) [(workspace)]
└─ GroqAdapter({) [(workspace)]
Mermaid Diagram
────────────────────────────────────────────────────────────
```mermaid
graph TD
createProvider["createProvider ..."]
ask["ask.ts:138"] --> createProvider
commit["commit.ts:413"] --> createProvider
    ...
```

AI Analysis
────────────────────────────────────────────────────────────
Flow Summary: createProvider is the central factory for LLM adapters...
Options:
| Flag | Description |
|---|---|
| `--depth <n>` | Upstream chain depth (default: 3) |
| `--no-ai` | Skip AI analysis, show only the graph |
| `--json` | Output the raw call graph as JSON |

jam explain
Read one or more files and ask the model to explain them.
jam explain src/auth/middleware.ts
jam explain src/api/routes.ts src/api/handlers.ts
jam explain src/utils/retry.ts --json

jam search
Search the codebase using ripgrep (falls back to JS if rg is not installed).
jam search "TODO" # plain search, prints results
jam search "useEffect" --glob "*.tsx" # filter by file type
jam search "createServer" --ask # pipe results to AI for explanation
jam search "error handling" --max-results 50Options:
| Flag | Description |
|---|---|
| `--glob <pattern>` | Limit to files matching this glob (e.g. `*.ts`) |
| `--max-results <n>` | Max results (default: 20) |
| `--ask` | Send results to AI for analysis |
| `--json` | JSON output (with `--ask`) |
jam diff
Run git diff and optionally review it with AI.
jam diff # review working tree changes
jam diff --staged # review staged changes (ready to commit)
jam diff --path src/api/ # limit to a specific directory
jam diff --no-review # just print the raw diff, no AI
jam diff --staged --json # JSON output

jam review
Review a branch or pull request with AI.
jam review # review current branch against main
jam review --base develop # diff against a different base branch
jam review --pr 42 # review a specific PR (requires GitHub CLI)
jam review --json # JSON output

Options:
| Flag | Description |
|---|---|
| `--base <ref>` | Base branch or ref to diff against (default: main) |
| `--pr <number>` | Review a specific PR number (requires gh CLI) |
| `--json` | Machine-readable JSON output |
jam commit
Generate an AI-written commit message from staged changes and commit. Jam auto-detects your project's commit convention from git history — Conventional Commits, JIRA prefixes (PROJ-123: ...), bracket tickets ([PROJ-123] ...), or any consistent pattern.
jam commit # generate message, confirm, then commit
jam commit --dry # generate message only, do not commit
jam commit --yes # skip confirmation prompt
jam commit --amend # amend the last commit with a new AI message

Convention detection:
Jam samples the last 20 commits and detects the dominant pattern:
Detected convention: {ticket}: {type}({scope}): {description}
Generating commit message...
PROJ-456: feat(auth): add OAuth2 token refresh

You can also configure conventions explicitly in .jamrc:
{
"commitConvention": {
"format": "{ticket}: {type}: {description}",
"ticketPattern": "PROJ-\\d+",
"ticketRequired": true,
"types": ["feat", "fix", "chore", "docs"],
"rules": ["Always reference the module name in scope"]
}
}

Options:
| Flag | Description |
|---|---|
| `--dry` | Generate the message but do not commit |
| `--yes` | Auto-confirm without prompting |
| `--amend` | Amend the last commit with a new AI-generated message |
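The auto-detection described above boils down to scoring each candidate pattern against recent commit subjects and requiring a clear winner. A minimal TypeScript sketch of the idea (hypothetical, not Jam's actual detector):

```ts
// Hypothetical sketch of convention detection: score known patterns
// against recent commit subjects and pick the dominant one.
const PATTERNS: Record<string, RegExp> = {
  conventional: /^[a-z]+(\([\w-]+\))?: .+/, // feat(scope): ...
  jiraPrefix: /^[A-Z][A-Z0-9]+-\d+: .+/,    // PROJ-123: ...
  bracketTicket: /^\[[A-Z][A-Z0-9]+-\d+\] .+/, // [PROJ-123] ...
};

function detectConvention(subjects: string[]): string | undefined {
  const counts = new Map<string, number>();
  for (const subject of subjects) {
    for (const [name, re] of Object.entries(PATTERNS)) {
      if (re.test(subject)) counts.set(name, (counts.get(name) ?? 0) + 1);
    }
  }
  // Require a clear majority before trusting the detected pattern.
  const best = [...counts.entries()].sort((a, b) => b[1] - a[1])[0];
  return best && best[1] >= subjects.length / 2 ? best[0] : undefined;
}
```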
jam verify
Validation pipeline for changes. Runs checks, scans for secrets, and assesses risk before you ship.
jam verify # verify current working tree changes
jam verify --staged # verify only staged changes
jam verify --base main # diff against a base branch
jam verify --json # structured JSON report
jam verify --fail-on-risk high # exit 1 if risk >= high (for CI)
jam verify --no-ai # skip AI risk assessment

Checks run:
| Check | Weight | What it does |
|---|---|---|
| Diff sanity | — | Verifies changes exist and aren't too large |
| Secret scan | 0.4 | 10 patterns: API keys, tokens, passwords, private keys, connection strings |
| Typecheck | 0.2 | Runs tsc --noEmit (if tsconfig exists) |
| Lint | 0.1 | Runs the project's lint command |
| Tests | 0.25 | Runs the project's test suite |
| AI risk review | — | LLM analyzes the diff for security, logic, and quality issues |
Risk levels: low → medium → high → critical, computed from weighted check failures.
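Read concretely, each failed check adds its weight to a score, and thresholds map the score to a level. A minimal sketch, with illustrative threshold values that are assumptions rather than Jam's actual numbers:

```ts
type Risk = 'low' | 'medium' | 'high' | 'critical';

interface CheckResult {
  name: string;
  weight: number;  // e.g. secret scan 0.4, tests 0.25, typecheck 0.2
  passed: boolean;
}

// Sum the weights of failed checks, then bucket into a risk level.
// Threshold values here are illustrative assumptions.
function assessRisk(results: CheckResult[]): Risk {
  const score = results
    .filter((r) => !r.passed)
    .reduce((sum, r) => sum + r.weight, 0);
  if (score >= 0.6) return 'critical';
  if (score >= 0.4) return 'high';
  if (score >= 0.2) return 'medium';
  return 'low';
}
```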
jam patch
Ask the AI to generate a unified diff patch, validate it, and optionally apply it.
jam patch "Add input validation to the login function"
jam patch "Fix the off-by-one error in pagination" --file src/api/paginate.ts
jam patch "Add JSDoc comments to all public methods" --dry # generate only, don't apply
jam patch "Remove unused imports" --yes # auto-confirm applyFlow:
- Collects context (git status, current diff, specified files)
- Prompts the model for a unified diff
- Validates with `git apply --check`
- Shows the patch preview
- Asks for confirmation (unless `--yes`)
- Applies with `git apply`
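The transactional behavior comes from pairing `git apply --check` with the real `git apply`. A minimal sketch of the validate-then-apply steps above (a hypothetical helper, not Jam's source):

```ts
import { execFileSync } from 'node:child_process';

// Validate a unified diff first; apply only if validation passes.
// `git apply` reads the patch from stdin when no file is given.
function applyPatchSafely(patch: string): void {
  // Throws if the patch would not apply cleanly; nothing is touched.
  execFileSync('git', ['apply', '--check'], { input: patch });
  // The real flow shows a preview and asks for confirmation here.
  execFileSync('git', ['apply'], { input: patch });
}
```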
jam run
Agentic task workflow using a structured plan-then-execute loop. The model first generates a typed ExecutionPlan with ordered steps and success criteria, then executes each step with full safety enforcement.
jam run "Find all TODO comments and summarize them"
jam run "Check git status and explain what's changed"
jam run "Read src/config.ts and identify any security issues"
jam run "Add input validation to the login handler" # involves writes
jam run --yes "Rename all occurrences of userId to accountId" # auto-approve writes

Options:
| Flag | Description |
|---|---|
| `--yes` | Auto-approve all write tool confirmations (non-interactive) |
| `--model <id>` | Override model for this task |
| `--provider <name>` | Override provider |
| `--profile <name>` | Use a named config profile |
How it works:
- Plan — model generates an ordered list of steps (read → understand → write) with explicit success criteria
- Execute — each step runs in sequence; read-only results are cached to avoid redundant calls
- Read-before-write gate — write tools are automatically blocked until the target file has been read first, preventing silent overwrites of unread files
- Shrinkage guard — if a `write_file` produces a file suspiciously smaller than the original, the write is auto-reverted and the model is redirected
- Critic pass — after the loop completes, a critic evaluates the result and injects a correction if quality is insufficient
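For orientation, the typed plan from the first step might look roughly like this. The shape below is an assumption for illustration, not the actual ExecutionPlan type from the source:

```ts
// Hypothetical shape of the plan the model must produce before executing.
interface PlanStep {
  id: number;
  tool: string;               // e.g. 'read_file', 'write_file'
  args: Record<string, unknown>;
  rationale: string;
}

interface ExecutionPlan {
  goal: string;
  steps: PlanStep[];          // ordered: reads before writes
  successCriteria: string[];  // checked by the critic pass
}

// Read-before-write gate: block writes to files that were never read.
function canWrite(step: PlanStep, readFiles: Set<string>): boolean {
  if (step.tool !== 'write_file') return true;
  return readFiles.has(String(step.args.path));
}
```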
Available tools (model-callable):
| Tool | Type | Description |
|---|---|---|
| `read_file` | Read | Read file contents |
| `list_dir` | Read | List directory contents |
| `search_text` | Read | Search codebase with ripgrep |
| `git_status` | Read | Get git status |
| `git_diff` | Read | Get git diff |
| `write_file` | Write | Write to a file (prompts for confirmation) |
| `apply_patch` | Write | Apply a unified diff (prompts for confirmation) |
| `run_command` | Write | Execute a shell command (dangerous patterns blocked; prompts for confirmation) |
Write tools require confirmation unless toolPolicy is set to always or allowlist in config.
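For a sense of what the tool layer consumes, a model-callable tool can be described by a definition along these lines. This is an assumed shape for illustration; the real ToolDefinition interface lives in src/tools/:

```ts
import { readFile } from 'node:fs/promises';

// Assumed shape of a model-callable tool, for illustration only.
interface ToolDefinition {
  name: string;                 // e.g. 'read_file'
  kind: 'read' | 'write';       // write tools pass through the policy gate
  description: string;          // surfaced to the model
  run(args: Record<string, unknown>): Promise<string>;
}

const readFileTool: ToolDefinition = {
  name: 'read_file',
  kind: 'read',
  description: 'Read file contents',
  run: (args) => readFile(String(args.path), 'utf8'),
};
```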
jam jira
Connect to Jira (Cloud or on-prem Server/Data Center), browse assigned issues, and start working with AI-generated implementation plans.
jam jira issues # list issues assigned to you
jam jira issues --status "In Progress" # filter by status
jam jira issues --json # JSON output
jam jira view PROJ-123 # view full issue details
jam jira view PROJ-123 --json # JSON output
jam jira start PROJ-123 # fetch issue, create branch, generate plan
jam jira start PROJ-123 --no-branch # skip branch creation

jam jira start flow:
- Fetches full issue details (description, subtasks, comments)
- Creates a git branch from the issue key and summary (e.g. `proj-123-add-user-authentication`)
- Generates an AI implementation plan: summary, key files, steps, testing, edge cases
- Suggests the next step: `jam run "Implement PROJ-123: ..."`
Configuration (in .jamrc):
{
"jira": {
"baseUrl": "https://jira.company.com",
"email": "you@company.com",
"branchTemplate": "{type}/{key}-{summary}",
"defaultJql": "project = MYPROJ"
}
}

Set your token: `export JIRA_API_TOKEN=<your-token>` (or `apiToken` in config).
Branch templates support {key}, {type} (mapped: Bug→fix, Story→feat, Task→chore), and {summary}.
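Rendering a branch template is a substitution pass plus slugging the summary. A minimal sketch of the idea; the 50-character slug limit is an assumption, not Jam's documented behavior:

```ts
// Hypothetical rendering of a branch template like
// "{type}/{key}-{summary}" -> "feat/proj-123-add-user-authentication".
const TYPE_MAP: Record<string, string> = { Bug: 'fix', Story: 'feat', Task: 'chore' };

function renderBranch(
  template: string,
  issue: { key: string; type: string; summary: string },
): string {
  const slug = issue.summary
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // non-alphanumerics become dashes
    .replace(/^-+|-+$/g, '')     // trim leading/trailing dashes
    .slice(0, 50);
  return template
    .replace('{key}', issue.key.toLowerCase())
    .replace('{type}', TYPE_MAP[issue.type] ?? 'chore')
    .replace('{summary}', slug);
}
```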
jam cache
Manage the response cache. Identical prompts return cached results instantly, saving API calls.
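Conceptually, the cache is a content-addressed store: the key hashes everything that affects a response, and entries expire after the TTL. A hypothetical sketch of the idea (not Jam's storage code, which is file-based):

```ts
import { createHash } from 'node:crypto';

// Hypothetical TTL cache keyed by a hash of the full request.
interface CacheEntry { value: string; expiresAt: number; }

const store = new Map<string, CacheEntry>();

function cacheKey(provider: string, model: string, prompt: string): string {
  return createHash('sha256').update(`${provider}\n${model}\n${prompt}`).digest('hex');
}

function getCached(key: string): string | undefined {
  const entry = store.get(key);
  if (!entry || entry.expiresAt < Date.now()) return undefined;
  return entry.value;
}

function putCached(key: string, value: string, ttlSeconds = 3600): void {
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
}
```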
jam cache stats # show entry count, size, TTL, age range
jam cache stats --json # structured JSON
jam cache clear # delete all cached responses
jam cache prune # remove only expired entries

Caching is enabled by default with a 1-hour TTL. Configure in .jamrc:
{
"cacheEnabled": true,
"cacheTtlSeconds": 3600
}

jam auth
jam auth login # validate connectivity to the current provider
jam auth logout # remove stored credentials from keychain

jam config
jam config show # print merged effective config as JSON
jam config init # create .jam/config.json in the current directory
jam config init --global # create ~/.config/jam/config.json

jam context
Manage the JAM.md project context file. This file is auto-read by jam ask and jam chat to give the model awareness of your project's architecture, conventions, and goals.
jam context init # generate JAM.md at the workspace root
jam context init --force # overwrite an existing JAM.md
jam context show # display the current JAM.md contents

jam models list
jam models list # list models available from the current provider
jam models list --provider ollama --base-url http://localhost:11434

jam history
jam history list # list all saved chat sessions
jam history show <id> # show all messages in a session (first 8 chars of ID work)

jam completion install
jam completion install # auto-detects shell
jam completion install --shell bash # bash completion script
jam completion install --shell zsh # zsh completion script

Follow the printed instructions to add the completion to your shell.
jam doctor
Run system diagnostics:
jam doctor

Checks:
- Node.js version (>= 20)
- Config file is valid
- Provider connectivity (Ollama reachable)
- ripgrep availability (optional, JS fallback used if absent)
- keytar availability (optional, env vars used if absent)
Configuration
Config File Locations
Jam merges config in priority order (highest wins):
1. CLI flags (--provider, --model, etc.)
2. .jam/config.json or .jamrc (repo-level)
3. ~/.jam/config.json (user home-dir dotfile — preferred)
4. ~/.config/jam/config.json (XDG user config — fallback)
5. Built-in defaults

Recommended: Use `~/.jam/config.json` for your personal settings (provider, API keys, default model). Use `.jam/config.json` at the repo root for project-specific overrides (tool policy, redact patterns).
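The merge behaves like a priority-ordered object spread, lowest priority first. A minimal sketch of the idea (the real loader likely deep-merges nested sections such as profiles):

```ts
// Later sources win: defaults < XDG config < home dotfile < repo config < CLI flags.
// Illustrative shallow merge only.
function mergeConfig<T extends object>(...sources: Array<Partial<T>>): T {
  return Object.assign({}, ...sources) as T;
}

// const effective = mergeConfig(defaults, xdgConfig, homeConfig, repoConfig, cliFlags);
```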
Full Config Example
{
"defaultProfile": "default",
"profiles": {
"default": {
"provider": "ollama",
"model": "llama3.2",
"baseUrl": "http://localhost:11434",
"temperature": 0.7,
"maxTokens": 4096,
"systemPrompt": "You are a helpful coding assistant."
},
"cloud": {
"provider": "anthropic",
"model": "claude-sonnet-4-20250514"
},
"fast": {
"provider": "groq",
"model": "llama-3.1-8b-instant"
},
"embedded": {
"provider": "embedded",
"model": "smollm2-360m"
}
},
"toolPolicy": "ask_every_time",
"toolAllowlist": [],
"historyEnabled": true,
"logLevel": "warn",
"redactPatterns": ["sk-[a-z0-9]+", "Bearer\\s+\\S+"],
"cacheEnabled": true,
"cacheTtlSeconds": 3600,
"commitConvention": {
"format": "{type}({scope}): {description}",
"types": ["feat", "fix", "chore", "docs", "refactor", "test"],
"autoDetect": true
},
"jira": {
"baseUrl": "https://jira.company.com",
"email": "you@company.com",
"branchTemplate": "{type}/{key}-{summary}",
"defaultJql": "project = MYPROJ"
}
}

Config Fields
| Field | Type | Default | Description |
|---|---|---|---|
| `defaultProfile` | string | `"default"` | Active profile name |
| `profiles` | object | see below | Named provider/model configurations |
| `toolPolicy` | `ask_every_time` \| `always` \| `allowlist` \| `never` | `ask_every_time` | How write tools require confirmation |
| `toolAllowlist` | string[] | `[]` | Tools that never prompt (when policy is `allowlist`) |
| `historyEnabled` | boolean | `true` | Save chat sessions to disk |
| `logLevel` | `silent` \| `error` \| `warn` \| `info` \| `debug` | `warn` | Log verbosity |
| `redactPatterns` | string[] | `[]` | Regex patterns redacted from logs |
| `cacheEnabled` | boolean | `true` | Enable response caching |
| `cacheTtlSeconds` | number | `3600` | Cache time-to-live in seconds |
Profile Fields
| Field | Type | Description |
|---|---|---|
| `provider` | string | Provider name (`ollama`, `openai`, `anthropic`, `groq`, `embedded`) |
| `model` | string | Model ID (e.g. `llama3.2`, `claude-sonnet-4-20250514`, `gpt-4o`) |
| `baseUrl` | string | Provider API base URL |
| `apiKey` | string | API key (prefer keychain or env vars) |
| `temperature` | number | Sampling temperature (0-2) |
| `maxTokens` | number | Max tokens in response |
| `systemPrompt` | string | Default system prompt |
Commit Convention Fields
| Field | Type | Description |
|---|---|---|
| `format` | string | Message format template with `{type}`, `{scope}`, `{description}`, `{ticket}` placeholders |
| `types` | string[] | Allowed commit types (e.g. feat, fix, chore) |
| `ticketPattern` | string | Regex for ticket IDs (e.g. `"PROJ-\\d+"`) |
| `ticketRequired` | boolean | Whether ticket IDs are mandatory |
| `rules` | string[] | Extra instructions for the AI |
| `autoDetect` | boolean | Auto-detect convention from git history (default: true) |
Jira Fields
| Field | Type | Description |
|---|---|---|
| `baseUrl` | string | Jira instance URL (Cloud or on-prem) |
| `email` | string | Your Jira email (Cloud) or username (Server) |
| `apiToken` | string | API token (or set `JIRA_API_TOKEN` env var) |
| `defaultJql` | string | Default JQL filter appended to queries |
| `branchTemplate` | string | Branch name template with `{key}`, `{type}`, `{summary}` (default: `{key}-{summary}`) |
Initialize Config
# Guided setup — detects providers, creates config + JAM.md
jam init
# Or manual config creation:
jam config init --global # creates ~/.config/jam/config.json
jam config init # creates .jam/config.json (repo-level)

Using Profiles
# Use a specific profile
jam ask "Hello" --profile cloud
# Or auto-detect from model name
jam ask "Hello" --model claude-sonnet-4-20250514 # auto-selects anthropic providerEnvironment Variables
| Variable | Description |
|---|---|
| `JAM_API_KEY` | API key fallback (if keytar unavailable) |
| `JAM_BASE_URL` | Override provider base URL |
| `OPENAI_API_KEY` | OpenAI API key (used when `provider: openai`) |
| `ANTHROPIC_API_KEY` | Anthropic API key (used when `provider: anthropic`) |
| `GROQ_API_KEY` | Groq API key (used when `provider: groq`) |
| `JIRA_API_TOKEN` | Jira API token for `jam jira` commands |
Embedded Provider — Experimental
EXPERIMENTAL — The embedded provider is functional but quality is limited by small model sizes. For production workloads, use Ollama or OpenAI.
The embedded provider runs a tiny GGUF model directly in-process via node-llama-cpp. No Ollama installation, no server process, no network calls. Models are only downloaded when you explicitly set provider: "embedded" — it never downloads anything unless you opt in.
Setup
# Install the native dependency (optional — only needed for embedded mode)
npm install node-llama-cpp
# Switch to embedded provider
jam ask "Hello" --provider embedded
# Or set it permanently in your config
jam config init --global # then edit ~/.jam/config.json

How It Works
- On first use, Jam auto-downloads a small model (~250 MB) to `~/.jam/models/`
- The model loads in-process using llama.cpp bindings — no external server
- Streaming, tool-calling, and all standard commands work as usual
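For a sense of what in-process inference looks like, here is a minimal standalone sketch assuming node-llama-cpp's v3 API, independent of Jam's adapter code:

```ts
import { getLlama, LlamaChatSession } from 'node-llama-cpp';

// Load a local GGUF model and answer one prompt entirely in-process.
// API names follow node-llama-cpp v3 documentation (an assumption here).
const llama = await getLlama();
const model = await llama.loadModel({ modelPath: '/path/to/model.gguf' });
const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

console.log(await session.prompt('Explain git rebase in one sentence.'));
```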
Available Models
| Alias | Size (Q4_K_M) | Notes |
|---|---|---|
| `smollm2-135m` | ~100 MB | Ultra-light, very fast, basic quality |
| `smollm2-360m` | ~250 MB | Default — good quality-to-size ratio |
| `smollm2-1.7b` | ~1 GB | Best quality for embedded, needs more RAM |
# Use a specific embedded model alias
jam ask "Explain git rebase" --provider embedded --model smollm2-1.7b
# Or point to any local GGUF file
jam ask "Hello" --provider embedded --model /path/to/custom-model.ggufWhen to Use Embedded vs Ollama
| Scenario | Recommendation |
|---|---|
| No Ollama / can't install system software | Embedded |
| CI/CD pipeline, Docker container, SSH box | Embedded |
| Air-gapped / offline machine | Embedded (after initial model download) |
| Want best quality & larger models (7B+) | Ollama |
| GPU acceleration needed | Ollama |
| Already have Ollama running | Ollama |
Development
npm run dev -- ask "What is 2+2?" # run from source with tsx
npm run build # compile TypeScript to dist/
npm run typecheck # tsc --noEmit
npm run lint # ESLint
npm test # Vitest unit tests (289 tests)
npm run test:watch # watch mode
npm run test:coverage # coverage report

Project Structure
src/
├── index.ts # CLI entry point — command registration (Commander)
├── commands/ # One file per command (ask, chat, run, trace, verify, jira, …)
├── providers/ # LLM adapters — Ollama, OpenAI, Anthropic, Groq, Embedded
├── integrations/ # External service clients (Jira)
├── tools/ # Model-callable tools + registry + permission enforcement
├── config/ # Zod schema, cosmiconfig loader, built-in defaults
├── storage/ # Chat sessions + response cache (file-based)
├── ui/ # Ink/React TUI (chat REPL) + Markdown/streaming renderer
└── utils/ # Shared: streaming, call-graph, agent loop, tokens, secrets, logger

Adding a New Provider
- Implement `ProviderAdapter` from `src/providers/base.ts`:
import type { ProviderAdapter, ProviderInfo, CompletionRequest, StreamChunk } from './base.js';
export class MyProvider implements ProviderAdapter {
readonly info: ProviderInfo = { name: 'myprovider', supportsStreaming: true };
async validateCredentials(): Promise<void> { /* ... */ }
async listModels(): Promise<string[]> { /* ... */ }
async *streamCompletion(request: CompletionRequest): AsyncIterable<StreamChunk> { /* ... */ }
}

- Register in `src/providers/factory.ts`:
if (provider === 'myprovider') {
const { MyProvider } = await import('./myprovider.js');
return new MyProvider({ apiKey: profile.apiKey });
}

- (Optional) Add auto-detection in `inferProviderFromModel()`:
if (m.startsWith('my-model-')) return 'myprovider';

- Use:
jam ask "Hello" --provider myprovider
Contributing
Jam is intentionally built to be easy to extend. The architecture is layered, each concern is isolated, and the three main contribution surfaces — providers, tools, and commands — each have a clean interface to implement.
You don't need to understand the whole codebase to contribute. A new provider is one file. A new tool is one file. The patterns are already established and documented.
- Fork the repository
- Create your feature branch (`git checkout -b feat/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feat/amazing-feature`)
- Open a Pull Request
Please read our Contributing Guide for details on our code of conduct, development workflow, and pull request process.
Good First Issues
Look for issues labeled good first issue — these are great starting points for new contributors.
What the Codebase Looks Like
- Strict TypeScript throughout — no `any`, no guessing what a function does
- Tests colocated with source — `foo.ts` → `foo.test.ts`, using Vitest (289 tests across 23 files)
- One file per concern — each command, provider, and tool is self-contained
- Zod schema validation — config is validated at load time, not at runtime when it's too late
- Conventional Commits — the git log tells the story of the project
If you can read TypeScript, you can contribute to Jam.
Community
- Issues — Report bugs or request features
- Discussions — Ask questions, share ideas
- Code of Conduct — Our community standards
Security Policy
We take security seriously. If you discover a vulnerability, please do not open a public issue. Instead, follow the responsible disclosure process in our Security Policy.
Roadmap
- Ollama provider (default)
- OpenAI provider
- Anthropic Claude provider
- Groq provider
- Embedded provider (experimental)
- Provider auto-detection from model name
- Structured plan-then-execute agent (`jam run`)
- Smart commit conventions (auto-detect + config)
- Jira integration (`jam jira`)
- Verification pipeline (`jam verify`)
- Call graph tracing (`jam trace`)
- Response caching (`jam cache`)
- Actionable error messages with hints
- MCP (Model Context Protocol) support
- Plugin system for custom tools
- Token usage tracking and budgets
- Embeddings & vector search
- Web UI companion
Probable Enhancements
Ideas and directions under consideration. These range from quick wins to deep architectural changes. Contributions, RFCs, and discussion on any of these are welcome.
Plugin System
The tool registry (ToolRegistry.register()) already accepts any ToolDefinition, but tool discovery is hardcoded. A proper plugin system would allow external tools without modifying source.
- Local plugins — load `ToolDefinition` modules from `.jam/plugins/` or `~/.config/jam/plugins/`
- npm plugin packages — `jam plugin install @scope/jam-plugin-docker` discovers and registers tools at startup
- Plugin manifest — declarative `jam-plugin.json` with name, version, tool definitions, required permissions
- Sandboxed execution — plugins run with restricted filesystem/network access based on declared capabilities
Skills
Skills are named, composable mini-agents — each with a focused system prompt, a curated tool subset, and a defined output contract.
- Built-in skills — `refactor`, `test-writer`, `documenter`, `security-audit`
- User-defined skills — `.jam/skills/` directory with YAML/JSON skill definitions
- Composable — skills can call other skills (e.g., `refactor` invokes `test-writer` to verify changes)
Sub-Agents & Task Decomposition
jam run uses a structured plan-then-execute loop. The next evolution is true sub-agent decomposition — routing independent sub-tasks to specialist child agents.
- Planner agent — breaks a complex instruction into an ordered DAG of sub-tasks
- Parallel sub-agents — independent sub-tasks execute concurrently
- Fail-and-retry isolation — a failed sub-agent can be retried without restarting the entire task
MCP (Model Context Protocol) Support
MCP is an open standard for connecting AI models to external tools and data sources. Adding MCP client support would let Jam consume any MCP-compatible server.
Embeddings & Vector Search
The current past-session search uses keyword overlap (Jaccard scoring). Optional local embeddings would enable true semantic search.
- Local embedding model — use Ollama's embedding endpoint so nothing leaves your machine
- Semantic code search — `jam search "authentication flow"` returns semantically relevant code
- RAG pipeline — retrieve relevant code chunks before prompting
Cost & Token Tracking
TokenUsage is already captured per request but not aggregated or displayed.
- Session cost estimation — estimate cost based on provider pricing
- Budget limits — `maxCostPerSession`, `maxCostPerDay` in config
- Usage dashboard — `jam usage` command showing tokens consumed over time
Acknowledgments
Built with these excellent open source projects:
- Commander.js — CLI framework
- Ink — React for CLIs
- Ollama — Local LLM serving
- Zod — Schema validation
- marked — Markdown rendering
- cosmiconfig — Configuration loading
License
MIT License — Copyright (c) 2026-present Sunil Prakash. All rights reserved.
See LICENSE for the full license text.
Made with care by Sunil Prakash
If you find Jam useful, consider giving it a star on GitHub — it helps others discover the project!