ag
A persistent AI coding agent with memory. Any model via OpenRouter.
Built as a tool-calling loop with bash — inspired by How does Claude Code actually work?. Features streaming responses, parallel tool execution, permission prompts, and persistent memory.
Install
npx @iambarryking/ag # run directly (prompts for API key on first use)
npm install -g @iambarryking/ag # or install globally
Or from source:
git clone <repo>
cd simple-agent
npm install && npm run build && npm link
Usage
ag # interactive REPL (prompts before writes/commands)
ag -y # auto-approve all tool calls
ag "what files are here?" # one-shot mode (auto-approves)
ag -m openai/gpt-4o "help me" # specific model
ag -m openrouter/auto "help" # let OpenRouter pick
ag --stats # show memory status
ag --help # all options
On first run, ag prompts for your OpenRouter API key and saves it to ~/.ag/config.json. You can also set it via environment variable:
export OPENROUTER_API_KEY=sk-or-v1-...
CLI Options
-m, --model <model> Model ID (default: anthropic/claude-sonnet-4.6)
-k, --key <key> API key (or set OPENROUTER_API_KEY)
-s, --system <prompt> Custom system prompt
-b, --base-url <url> API base URL (default: OpenRouter; use for local LLMs)
-n, --max-iterations <n> Max tool-call iterations (default: 25)
-y, --yes Auto-approve all tool calls (skip confirmation prompts)
--stats Show memory file paths and status
-h, --help Show help
REPL Commands
All commands follow the pattern: /noun to show, /noun subcommand to act.
/help Show all commands
/model Show current model
/model <name> Switch model (persists to config)
/model search [query] Browse OpenRouter models
/memory Show all memory + stats
/memory global Show global memory
/memory project Show project memory
/memory clear project|all Clear memory
/plan Show current plan
/plan list List all plans
/plan use <name> Activate an older plan
/context Show context window usage
/context compact Force context compaction now
/config Show config + file paths
/config set <k> <v> Set a config value
/config unset <k> Remove a config value
/tools List loaded tools
/skill List installed skills
/skill search [query] Search skills.sh registry
/skill add <source> Install skill from registry
/skill remove <name> Uninstall a skill
/exit Exit
Tools
All action-based tools follow the pattern: tool(action, ...params).
| Tool | Actions | Purpose |
|---|---|---|
| bash | — | Run any shell command (dangerous patterns blocked) |
| file | read · list · write · edit | Read, browse, create, and edit files |
| memory | save | Persist a fact to global or project memory |
| plan | save · append · switch · list · read | Manage task plans |
| git | status · init · branch · commit · push | Git workflow |
| grep | search · find | Search file contents (regex), find files by glob |
| web | fetch · search | Fetch web pages, search for current info |
| skill | — | Activate a skill by name |
Custom Tools
Drop a .mjs file in a tools directory and it gets loaded at startup:
~/.ag/tools/ # global (all projects)
.ag/tools/ # project-local (overrides global if same name)
Each file exports a default tool object:
// ~/.ag/tools/weather.mjs
export default {
type: "function",
function: {
name: "weather",
description: "Get current weather for a city",
parameters: {
type: "object",
properties: { city: { type: "string", description: "City name" } },
required: ["city"]
}
},
execute: ({ city }) => {
// your logic here -- can be async
return `Weather in ${city}: sunny, 22C`;
}
};
That's it. No config, no registry. Use /tools in the REPL to see what's loaded.
Skills
Skills are reusable prompt instructions (with optional tools) that the agent activates on-demand. Browse and install from skills.sh:
/skill search frontend # search the registry
/skill add anthropic/skills@frontend # install
/skill # list installed
/skill remove frontend # uninstall
Skills are SKILL.md files with YAML frontmatter:
~/.ag/skills/ # global (all projects)
.ag/skills/ # project-local (overrides global)
---
name: my-skill
description: When to use this skill. The agent sees this to decide activation.
---
Your instructions here. The agent loads this content when the skill is activated.
Frontmatter fields: name (required), description (required), tools: true (look for tools.mjs alongside), always: true (always inject, don't wait for activation).
The agent sees skill names + descriptions in every prompt. When a task matches, it activates the skill automatically via the skill tool, loading the full instructions into context.
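Putting the optional fields together, a hypothetical SKILL.md might look like this (the skill name and instructions here are made up for illustration):

```markdown
---
name: code-review
description: Use when the user asks for a review of a diff, PR, or file.
tools: true
always: false
---
When reviewing code, check for correctness first, then style.
Cite file paths and line numbers in your findings.
```

With `tools: true`, a `tools.mjs` file next to this SKILL.md would be loaded alongside the instructions; with `always: true` instead, the instructions would be injected into every prompt rather than waiting for activation.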
Configuration
Persistent settings are stored in ~/.ag/config.json:
{
"apiKey": "sk-or-v1-...",
"model": "anthropic/claude-sonnet-4.6",
"baseURL": "https://openrouter.ai/api/v1",
"maxIterations": 25,
"tavilyApiKey": "tvly-..."
}
Set values via the REPL (/config set model openai/gpt-4o) or edit the file directly. Remove a value with /config unset <key> to revert to the default. CLI flags and environment variables always take priority over config file values.
For web search, get a free Tavily API key at tavily.com (no credit card needed). The agent prompts for it on first use, or set it manually:
export TAVILY_API_KEY=tvly-...
# or in the REPL:
/config set tavilyApiKey tvly-...
/config set TAVILY_API_KEY tvly-... # env var name also works
Memory
Three tiers, all plain markdown you can edit directly:
~/.ag/
config.json # settings: API key, default model, base URL
memory.md # global: preferences, patterns
skills/ # installed skills (from skills.sh or manual)
frontend/SKILL.md
tools/ # custom tools (.mjs files)
projects/
<id>/
memory.md # project: architecture, decisions
plans/ # timestamped plan files
2026-04-13T12-31-22-add-auth.md
history.jsonl # conversation history
All memory is injected into the system prompt on every API call (capped at ~6000 chars total to avoid context bloat). The agent reads it automatically and writes via the memory and plan tools.
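The ~6000-character cap could be implemented roughly like this. This is a hypothetical sketch, not ag's actual code — the `combineMemory` function, the section labels, and the trimming priority are all assumptions:

```javascript
// Hypothetical sketch: combine memory tiers into one system-prompt section
// under a total character cap (~6000 chars, per the README).
const MEMORY_CAP = 6000;

function combineMemory(globalMemory, projectMemory, cap = MEMORY_CAP) {
  // Assumption: project memory is more specific, so it gets budget first.
  const sections = [
    { label: "## Project memory", text: projectMemory },
    { label: "## Global memory", text: globalMemory },
  ];
  let remaining = cap;
  const out = [];
  for (const { label, text } of sections) {
    if (!text || remaining <= 0) continue;
    const budget = remaining - label.length - 2; // room for label + newlines
    if (budget <= 0) break;
    // Trim oversized sections (the ellipsis may overshoot the cap by 1 char).
    const body = text.length > budget ? text.slice(0, budget) + "…" : text;
    out.push(`${label}\n${body}`);
    remaining -= label.length + 2 + body.length;
  }
  return out.join("\n\n");
}
```

The design choice worth noting: trimming per-section (rather than truncating the concatenated blob) keeps the higher-priority tier intact even when the lower-priority tier is huge.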
Git workflow with memory
Save your ticket context and PR template to project memory, and the agent will use them when committing and pushing:
you> save to project memory: Current ticket: JIRA-123 Add user auth. PR template: ## What\n## Why\n## Testing
you> create a branch for this ticket and start working
The agent sees your memory context and will name branches, write commit messages, and format PR descriptions accordingly.
Local LLMs
Point ag at any OpenAI-compatible API:
ag -b http://localhost:11434/v1 "hello" # Ollama
ag -b http://localhost:1234/v1 "hello" # LM Studio
Or set it permanently:
# In the REPL:
/config set baseURL http://localhost:11434/v1
/config unset baseURL # back to OpenRouter default
Permissions
In REPL mode, ag prompts before executing mutating operations:
? bash: npm test (y/n) y
✓ [bash] All tests passed
? file(write): src/utils.ts (y/n) y
✓ [file] Wrote src/utils.ts (24 lines, 680B)
Always allowed (no prompt): file(read), file(list), grep(*), memory(*), plan(*), skill(*), git(status), web(search)
Prompted: bash, file(write), file(edit), git(commit/push/branch), web(fetch)
Always blocked: rm -rf /, fork bombs, sudo rm, pipe-to-shell (enforced in code regardless of approval)
Skip prompts with ag -y or --yes. One-shot mode (ag "query") auto-approves.
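The three tiers above could be modeled as a small classifier. A hypothetical sketch — the tier lists come from the README, but this function and its regexes are illustrative assumptions, not ag's actual code:

```javascript
// Hypothetical permission classifier mirroring the README's three tiers.
// Returns "allow", "prompt", or "block" for a proposed tool call.
const ALWAYS_ALLOWED_TOOLS = new Set(["grep", "memory", "plan", "skill"]);
const ALWAYS_ALLOWED_ACTIONS = new Set([
  "file:read", "file:list", "git:status", "web:search",
]);
// Illustrative patterns for the always-blocked category.
const BLOCKED_PATTERNS = [
  /rm\s+-rf\s+\//,          // rm -rf /
  /:\(\)\s*\{.*\};:/,       // classic fork bomb
  /sudo\s+rm/,              // sudo rm
  /curl[^|]*\|\s*(ba)?sh/,  // pipe-to-shell
];

function classify(tool, action, command = "") {
  if (tool === "bash" && BLOCKED_PATTERNS.some((re) => re.test(command))) {
    return "block"; // enforced regardless of -y / approval
  }
  if (ALWAYS_ALLOWED_TOOLS.has(tool)) return "allow";
  if (ALWAYS_ALLOWED_ACTIONS.has(`${tool}:${action}`)) return "allow";
  return "prompt"; // bash, file(write/edit), git(commit/push/branch), web(fetch)
}
```

Defaulting unknown calls to "prompt" rather than "allow" keeps new or custom tools on the safe side.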
Streaming
Responses stream token-by-token with progressive markdown rendering. Tool execution shows animated spinners:
⠋ thinking [1/25]
✓ [grep] src/agent.ts:42: export class Agent
⠋ thinking [2/25]
agent> The Agent class is defined in src/agent.ts...
Tools execute in parallel when the model returns multiple tool calls.
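Running a batch of tool calls in parallel can be sketched with `Promise.all`. A minimal illustration assuming each tool exposes an async `execute` — not ag's actual implementation:

```javascript
// Minimal sketch: when the model returns several tool calls in one turn,
// run them concurrently and pair each result with its call id.
async function runToolCalls(toolCalls, tools) {
  return Promise.all(
    toolCalls.map(async (call) => {
      const tool = tools[call.name];
      try {
        const result = await tool.execute(call.args);
        return { id: call.id, result: String(result) };
      } catch (err) {
        // Feed errors back to the model instead of crashing the loop.
        return { id: call.id, result: `Error: ${err.message}` };
      }
    })
  );
}
```

Catching per-call errors matters here: one failing tool should produce an error message the model can react to, not abort the whole batch.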
Workflow
- Environment context (date, OS, git branch, detected stack) is injected into every system prompt.
- A compact project file listing gives the model awareness of project structure.
- tool_choice: "auto" encourages tool use over conversational responses.
- Dangerous bash commands (find ~, rm -rf /, etc.) are blocked before execution.
- Tool results over 8KB are smart-truncated (first 50 + last 50 lines) to preserve context.
- For multi-step coding tasks, the agent creates a plan before starting and updates it as it goes.
- For simple questions, it just answers directly.
- At 25 iterations the REPL asks if you want to continue.
- At 90% context window usage, ag automatically summarizes older conversation messages to free space. Use /context compact to trigger manually. Only message history is compacted; system prompt, tools, and skills are unaffected.
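The 8KB smart truncation described above (keep the first 50 and last 50 lines) could look something like this — a hypothetical sketch under those stated numbers, not ag's actual code:

```javascript
// Hypothetical sketch of smart truncation for oversized tool results:
// keep the first and last 50 lines, note how many were dropped.
const MAX_BYTES = 8 * 1024;
const KEEP_LINES = 50;

function smartTruncate(text) {
  if (Buffer.byteLength(text, "utf8") <= MAX_BYTES) return text;
  const lines = text.split("\n");
  if (lines.length <= KEEP_LINES * 2) {
    // Fallback for a few very long lines: hard cut at the byte budget.
    return text.slice(0, MAX_BYTES);
  }
  const head = lines.slice(0, KEEP_LINES);
  const tail = lines.slice(-KEEP_LINES);
  const omitted = lines.length - KEEP_LINES * 2;
  return [...head, `… [${omitted} lines truncated] …`, ...tail].join("\n");
}
```

Keeping both ends (rather than just the head) preserves the two parts models tend to need most from command output: the invocation context at the top and the errors or summary at the bottom.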
When to use something else
- Claude Code -- if you have a subscription and want the full harness with MCP, sub-agents, and a polished UI. ag is not trying to replace it.
- aider -- if your workflow is git-centric (commit-per-change, diff-based editing).
- Cursor / Windsurf -- if you want IDE integration. ag is terminal-only.
ag is for when you want a hackable, persistent, model-agnostic agent you fully control.
Architecture
src/
cli.ts # entry point
cli/parser.ts # arg parsing + help
cli/repl.ts # interactive REPL (unified /noun commands)
core/agent.ts # the loop + skill activation
core/config.ts # persistent config (~/.ag/config.json)
core/context.ts # context window usage tracking
core/skills.ts # skill discovery, parsing, loading
core/registry.ts # skills.sh search + GitHub install
core/types.ts # interfaces
core/colors.ts # ANSI colors (respects NO_COLOR)
core/version.ts # version from package.json
memory/memory.ts # three-tier file memory
tools/file.ts # file reading + directory listing
tools/bash.ts # shell execution (with command safeguards)
tools/memory.ts # memory tool
tools/plan.ts # plan management tool
tools/git.ts # git operations tool
tools/grep.ts # code search + file find
tools/web.ts # web fetch + search tool
tools/skill.ts # skill activation tool
Zero npm dependencies. Node.js 18+ and TypeScript.
License
MIT