# ag
A persistent AI coding agent with memory. Any model via OpenRouter.
Built as a tool-calling loop with bash -- inspired by *How does Claude Code actually work?*. Started as 60 lines, grew to ~600 because persistent memory, plans, and a REPL are worth the extra lines.
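The shape of that loop can be sketched in a few lines (illustrative only -- the real loop in `core/agent.ts` adds memory injection, skill activation, and the REPL; `callModel` here stands in for an OpenRouter chat-completions request, and `tools` maps tool names to plain functions):

```javascript
// Minimal sketch of a tool-calling loop, not the actual ag implementation.
async function runAgent(callModel, tools, userMessage, maxIterations = 25) {
  const messages = [{ role: "user", content: userMessage }];
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callModel(messages);     // ask the model what to do next
    messages.push(reply);
    if (!reply.tool_calls) return reply.content; // no tool call: final answer
    for (const call of reply.tool_calls) {       // run each requested tool
      const result = await tools[call.name](call.args);
      messages.push({ role: "tool", name: call.name, content: String(result) });
    }
  }
  return "(max iterations reached)";
}
```

Everything else -- memory, plans, skills -- is layered on top of this loop.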
## Install
```
npx @iambarryking/ag              # run directly (prompts for API key on first use)
npm install -g @iambarryking/ag   # or install globally
```

Or from source:
```
git clone <repo>
cd simple-agent
npm install && npm run build && npm link
```

## Usage
```
ag                              # interactive REPL
ag "what files are here?"       # one-shot mode
ag -m openai/gpt-4o "help me"   # specific model
ag -m openrouter/auto "help"    # let OpenRouter pick
ag --stats                      # show memory status
ag --help                       # all options
```

On first run, ag prompts for your OpenRouter API key and saves it to `~/.ag/config.json`. You can also set it via environment variable:
```
export OPENROUTER_API_KEY=sk-or-v1-...
```

## CLI Options
```
-m, --model <model>        Model ID (default: anthropic/claude-sonnet-4.6)
-k, --key <key>            API key (or set OPENROUTER_API_KEY)
-s, --system <prompt>      Custom system prompt
-b, --base-url <url>       API base URL (default: OpenRouter; use for local LLMs)
-n, --max-iterations <n>   Max tool-call iterations (default: 25)
--stats                    Show memory file paths and status
-h, --help                 Show help
```

## REPL Commands
All commands follow the pattern: `/noun` to show, `/noun subcommand` to act.

```
/help                      Show all commands
/model                     Show current model
/model <name>              Switch model (persists to config)
/model search [query]      Browse OpenRouter models
/memory                    Show all memory + stats
/memory global             Show global memory
/memory project            Show project memory
/memory clear project|all  Clear memory
/plan                      Show current plan
/plan list                 List all plans
/plan use <name>           Activate an older plan
/config                    Show config + file paths
/config set <k> <v>        Set a config value
/tools                     List loaded tools
/skill                     List installed skills
/skill search [query]      Search skills.sh registry
/skill add <source>        Install skill from registry
/skill remove <name>       Uninstall a skill
/exit                      Exit
```

## Tools
All action-based tools follow the pattern: `tool(action, ...params)`.

| Tool | Actions | Purpose |
|---|---|---|
| `bash` | — | Run any shell command |
| `memory` | `save` | Persist a fact to global or project memory |
| `plan` | `save`, `list`, `read` | Manage task plans |
| `git` | `status`, `init`, `branch`, `commit`, `push` | Git workflow |
| `web` | `fetch`, `search` | Fetch web pages, search for current info |
| `skill` | — | Activate a skill by name |
### Custom Tools
Drop a `.mjs` file in a tools directory and it gets loaded at startup:

```
~/.ag/tools/   # global (all projects)
.ag/tools/     # project-local (overrides global if same name)
```

Each file exports a default tool object:
```javascript
// ~/.ag/tools/weather.mjs
export default {
  type: "function",
  function: {
    name: "weather",
    description: "Get current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string", description: "City name" } },
      required: ["city"]
    }
  },
  execute: ({ city }) => {
    // your logic here -- can be async
    return `Weather in ${city}: sunny, 22C`;
  }
};
```

That's it. No config, no registry. Use `/tools` in the REPL to see what's loaded.
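Since `execute` can be async, a tool can await work before returning. A hypothetical example (the `slug` tool name and file are made up for illustration):

```javascript
// Hypothetical async custom tool: ~/.ag/tools/slug.mjs
const slugTool = {
  type: "function",
  function: {
    name: "slug",
    description: "Turn a title into a URL-friendly slug",
    parameters: {
      type: "object",
      properties: { title: { type: "string", description: "Text to slugify" } },
      required: ["title"]
    }
  },
  // async execute works the same as sync -- the agent awaits the result
  execute: async ({ title }) =>
    title.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-").replace(/(^-|-$)/g, "")
};
// save with `export default slugTool;` so ag can load it
```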
## Skills
Skills are reusable prompt instructions (with optional tools) that the agent activates on-demand. Browse and install from skills.sh:
```
/skill search frontend                 # search the registry
/skill add anthropic/skills@frontend   # install
/skill                                 # list installed
/skill remove frontend                 # uninstall
```

Skills are `SKILL.md` files with YAML frontmatter:

```
~/.ag/skills/   # global (all projects)
.ag/skills/     # project-local (overrides global)
```

```markdown
---
name: my-skill
description: When to use this skill. The agent sees this to decide activation.
---
Your instructions here. The agent loads this content when the skill is activated.
```

Frontmatter fields: `name` (required), `description` (required), `tools: true` (look for `tools.mjs` alongside), `always: true` (always inject, don't wait for activation).
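For instance, a skill using both optional fields might look like this (a hypothetical `deploy` skill, shown only to illustrate the frontmatter):

```
---
name: deploy
description: Use when the task involves deploying or releasing this project.
tools: true     # also load the tools.mjs next to this SKILL.md
always: true    # inject into every prompt instead of waiting for activation
---
Run the test suite before any deploy. Tag releases as vX.Y.Z.
```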
The agent sees skill names + descriptions in every prompt. When a task matches, it activates the skill automatically via the skill tool, loading the full instructions into context.
## Configuration
Persistent settings are stored in ~/.ag/config.json:
```json
{
  "apiKey": "sk-or-v1-...",
  "model": "anthropic/claude-sonnet-4.6",
  "baseURL": "https://openrouter.ai/api/v1",
  "maxIterations": 25,
  "tavilyApiKey": "tvly-..."
}
```

Set values via the REPL (`/config set model openai/gpt-4o`) or edit the file directly. CLI flags and environment variables always take priority over config file values.
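That precedence can be sketched as a one-line resolver (a hypothetical helper, not the actual `core/config.ts` code): CLI flag beats environment variable, which beats the config file, which beats the built-in default.

```javascript
// Hypothetical sketch of the setting-precedence rule described above.
function resolveSetting(flag, env, file, fallback) {
  return flag ?? env ?? file ?? fallback;
}

// No flag or env var set, so the config-file value wins:
resolveSetting(undefined, undefined, "anthropic/claude-sonnet-4.6", "openrouter/auto");
```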
For web search, get a free Tavily API key at tavily.com (no credit card needed). The agent prompts for it on first use, or set it manually:
```
export TAVILY_API_KEY=tvly-...
# or in the REPL:
/config set tavilyApiKey tvly-...
/config set TAVILY_API_KEY tvly-...   # the env var name also works
```

## Memory
Three tiers, all plain markdown you can edit directly:
```
~/.ag/
  config.json       # settings: API key, default model, base URL
  memory.md         # global: preferences, patterns
  skills/           # installed skills (from skills.sh or manual)
    frontend/SKILL.md
  tools/            # custom tools (.mjs files)
  projects/
    <id>/
      memory.md     # project: architecture, decisions
      plans/        # timestamped plan files
        2026-04-13T12-31-22-add-auth.md
      history.jsonl # conversation history
```

All memory is injected into the system prompt on every API call (capped at ~6000 chars total to avoid context bloat). The agent reads it automatically and writes via the memory and plan tools.
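A rough sketch of that cap (hypothetical -- the real `memory/memory.ts` may combine and truncate differently, and the exact limit is only stated as "~6000 chars"):

```javascript
// Hypothetical sketch: join the memory tiers, then trim before injecting
// into the system prompt.
const MEMORY_CHAR_CAP = 6000; // assumed value based on the "~6000 chars" note

function buildMemoryBlock(globalMemory, projectMemory, plan) {
  const combined = [globalMemory, projectMemory, plan].filter(Boolean).join("\n\n");
  return combined.length <= MEMORY_CHAR_CAP
    ? combined
    : combined.slice(0, MEMORY_CHAR_CAP) + "\n[memory truncated]";
}
```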
## Git workflow with memory
Save your ticket context and PR template to project memory, and the agent will use them when committing and pushing:
```
you> save to project memory: Current ticket: JIRA-123 Add user auth. PR template: ## What\n## Why\n## Testing
you> create a branch for this ticket and start working
```

The agent sees your memory context and will name branches, write commit messages, and format PR descriptions accordingly.
## Local LLMs
Point ag at any OpenAI-compatible API:
```
ag -b http://localhost:11434/v1 "hello"   # Ollama
ag -b http://localhost:1234/v1 "hello"    # LM Studio
```

Or set it permanently:

```
# In the REPL:
/config set baseURL http://localhost:11434/v1
```

## Workflow
- For multi-step coding tasks, the agent creates a plan before starting and updates it as it goes.
- For simple questions, it just answers directly.
- At 25 iterations the REPL asks if you want to continue.
## When to use something else
- Claude Code -- if you have a subscription and want the full harness with parallel tool calls, MCP, and a polished UI. ag is not trying to replace it.
- aider -- if your workflow is git-centric (commit-per-change, diff-based editing).
- Cursor / Windsurf -- if you want IDE integration. ag is terminal-only.
ag is for when you want a hackable, persistent, model-agnostic agent you fully control in ~600 lines of TypeScript.
## Architecture
```
src/
  cli.ts            # entry point
  cli/parser.ts     # arg parsing + help
  cli/repl.ts       # interactive REPL (unified /noun commands)
  core/agent.ts     # the loop + skill activation
  core/config.ts    # persistent config (~/.ag/config.json)
  core/skills.ts    # skill discovery, parsing, loading
  core/registry.ts  # skills.sh search + GitHub install
  core/types.ts     # interfaces
  core/colors.ts    # ANSI colors (respects NO_COLOR)
  core/version.ts   # version from package.json
  memory/memory.ts  # three-tier file memory
  tools/bash.ts     # shell execution
  tools/memory.ts   # memory tool
  tools/plan.ts     # plan management tool
  tools/git.ts      # git operations tool
  tools/web.ts      # web fetch + search tool
  tools/skill.ts    # skill activation tool
```

Zero npm dependencies. Node.js 18+ and TypeScript.
## License
MIT