# ag

A persistent AI coding agent with memory. Any model via OpenRouter.
Built as a tool-calling loop around `bash`, inspired by *How does Claude Code actually work?*. It started as 60 lines and grew to ~600, because persistent memory, plans, and a REPL are worth the extra lines.
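The core pattern can be sketched in a few lines. This is illustrative, not ag's actual source: a real agent sends `messages` to the OpenRouter chat API, while here `callModel` is stubbed so the shape of the loop stays visible.

```javascript
// Illustrative sketch of the tool-calling loop pattern -- not ag's code.
import { execSync } from "node:child_process";

const tools = {
  bash: ({ command }) => execSync(command, { encoding: "utf8" }),
};

// Stub standing in for the LLM call. A real agent would POST `messages`
// to the chat API and parse the model's reply.
function callModel(messages) {
  if (messages.length === 1) {
    return { toolCall: { name: "bash", args: { command: "echo hello" } } };
  }
  return { content: "done" }; // no tool call: the model is finished
}

function runAgent(prompt, maxIterations = 25) {
  const messages = [{ role: "user", content: prompt }];
  for (let i = 0; i < maxIterations; i++) {
    const reply = callModel(messages);
    if (!reply.toolCall) return reply.content;        // final answer
    const { name, args } = reply.toolCall;
    const result = tools[name](args);                 // run the tool locally
    messages.push({ role: "tool", name, content: result });
  }
  return "(hit max iterations)";
}
```

Everything else (memory, plans, the REPL) is layered on top of this loop.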
## Install
```sh
npx @iambarryking/ag              # run directly (prompts for API key on first use)
npm install -g @iambarryking/ag   # or install globally
```

Or from source:

```sh
git clone <repo>
cd simple-agent
npm install && npm run build && npm link
```

## Usage
```sh
ag                              # interactive REPL
ag "what files are here?"       # one-shot mode
ag -m openai/gpt-4o "help me"   # specific model
ag -m openrouter/auto "help"    # let OpenRouter pick
ag --stats                      # show memory status
ag --help                       # all options
```

On first run, `ag` prompts for your OpenRouter API key and saves it to `~/.ag/config.json`. You can also set it via environment variable:

```sh
export OPENROUTER_API_KEY=sk-or-v1-...
```

## CLI Options
```
-m, --model <model>        Model ID (default: anthropic/claude-sonnet-4.6)
-k, --key <key>            API key (or set OPENROUTER_API_KEY)
-s, --system <prompt>      Custom system prompt
-b, --base-url <url>       API base URL (default: OpenRouter; use for local LLMs)
-n, --max-iterations <n>   Max tool-call iterations (default: 25)
--stats                    Show memory file paths and status
-h, --help                 Show help
```

## REPL Commands
```
/help                 Show all commands
/model <name>         Switch model (e.g. /model openai/gpt-4o)
/model                Show current model
/models [query]       List OpenRouter models (e.g. /models claude, /models gemini)
/tools                List loaded tools (built-in + custom)
/config               Show persistent config (API key, model, base URL)
/config set <k> <v>   Set a config value (e.g. /config set model openai/gpt-4o)
/stats                Memory status
/memory               Show global memory
/project              Show project memory
/plan                 Show current (latest) plan
/plan use <name>      Activate an older plan (copies as latest)
/plans                List all plans
/paths                Show memory file paths
/clear project        Clear project memory, plans, and history
/clear all            Clear everything including global memory
/exit                 Exit
```

## Tools
| Tool | Purpose |
|---|---|
| `bash` | Run any shell command. The universal tool. |
| `save_memory` | Persist a fact to global or project memory. |
| `save_plan` | Save a task plan (timestamped, latest auto-loaded). |
| `list_plans` | List or read previous plans. |
### Custom Tools
Drop a `.mjs` file in a tools directory and it gets loaded at startup:

```
~/.ag/tools/   # global (all projects)
.ag/tools/     # project-local (overrides global if same name)
```

Each file exports a default tool object:
```javascript
// ~/.ag/tools/weather.mjs
export default {
  type: "function",
  function: {
    name: "weather",
    description: "Get current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string", description: "City name" } },
      required: ["city"]
    }
  },
  execute: ({ city }) => {
    // your logic here -- can be async
    return `Weather in ${city}: sunny, 22C`;
  }
};
```

That's it. No config, no registry. Use `/tools` in the REPL to see what's loaded.
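Since `execute` can be async, a tool can await I/O before returning. A hedged sketch of an async custom tool; the `read_json` name and behavior are hypothetical, not shipped with ag:

```javascript
// ~/.ag/tools/read_json.mjs -- hypothetical async custom tool (not part
// of ag). Shows that execute may return a promise.
import { readFile } from "node:fs/promises";

const tool = {
  type: "function",
  function: {
    name: "read_json",
    description: "Read a local JSON file and return it pretty-printed",
    parameters: {
      type: "object",
      properties: { path: { type: "string", description: "Path to a .json file" } },
      required: ["path"]
    }
  },
  execute: async ({ path }) => {
    // Async work is fine here; the result string is handed back to the model.
    const data = JSON.parse(await readFile(path, "utf8"));
    return JSON.stringify(data, null, 2);
  }
};

export default tool;
```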
## Configuration
Persistent settings are stored in `~/.ag/config.json`:

```json
{
  "apiKey": "sk-or-v1-...",
  "model": "anthropic/claude-sonnet-4.6",
  "baseURL": "https://openrouter.ai/api/v1",
  "maxIterations": 25
}
```

Set values via the REPL (`/config set model openai/gpt-4o`) or edit the file directly. CLI flags and environment variables always take priority over config file values.
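That precedence can be expressed as a one-liner. This is a hypothetical helper illustrating the order described above, not ag's actual code:

```javascript
// Hypothetical sketch of setting resolution:
// CLI flag > environment variable > config file > built-in default.
function resolveSetting(cliValue, envValue, fileValue, fallback) {
  return cliValue ?? envValue ?? fileValue ?? fallback;
}
```

Nullish coalescing (`??`) only falls through on `null`/`undefined`, so a value that is explicitly set, even to an empty string, still wins over lower tiers.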
## Memory
Three tiers, all plain markdown you can edit directly:
```
~/.ag/
  config.json        # settings: API key, default model, base URL
  memory.md          # global: preferences, patterns
  projects/
    <id>/
      memory.md      # project: architecture, decisions
      plans/         # timestamped plan files
        2026-04-13T12-31-22-add-auth.md
      history.jsonl  # conversation history
```

All memory is injected into the system prompt on every API call (capped at ~6000 chars total to avoid context bloat). The agent reads it automatically and writes via `save_memory` and `save_plan`.
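The injection step might look roughly like this. It is a sketch under the ~6000-char cap mentioned above; the function and section names are made up, not ag's real code:

```javascript
// Illustrative sketch of memory injection: concatenate the memory tiers,
// truncate to a character budget, and append to the system prompt.
const MEMORY_CAP = 6000;

function buildSystemPrompt(base, globalMemory, projectMemory, latestPlan) {
  const memory = [
    "## Global memory", globalMemory,
    "## Project memory", projectMemory,
    "## Current plan", latestPlan,
  ].join("\n");
  // Cap the memory block so it never blows up the context window.
  return `${base}\n${memory.slice(0, MEMORY_CAP)}`;
}
```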
## Local LLMs
Point `ag` at any OpenAI-compatible API:

```sh
ag -b http://localhost:11434/v1 "hello"   # Ollama
ag -b http://localhost:1234/v1 "hello"    # LM Studio
```

Or set it permanently:

```sh
# In the REPL:
/config set baseURL http://localhost:11434/v1
```

## Workflow
- For multi-step coding tasks, the agent creates a plan before starting and updates it as it goes.
- For simple questions, it just answers directly.
- At 25 iterations the REPL asks if you want to continue.
## When to use something else
- Claude Code -- if you have a subscription and want the full harness with parallel tool calls, MCP, and a polished UI. ag is not trying to replace it.
- aider -- if your workflow is git-centric (commit-per-change, diff-based editing). ag doesn't know about git.
- Cursor / Windsurf -- if you want IDE integration. ag is terminal-only.
ag is for when you want a hackable, persistent, model-agnostic agent you fully control in ~600 lines of TypeScript.
## Architecture
```
src/
  cli.ts            # entry point
  cli/parser.ts     # arg parsing + help
  cli/repl.ts       # interactive REPL
  core/agent.ts     # the loop
  core/config.ts    # persistent config (~/.ag/config.json)
  core/types.ts     # interfaces
  core/colors.ts    # ANSI colors (respects NO_COLOR)
  memory/memory.ts  # three-tier file memory
  tools/bash.ts     # shell execution
  tools/memory.ts   # save_memory tool
  tools/plan.ts     # save_plan + list_plans
```

Zero npm dependencies. Node.js 18+ and TypeScript.
## License
MIT