# pi-memctx
Local-first memory context for Pi coding agents.
What it does • Install • Setup • Context • Tools • Commands • Docs • Development
Automatic memory context injection for the Pi coding agent.
Your agent forgets everything between sessions. Every new conversation starts from zero — re-discovering project structure, re-reading conventions, re-asking about deploy procedures. pi-memctx fixes this.
## The problem
Without persistent memory, coding agents waste time and tokens on every session:
```
You: "How do I deploy to production?"
Agent: Let me explore the project...
  $ find . -name "*.yml" | grep deploy   # scanning...
  $ cat .github/workflows/ci.yaml        # reading...
  $ ls kubernetes/                       # more scanning...
  $ cat README.md                        # still looking...
→ 30+ seconds, 8 tool calls, misses key details
```

## The fix

pi-memctx loads project context before the agent starts thinking:
```
You: "How do I deploy to production?"
Agent: Based on your deploy runbook:
  1. Push to main triggers GitHub Actions
  2. CI builds → Docker → ECR → Helm values update
  3. ArgoCD auto-syncs to staging
  4. Production requires manual approval in ArgoCD
→ 5 seconds, 0 tool calls, all key details correct
```

## Measured impact
Run `bash benchmark/setup.sh && bash benchmark/run.sh` to measure on your own project.
Typical results across 5 common tasks:
| Metric | Without | With pi-memctx | Gain |
|---|---|---|---|
| Tool calls per task | ~6 | ~1 | 80% fewer |
| Correct facts in response | ~40% | ~95% | 2.4× better |
| Time to answer | ~30s | ~5s | 6× faster |
| Follow-up prompts needed | ~3 | ~0 | First-pass accuracy |
### What this means for your team
| If your team runs... | You save... |
|---|---|
| 10 agent tasks/day | ~500K tokens/month, ~25 min/month |
| 20 agent tasks/day | ~1M tokens/month, ~50 min/month |
| 50 agent tasks/day | ~2.5M tokens/month, ~2 hours/month |
Fewer tokens = lower API cost. Better answers = less rework. Faster responses = less waiting.
## Install
```
pi install git:github.com/weauratech/pi-memctx
```

## Quick start
### 1. Generate a pack from your project
```
cd /path/to/your/repos
pi -e pi-memctx

# Inside pi:
/pack-generate
```

This scans your repos for `README.md`, `CLAUDE.md`, `go.mod`, and `package.json`, and builds a memory pack automatically.
### 2. Let the agent learn organically
As you work, the agent discovers and saves knowledge:
```
You: "remember that we use pgx instead of database/sql"
Agent: Saved decision: pgx-over-database-sql
       → packs/my-project/50-decisions/pgx-over-database-sql.md
```

The pack grows over time with real operational knowledge.
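Saved memories are plain Markdown files with frontmatter, so you can read and edit them by hand. A sketch of what the resulting decision file might contain (the frontmatter fields shown here are illustrative, not a documented schema):

```markdown
---
type: decision
title: pgx-over-database-sql
---

# Use pgx instead of database/sql

We standardize on pgx for PostgreSQL access: better performance,
native support for PostgreSQL types, and built-in connection pooling.
```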
### 3. Knowledge persists across sessions
Next session, the agent already knows:
```
You: "set up a new database connection"
Agent: Based on your conventions, I'll use pgx with connection pooling
       (per your decision in pgx-over-database-sql)...
```

## How it works
```
pi starts → detect pack for cwd → load context
                                      │
user sends prompt ────────────────────┤
                                      │
  1. Search pack for relevant memories (qmd semantic or grep)
  2. Build prioritized context (manifest → context → search → actions → decisions → runbooks)
  3. Inject into system prompt (16K char budget)
                                      │
agent responds ───────────────────────┤
                                      │
  4. Agent can save learnings (memctx_save)
  5. Session handoff captured on compaction
```

## Context priority
Not everything fits. Sections are included by priority — lower-priority content is trimmed first:
| Priority | What | Budget |
|---|---|---|
| 1 | Pack manifest + indexes | 2,000 chars |
| 2 | Context packs (stack, conventions) | 3,000 chars |
| 3 | Search results for current prompt | 2,500 chars |
| 4 | Recent actions | 2,000 chars |
| 5 | Decisions | 2,000 chars |
| 6 | Runbooks | 2,000 chars |
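The trimming logic above can be sketched as a priority walk over per-section budgets. This is an illustrative sketch, not pi-memctx's actual implementation; the `Section` type and `buildContext` name are made up for the example:

```typescript
// Each section carries a priority (lower = more important) and a
// per-section character budget; the overall prompt has a total cap.
type Section = { priority: number; budget: number; content: string };

function buildContext(sections: Section[], totalBudget = 16_000): string {
  const parts: string[] = [];
  let used = 0;
  // Highest priority first, so low-priority sections are the ones
  // dropped when the total budget runs out.
  for (const s of [...sections].sort((a, b) => a.priority - b.priority)) {
    const chunk = s.content.slice(0, s.budget); // per-section cap
    if (used + chunk.length > totalBudget) break; // total cap reached
    parts.push(chunk);
    used += chunk.length;
  }
  return parts.join("\n\n");
}
```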
## Pack structure
Packs are just Markdown files with frontmatter. Edit them in any editor or Obsidian.
```
~/.pi/agent/memory-vault/packs/my-project/
  00-system/
    pi-agent/
      memory-manifest.md     # Pack entrypoint
      resource-map.md        # Repos, services, environments
      indexes/
        context-index.md     # Links to context packs
        decision-index.md    # Links to decisions
        runbook-index.md     # Links to runbooks
  20-context/
    backend.md               # Stack, architecture, conventions
    frontend.md              # Framework, components, build commands
  50-decisions/
    001-hexagonal-arch.md    # Why we chose this architecture
    002-use-pgx.md           # Why pgx over database/sql
  70-runbooks/
    deploy.md                # Step-by-step deploy procedure
    terraform.md             # Infrastructure operations
```

## Tools
### memctx_search
Search across pack files:

```
use memctx_search to find information about deploy
```

Modes: `keyword` (fast), `semantic` (2s), `deep` (10s).

Install qmd for semantic search: `npm install -g @tobilu/qmd`

Without qmd, search uses keyword grep (still works, just less smart).
### memctx_save
Persist learnings to the active pack:

```
save this as a decision: we use integer cents for all monetary values
```

Types: `observation`, `decision`, `action`, `runbook`, `context`.
Safety: automatically blocks secrets, tokens, API keys, private keys.
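Secret blocking of this kind is usually a pattern scan run before the file is written. A minimal sketch of the idea — the patterns and the `containsSecret` name are hypothetical, not pi-memctx's actual rules:

```typescript
// Reject a memory if it matches any known secret shape.
const SECRET_PATTERNS: RegExp[] = [
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,         // PEM private keys
  /\b(ghp|gho|ghs)_[A-Za-z0-9]{36}\b/,          // GitHub tokens
  /\bAKIA[0-9A-Z]{16}\b/,                       // AWS access key IDs
  /\b(api[_-]?key|token|secret)\s*[:=]\s*\S+/i, // key=value style secrets
];

function containsSecret(text: string): boolean {
  return SECRET_PATTERNS.some((p) => p.test(text));
}
```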
## Commands
| Command | What |
|---|---|
| `/pack` | Switch packs (picker or `/pack name`) |
| `/pack-generate` | Generate pack from repo directory |
### Multiple packs
With multiple packs, pi-memctx auto-detects the best one based on your working directory:
```
cd ~/code/my-api    # → loads "my-api" pack
cd ~/code/my-infra  # → loads "infra" pack
cd ~/code           # → loads org-level pack
```

Switch mid-session with `/pack`.
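Auto-detection like this is typically a longest-prefix match between the working directory and each pack's registered directory, so a repo-specific pack beats an org-level one. A sketch under that assumption (the `detectPack` function and pack registry shape are illustrative, not pi-memctx's API):

```typescript
// packs maps a pack name to the directory it covers.
// The deepest directory that contains cwd wins.
function detectPack(
  cwd: string,
  packs: Record<string, string>,
): string | undefined {
  let best: string | undefined;
  let bestLen = -1;
  for (const [name, dir] of Object.entries(packs)) {
    const prefix = dir.replace(/\/?$/, "/"); // normalize trailing slash
    if ((cwd + "/").startsWith(prefix) && dir.length > bestLen) {
      best = name;
      bestLen = dir.length;
    }
  }
  return best;
}
```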
### Pack locations
Packs are resolved in order:
| Priority | Path | Use case |
|---|---|---|
| 1 | `MEMCTX_PACKS_PATH` env var | Explicit override |
| 2 | `<cwd>/.pi/memory-vault/packs/` | Project-local (share via git) |
| 3 | `~/.pi/agent/memory-vault/packs/` | Global default |
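The resolution order in the table can be sketched as a simple first-hit lookup. This is an illustrative sketch of the order described above; `resolvePacksDir` is a made-up name and the real lookup may differ in detail:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

function resolvePacksDir(cwd: string, env = process.env): string {
  // 1. Explicit override always wins.
  if (env.MEMCTX_PACKS_PATH) return env.MEMCTX_PACKS_PATH;
  // 2. Project-local packs (can be committed and shared via git).
  const local = join(cwd, ".pi", "memory-vault", "packs");
  if (existsSync(local)) return local;
  // 3. Global default in the home directory.
  return join(homedir(), ".pi", "agent", "memory-vault", "packs");
}
```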
## Benchmark
Measure the impact on your own project:
```
# Setup fictional test scenario
bash benchmark/setup.sh

# Run 5 tasks with and without pi-memctx
bash benchmark/run.sh
```

## Documentation
## Development
```
npm ci
npm run typecheck
npm test
npm run test:e2e
npm run ci
```

Please read CONTRIBUTING.md and SECURITY.md before opening a pull request.
## License
MIT