JSPM


A Pi extension that observes coding sessions and distills patterns into reusable instincts.

Package Exports

  • pi-continuous-learning
  • pi-continuous-learning/dist/index.js

This package does not declare an exports field, so the exports above were automatically detected and optimized by JSPM. If a package subpath is missing, consider filing an issue with the original package (pi-continuous-learning) asking it to add an "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

pi-continuous-learning

A Pi extension that observes your coding sessions and distills patterns into reusable "instincts": atomic learned behaviors with confidence scoring, project scoping, and closed-loop feedback validation.

Inspired by everything-claude-code/continuous-learning-v2, reimplemented as a native Pi extension in TypeScript.

How It Works

Pi Session (extension)                     Background analyzer (standalone)
──────────────────────                     ──────────────────────────────────
Extension events                           Runs on a schedule (cron/launchd)
  │                                          │
  v                                          v
Observation Collector                      Reads observations.jsonl per project
  │  writes observations.jsonl               │
  v                                          v
System Prompt Injection                    Haiku LLM analyzes patterns,
  │  injects high-confidence instincts       creates/updates instinct files
  v                                          │
Feedback Loop                              Instinct Files (.md with YAML frontmatter)
  │  records which instincts were active
  v
Confirms, contradicts, or ignores injected instincts

The key idea: the extension watches what you do, learns patterns, injects relevant instincts into future sessions, then validates whether those instincts actually helped — adjusting confidence based on real outcomes rather than observation count alone.

The analyzer runs as a separate background process (not inside your Pi session), so it never causes lag or interference. It processes all your projects in a single pass.

Installation

pi install npm:pi-continuous-learning

This installs the extension globally and makes the pi-cl-analyze CLI available on your PATH.

Requirements

  • Pi >= 0.62.0
  • An LLM provider configured with Pi (subscription or API key — the analyzer defaults to Haiku; see Configuration to change the model)
  • Node.js >= 18

Usage

Once installed, the extension runs automatically in your Pi sessions — observing events and injecting instincts. No configuration required for the extension itself.

To analyze observations and create/update instincts, you need to run the analyzer separately (see Background Analyzer below).

Slash Commands

Command                  Description
/instinct-status         Show all instincts grouped by domain with confidence scores and feedback stats
/instinct-evolve         LLM-powered analysis of instincts: suggests merges, promotions, and cleanup
/instinct-export         Export instincts to a JSON file (filterable by scope/domain)
/instinct-import <path>  Import instincts from a JSON file
/instinct-promote [id]   Promote project instincts to global scope
/instinct-projects       List all known projects and their instinct counts

LLM Tools

The extension registers tools that the LLM can use during conversation:

Tool             Description
instinct_list    List instincts with optional scope/domain filters
instinct_read    Read a specific instinct by ID
instinct_write   Create or update an instinct
instinct_delete  Remove an instinct by ID
instinct_merge   Merge multiple instincts into one

You can ask Pi things like "show me my instincts", "merge these two instincts", or "delete low-confidence instincts" and it will use these tools.

Background Analyzer

The analyzer is a standalone CLI that processes observations across all your projects and creates/updates instincts using Haiku. It runs outside of Pi sessions for efficiency — one process handles all projects, regardless of how many Pi sessions you have open.

Running manually

pi-cl-analyze

The script:

  1. Iterates all projects in ~/.pi/continuous-learning/projects.json
  2. Skips projects with no new observations since last analysis
  3. Skips projects with fewer than 20 observations (configurable)
  4. For eligible projects: runs confidence decay, then uses Haiku to analyze patterns and write instinct files
  5. Records a cursor so only new observations are processed on subsequent runs
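The skip logic in steps 2-3 and the cursor in step 5 amount to a simple freshness check per project. A minimal TypeScript sketch with hypothetical field names (the real analyzer's state layout may differ, and step 3 may count total rather than new observations):

```typescript
// Hypothetical per-project state: how many observations exist, and how many
// were already consumed by a previous analyzer run (the "cursor").
interface ProjectState {
  cursor: number;            // observations processed so far
  observationCount: number;  // observations currently recorded
}

// A project is eligible when it has enough *new* observations since the
// last run; otherwise it is skipped (steps 2-3 above).
function shouldAnalyze(p: ProjectState, minObservations = 20): boolean {
  const fresh = p.observationCount - p.cursor;
  return fresh >= minObservations;
}
```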

Safety features:

  • Lockfile guard: Only one instance can run at a time. Subsequent invocations exit immediately with code 0.
  • Global timeout: The process exits after 5 minutes regardless of progress.
  • Stale lock detection: If a previous run crashed, the lockfile is automatically cleaned up after 10 minutes or if the owning process is no longer alive.
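The lockfile rules above (single instance, stale after 10 minutes, dead-owner cleanup) can be sketched as follows. The path, layout, and helper names are illustrative assumptions, not the extension's actual implementation:

```typescript
import * as fs from "node:fs";

// Illustrative path; the real lockfile lives under ~/.pi/continuous-learning/.
const LOCK = "/tmp/pi-cl-analyze-demo.lock";
const STALE_MS = 10 * 60 * 1000; // 10 minutes

// Signal 0 delivers nothing; it only checks whether the process exists.
function pidAlive(pid: number): boolean {
  try { process.kill(pid, 0); return true; }
  catch { return false; }
}

function tryAcquireLock(): boolean {
  try {
    // Flag "wx" fails if the file already exists: an atomic acquire.
    fs.writeFileSync(LOCK, String(process.pid), { flag: "wx" });
    return true;
  } catch {
    const age = Date.now() - fs.statSync(LOCK).mtimeMs;
    const owner = Number(fs.readFileSync(LOCK, "utf8"));
    if (age > STALE_MS || !pidAlive(owner)) {
      // Stale lock from a crashed run: clean up and retry once.
      fs.unlinkSync(LOCK);
      fs.writeFileSync(LOCK, String(process.pid), { flag: "wx" });
      return true;
    }
    return false; // another live instance owns the lock; exit with code 0
  }
}
```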

Setting up a schedule (macOS)

The recommended way to run the analyzer on a recurring schedule on macOS is with launchd, which persists across reboots and captures the job's output to a log file.

1. Find the binary path

which pi-cl-analyze

This should print something like /opt/homebrew/bin/pi-cl-analyze. Use this path in the plist below.

2. Create the plist file

cat > ~/Library/LaunchAgents/com.pi-continuous-learning.analyze.plist << EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.pi-continuous-learning.analyze</string>
    <key>ProgramArguments</key>
    <array>
        <string>$(which pi-cl-analyze)</string>
    </array>
    <key>StartInterval</key>
    <integer>300</integer>
    <key>StandardOutPath</key>
    <string>/tmp/pi-cl-analyze.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/pi-cl-analyze.log</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>PATH</key>
        <string>$(echo $PATH)</string>
    </dict>
</dict>
</plist>
EOF

Note: The $(which pi-cl-analyze) and $(echo $PATH) substitutions are evaluated when you run the cat command, so the plist will contain the resolved absolute paths from your current shell.

3. Load the schedule

launchctl load ~/Library/LaunchAgents/com.pi-continuous-learning.analyze.plist

The analyzer will now run every 5 minutes (300 seconds) in the background, starting on login. It's safe for overlapping triggers — the lockfile guard ensures only one instance runs.

4. Verify it's running

# Check if the job is loaded
launchctl list | grep pi-continuous-learning

# View recent output
tail -20 /tmp/pi-cl-analyze.log

Disabling the schedule

# Stop and unload (persists across reboots — the job will not restart)
launchctl unload ~/Library/LaunchAgents/com.pi-continuous-learning.analyze.plist

# Optionally remove the plist file entirely
rm ~/Library/LaunchAgents/com.pi-continuous-learning.analyze.plist

Temporarily pausing

# Disable (keeps the plist but prevents it from running)
launchctl unload ~/Library/LaunchAgents/com.pi-continuous-learning.analyze.plist

# Re-enable later
launchctl load ~/Library/LaunchAgents/com.pi-continuous-learning.analyze.plist

Setting up a schedule (Linux/other)

Use cron:

# Edit crontab
crontab -e

# Add this line (runs every 5 minutes):
*/5 * * * * pi-cl-analyze >> /tmp/pi-cl-analyze.log 2>&1

To disable it, remove that line (again via crontab -e).

Example instinct file

Instincts are stored as Markdown files with YAML frontmatter:

---
id: grep-before-edit
title: Grep Before Edit
trigger: "when modifying code files"
confidence: 0.7
domain: "workflow"
source: "personal"
scope: project
project_id: "a1b2c3d4e5f6"
project_name: "my-project"
observation_count: 8
confirmed_count: 5
contradicted_count: 1
inactive_count: 12
---

Always search with grep to find relevant context before editing files.
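Reading such a file back only requires splitting the YAML frontmatter from the Markdown body. A minimal TypeScript sketch (the extension itself likely uses a full YAML parser; this handles only flat key: value pairs):

```typescript
// Parse an instinct file into { meta, body }. Flat "key: value" lines only;
// quoted values have their surrounding quotes stripped.
function parseInstinct(raw: string): { meta: Record<string, string>; body: string } {
  const m = raw.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!m) return { meta: {}, body: raw };
  const meta: Record<string, string> = {};
  for (const line of m[1].split("\n")) {
    const i = line.indexOf(":");
    if (i === -1) continue;
    const key = line.slice(0, i).trim();
    meta[key] = line.slice(i + 1).trim().replace(/^"|"$/g, "");
  }
  return { meta, body: raw.slice(m[0].length).trim() };
}
```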

Confidence Scoring

Confidence comes from two sources:

Discovery (initial, based on observation count):

  • 1-2 observations: 0.3 (tentative)
  • 3-5: 0.5 (moderate)
  • 6-10: 0.7 (strong)
  • 11+: 0.85 (very strong)

Feedback (ongoing, based on real outcomes):

  • Confirmed (behavior aligned with instinct): +0.05
  • Contradicted (behavior went against instinct): -0.15
  • Inactive (instinct irrelevant to the turn): no change
  • Passive decay: -0.02 per week without observations
  • Range: confidence is clamped to 0.1-0.9; instincts that would drop below 0.1 are flagged for removal.

This means an instinct observed 20 times but consistently contradicted in practice will lose confidence. Frequency alone doesn't equal correctness.
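The rules above map directly to a few small functions. A TypeScript sketch with hypothetical helper names (the extension's actual code may structure this differently):

```typescript
type Feedback = "confirmed" | "contradicted" | "inactive";

// Discovery: initial confidence from observation count.
function discoveryConfidence(observations: number): number {
  if (observations >= 11) return 0.85;
  if (observations >= 6) return 0.7;
  if (observations >= 3) return 0.5;
  return 0.3;
}

// Feedback: adjust confidence from a real outcome, clamped to [0.1, 0.9].
function applyFeedback(confidence: number, outcome: Feedback): number {
  const delta =
    outcome === "confirmed" ? 0.05 :
    outcome === "contradicted" ? -0.15 :
    0; // inactive: no change
  return Math.min(0.9, Math.max(0.1, confidence + delta));
}

// Passive decay: -0.02 per full week without observations.
function applyDecay(confidence: number, weeksIdle: number): number {
  return Math.max(0.1, confidence - 0.02 * Math.floor(weeksIdle));
}
```

With these rules, an instinct that starts at 0.85 from frequency alone still sinks toward the 0.1 floor after a handful of contradictions, which is the "frequency alone doesn't equal correctness" property described above.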

Updating

pi install npm:pi-continuous-learning

Your observations, instincts, and configuration are stored separately in ~/.pi/continuous-learning/ and are preserved across updates.

If you have a launchd schedule set up, no changes are needed — the plist points to the binary, which npm updates in place.

Configuration

Optional. Defaults work out of the box. Override at ~/.pi/continuous-learning/config.json:

{
  "run_interval_minutes": 5,
  "min_observations_to_analyze": 20,
  "min_confidence": 0.5,
  "max_instincts": 20,
  "max_injection_chars": 4000,
  "model": "claude-haiku-4-5",
  "timeout_seconds": 120,
  "active_hours_start": 8,
  "active_hours_end": 23,
  "max_idle_seconds": 1800
}

Only include the fields you want to change — missing fields use the defaults above.

Field                        Default           Description
min_observations_to_analyze  20                Minimum observations before analysis triggers
min_confidence               0.5               Instincts below this are not injected into prompts
max_instincts                20                Maximum instincts injected per turn
max_injection_chars          4000              Character budget for the injection block (~1000 tokens)
model                        claude-haiku-4-5  Model for the background analyzer (lightweight models recommended to minimize cost)
timeout_seconds              120               Per-project timeout for the analyzer LLM session
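The "missing fields use the defaults" behavior is a shallow field-by-field merge. A TypeScript sketch, assuming the config path shown above (the real loader may validate types as well):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Defaults mirroring the documented config fields.
const DEFAULTS = {
  run_interval_minutes: 5,
  min_observations_to_analyze: 20,
  min_confidence: 0.5,
  max_instincts: 20,
  max_injection_chars: 4000,
  model: "claude-haiku-4-5",
  timeout_seconds: 120,
};

function loadConfig(dir = path.join(os.homedir(), ".pi", "continuous-learning")) {
  let overrides: Partial<typeof DEFAULTS> = {};
  try {
    overrides = JSON.parse(fs.readFileSync(path.join(dir, "config.json"), "utf8"));
  } catch {
    // Missing or invalid config.json: fall back to defaults entirely.
  }
  // User fields win; anything absent keeps its default.
  return { ...DEFAULTS, ...overrides };
}
```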

Storage

All data stays local on your machine:

~/.pi/continuous-learning/
  config.json                   # Optional overrides
  projects.json                 # Project registry
  analyze.lock                  # Lockfile (present only while analyzer runs)
  instincts/personal/           # Global instincts
  projects/<hash>/
    project.json                # Project metadata + analysis cursor
    observations.jsonl          # Current observations
    observations.archive/       # Archived (auto-purged after 30 days)
    instincts/personal/         # Project-scoped instincts

Privacy & Security

  • All data stays on your machine — no external telemetry
  • Secrets (API keys, tokens, passwords) are scrubbed from observations before writing to disk
  • Only instincts (patterns) can be exported — never raw observations
  • The analyzer uses your existing Pi LLM credentials — no additional keys needed
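A scrubbing step like the one described might look as follows. The actual patterns the extension applies are not documented here, so these regexes are purely illustrative:

```typescript
// Illustrative secret patterns — NOT the extension's real list.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/g,                           // OpenAI-style API keys
  /ghp_[A-Za-z0-9]{36}/g,                           // GitHub personal access tokens
  /(?<=(?:password|token|secret)\s*[=:]\s*)\S+/gi,  // key=value / key: value assignments
];

// Replace every match with a placeholder before the observation hits disk.
function scrubSecrets(text: string): string {
  return SECRET_PATTERNS.reduce((out, re) => out.replace(re, "[REDACTED]"), text);
}
```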

Development

# Install dependencies
npm install

# Run tests
npm test

# Lint
npm run lint

# Type check
npm run typecheck

# Build (compiles to dist/)
npm run build

# All checks
npm run check

License

MIT