    SuperLocalMemory V2

    Your AI Finally Remembers You

    ⚡ Created & Architected by Varun Pratap Bhardwaj ⚡
    Solution Architect • Original Creator • 2026

    Stop re-explaining your codebase every session. 100% local. Zero setup. Completely free.

    Python 3.8+ • MIT License • 100% Local • 5 Min Setup • Cross Platform • Wiki

    Quick Start • Why This? • Features • vs Alternatives • Docs • Issues

    Created by Varun Pratap Bhardwaj • 💖 Sponsor • 📜 Attribution Required


    Install in One Command

    npm install -g superlocalmemory

    Or clone manually:

    git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh

    Both methods auto-detect and configure 16+ IDEs and AI tools — Cursor, VS Code/Copilot, Codex, Claude, Windsurf, Gemini CLI, JetBrains, and more.


    The Problem

    Every time you start a new Claude session:

    You: "Remember that authentication bug we fixed last week?"
    Claude: "I don't have access to previous conversations..."
    You: *sighs and explains everything again*

    AI assistants forget everything between sessions. You waste time re-explaining your:

    • Project architecture
    • Coding preferences
    • Previous decisions
    • Debugging history

    The Solution

    # Install in one command
    npm install -g superlocalmemory
    
    # Save a memory
    superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"
    
    # Later, in a new session...
    superlocalmemoryv2:recall "auth bug"
    # ✓ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

    Your AI now remembers everything. Forever. Locally. For free.


    🚀 Quick Start

    npm install -g superlocalmemory

    Mac/Linux (Manual)

    git clone https://github.com/varun369/SuperLocalMemoryV2.git
    cd SuperLocalMemoryV2
    ./install.sh

    Windows (PowerShell)

    git clone https://github.com/varun369/SuperLocalMemoryV2.git
    cd SuperLocalMemoryV2
    .\install.ps1

    Verify Installation

    superlocalmemoryv2:status
    # ✓ Database: OK (0 memories)
    # ✓ Graph: Ready
    # ✓ Patterns: Ready

    That's it. No Docker. No API keys. No cloud accounts. No configuration.

    Updating to Latest Version

    npm users:

    # Update to latest version
    npm update -g superlocalmemory
    
    # Or force latest
    npm install -g superlocalmemory@latest
    
    # Install specific version
    npm install -g superlocalmemory@2.3.7

    Manual install users:

    cd SuperLocalMemoryV2
    git pull origin main
    ./install.sh  # Mac/Linux
    # or
    .\install.ps1  # Windows

    Your data is safe: Updates preserve your database and all memories.

    Start the Visualization Dashboard

    # Launch the interactive web UI
    python3 ~/.claude-memory/ui_server.py
    
    # Opens at http://localhost:8765
    # Features: Timeline view, search explorer, graph visualization

    🎨 Visualization Dashboard

    NEW in v2.2.0: Interactive web-based dashboard for exploring your memories visually.

    Features

    | Feature | Description |
    |---|---|
    | 📈 Timeline View | See your memories chronologically with importance indicators |
    | 🔍 Search Explorer | Real-time semantic search with score visualization |
    | 🕸️ Graph Visualization | Interactive knowledge graph with clusters and relationships |
    | 📊 Statistics Dashboard | Memory trends, tag clouds, pattern insights |
    | 🎯 Advanced Filters | Filter by tags, importance, date range, clusters |

    Quick Tour

    # 1. Start dashboard
    python ~/.claude-memory/ui_server.py
    
    # 2. Navigate to http://localhost:8765
    
    # 3. Explore your memories:
    #    - Timeline: See memories over time
    #    - Search: Find with semantic scoring
    #    - Graph: Visualize relationships
    #    - Stats: Analyze patterns

    Complete Dashboard Guide → Visualization-Dashboard (wiki)


    🔍 Hybrid Search

    SuperLocalMemory V2.2.0 implements hybrid search, combining multiple strategies for maximum accuracy.

    Search Strategies

    | Strategy | Method | Best For | Speed |
    |---|---|---|---|
    | Semantic Search | TF-IDF vectors + cosine similarity | Conceptual queries ("authentication patterns") | 45ms |
    | Full-Text Search | SQLite FTS5 with ranking | Exact phrases ("JWT tokens expire") | 30ms |
    | Graph-Enhanced | Knowledge graph traversal | Related concepts ("show auth-related") | 60ms |
    | Hybrid Mode | All three combined | General queries | 80ms |
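As a rough illustration of how Hybrid Mode might merge the three strategies above, here is a minimal score-fusion sketch. The weights and max-normalization are illustrative assumptions, not the package's actual implementation:

```python
# Hypothetical score fusion for hybrid search (weights are assumptions).
def fuse_scores(semantic, fts, graph, weights=(0.5, 0.3, 0.2)):
    """Merge per-document scores from three strategies into one ranking.

    Each argument maps doc_id -> raw score; missing docs count as 0.
    """
    docs = set(semantic) | set(fts) | set(graph)

    def norm(scores):
        # Scale each strategy's scores to [0, 1] so they are comparable.
        top = max(scores.values(), default=0.0)
        return {d: (scores.get(d, 0.0) / top if top else 0.0) for d in docs}

    s, f, g = norm(semantic), norm(fts), norm(graph)
    ws, wf, wg = weights
    fused = {d: ws * s[d] + wf * f[d] + wg * g[d] for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

ranked = fuse_scores(
    semantic={1: 0.9, 2: 0.4},
    fts={2: 7.0},
    graph={1: 0.2, 3: 0.2},
)
# Doc 1 wins: top semantic score plus a graph hit.
```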

    Search Examples

    # Semantic: finds conceptually similar
    slm recall "security best practices"
    # Matches: "JWT implementation", "OAuth flow", "CSRF protection"
    
    # Exact: finds literal text
    slm recall "PostgreSQL 15"
    # Matches: exactly "PostgreSQL 15"
    
    # Graph: finds related via clusters
    slm recall "authentication" --use-graph
    # Matches: JWT, OAuth, sessions (via "Auth & Security" cluster)
    
    # Hybrid: best of all worlds (default)
    slm recall "API design patterns"
    # Combines semantic + exact + graph for optimal results
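The semantic strategy is described above as TF-IDF vectors plus cosine similarity. A self-contained sketch of that idea, using only the standard library (the package's real tokenizer and weighting scheme may differ):

```python
# Minimal TF-IDF + cosine similarity sketch of the "semantic" strategy.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return sparse TF-IDF vectors, document frequencies, and doc count."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vecs = [
        {t: tf * math.log((1 + n) / (1 + df[t])) for t, tf in Counter(toks).items()}
        for toks in tokenized
    ]
    return vecs, df, n

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "fixed auth bug jwt tokens expiring too fast",
    "react component renders twice in strict mode",
    "increased jwt token lifetime to 24h",
]
vecs, df, n = tfidf_vectors(memories)

# Vectorize the query with the same IDF weights, then rank by cosine.
query = Counter("jwt tokens".split())
qvec = {t: tf * math.log((1 + n) / (1 + df.get(t, 0))) for t, tf in query.items()}
best = max(range(len(memories)), key=lambda i: cosine(qvec, vecs[i]))
# best -> 0, the memory about JWT tokens expiring
```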

    Search Performance by Dataset Size

    | Memories | Semantic | FTS5 | Graph | Hybrid |
    |---|---|---|---|---|
    | 100 | 35ms | 25ms | 50ms | 65ms |
    | 500 | 45ms | 30ms | 60ms | 80ms |
    | 1,000 | 55ms | 35ms | 70ms | 95ms |
    | 5,000 | 85ms | 50ms | 110ms | 150ms |

    All search strategies remain sub-second even with 5,000+ memories.


    ⚡ Performance

    Benchmarks (v2.2.0)

    | Operation | Time | Comparison | Notes |
    |---|---|---|---|
    | Add Memory | < 10ms | - | Instant indexing |
    | Search (Hybrid) | 80ms | 3.3x faster than v1 | 500 memories |
    | Graph Build | < 2s | - | 100 memories |
    | Pattern Learning | < 2s | - | Incremental |
    | Dashboard Load | < 500ms | - | 1,000 memories |
    | Timeline Render | < 300ms | - | All memories |

    Storage Efficiency

    | Tier | Description | Compression | Method |
    |---|---|---|---|
    | Tier 1 | Active memories (0-30 days) | None | - |
    | Tier 2 | Warm memories (30-90 days) | 60% | Progressive summarization |
    | Tier 3 | Cold storage (90+ days) | 96% | JSON archival |

    Example: 1,000 memories with mixed ages = ~15MB (vs 380MB uncompressed)
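The tiering logic amounts to mapping a memory's age to a tier and a compression ratio. A sketch using the table's numbers (the thresholds and ratios come from the table above; the actual implementation may differ):

```python
# Sketch of tiered retention: tier and savings ratio by memory age.
def tier_for_age(age_days):
    if age_days < 30:
        return 1, 0.0      # active: stored verbatim
    if age_days < 90:
        return 2, 0.60     # warm: progressive summarization
    return 3, 0.96         # cold: JSON archival

def stored_bytes(raw_bytes, age_days):
    """Approximate on-disk size after tiered compression."""
    _, saving = tier_for_age(age_days)
    return int(raw_bytes * (1 - saving))

# A 1,000-byte memory shrinks to ~40 bytes once it reaches cold storage.
```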

    Scalability

    | Dataset Size | Search Time | Graph Build | RAM Usage |
    |---|---|---|---|
    | 100 memories | 35ms | 0.5s | < 30MB |
    | 500 memories | 45ms | 2s | < 50MB |
    | 1,000 memories | 55ms | 5s | < 80MB |
    | 5,000 memories | 85ms | 30s | < 150MB |

    Tested up to 10,000 memories with linear scaling and no degradation.


    🌐 Works Everywhere

    SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:

    Supported IDEs & Tools

    | Tool | Integration | How It Works |
    |---|---|---|
    | Claude Code | ✅ Skills + MCP | /superlocalmemoryv2:remember |
    | Cursor | ✅ MCP + Skills | AI uses memory tools natively |
    | Windsurf | ✅ MCP + Skills | Native memory access |
    | Claude Desktop | ✅ MCP | Built-in support |
    | OpenAI Codex | ✅ MCP + Skills | Auto-configured (TOML) |
    | VS Code / Copilot | ✅ MCP + Skills | .vscode/mcp.json |
    | Continue.dev | ✅ MCP + Skills | /slm-remember |
    | Cody | ✅ Custom Commands | /slm-remember |
    | Gemini CLI | ✅ MCP + Skills | Native MCP + skills |
    | JetBrains IDEs | ✅ MCP | Via AI Assistant settings |
    | Zed Editor | ✅ MCP | Native MCP tools |
    | OpenCode | ✅ MCP | Native MCP tools |
    | Perplexity | ✅ MCP | Native MCP tools |
    | Antigravity | ✅ MCP + Skills | Native MCP tools |
    | ChatGPT | ✅ MCP Connector | search() + fetch() via HTTP tunnel |
    | Aider | ✅ Smart Wrapper | aider-smart with context |
    | Any Terminal | ✅ Universal CLI | slm remember "content" |

    Three Ways to Access

    1. MCP (Model Context Protocol) - Auto-configured for Cursor, Windsurf, Claude Desktop

      • AI assistants get natural access to your memory
      • No manual commands needed
      • "Remember that we use FastAPI" just works
    2. Skills & Commands - For Claude Code, Continue.dev, Cody

      • /superlocalmemoryv2:remember in Claude Code
      • /slm-remember in Continue.dev and Cody
      • Familiar slash command interface
    3. Universal CLI - Works in any terminal or script

      • slm remember "content" - Simple, clean syntax
      • slm recall "query" - Search from anywhere
      • aider-smart - Aider with auto-context injection

    All three methods use the SAME local database. No data duplication, no conflicts.

    Auto-Detection

    Installation automatically detects and configures:

    • Existing IDEs (Cursor, Windsurf, VS Code)
    • Installed tools (Aider, Continue, Cody)
    • Shell environment (bash, zsh)

    Zero manual configuration required. It just works.

    Manual Setup for Other Apps

    Want to use SuperLocalMemory in ChatGPT, Perplexity, Zed, or other MCP-compatible tools?

    📘 Complete setup guide: docs/MCP-MANUAL-SETUP.md

    Covers:

    • ChatGPT Desktop - Add via Settings → MCP
    • Perplexity - Configure via app settings
    • Zed Editor - JSON configuration
    • Cody - VS Code/JetBrains setup
    • Custom MCP clients - Python/HTTP integration

    All tools connect to the same local database - no data duplication.
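For JSON-based MCP clients (such as the .vscode/mcp.json file mentioned above), the entry generally follows the standard MCP client shape: a named server with a command and arguments. The server name and script path below are placeholders for illustration; consult the auto-generated config or the setup guide for the real values:

```json
{
  "servers": {
    "superlocalmemory": {
      "command": "python3",
      "args": ["~/.claude-memory/mcp_server.py"]
    }
  }
}
```

Note that some clients do not expand `~`, in which case an absolute path is needed.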


    💡 Why SuperLocalMemory?

    For Developers Who Use AI Daily

    | Scenario | Without Memory | With SuperLocalMemory |
    |---|---|---|
    | New Claude session | Re-explain entire project | recall "project context" → instant context |
    | Debugging | "We tried X last week..." starts over | Knowledge graph shows related past fixes |
    | Code preferences | "I prefer React..." every time | Pattern learning knows your style |
    | Multi-project | Context constantly bleeds | Separate profiles per project |

    Built on 2026 Research

    Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture:

    • PageIndex (Meta AI) → Hierarchical memory organization
    • GraphRAG (Microsoft) → Knowledge graph with auto-clustering
    • xMemory (Stanford) → Identity pattern learning
    • A-RAG → Multi-level retrieval with context awareness

    The only open-source implementation combining all four approaches.


    🆚 vs Alternatives

    The Hard Truth About "Free" Tiers

    | Solution | Free Tier Limits | Paid Price | What's Missing |
    |---|---|---|---|
    | Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
    | Zep | Limited credits | $50/month | Credit system, cloud-only |
    | Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
    | Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
    | Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
    | SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |

    Feature Comparison (What Actually Matters)

    | Feature | Mem0 | Zep | Khoj | Letta | SuperLocalMemory V2 |
    |---|---|---|---|---|---|
    | Works in Cursor | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
    | Works in Windsurf | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
    | Works in VS Code | 3rd Party | ❌ | Partial | ❌ | ✅ Native |
    | Works in Claude | ❌ | ❌ | ❌ | ❌ | ✅ |
    | Works with Aider | ❌ | ❌ | ❌ | ❌ | ✅ |
    | Universal CLI | ❌ | ❌ | ❌ | ❌ | ✅ |
    | 7-Layer Universal Architecture | ❌ | ❌ | ❌ | ❌ | ✅ |
    | Pattern Learning | ❌ | ❌ | ❌ | ❌ | ✅ |
    | Multi-Profile Support | ❌ | ❌ | ❌ | Partial | ✅ |
    | Knowledge Graphs | ✅ | ✅ | ❌ | ❌ | ✅ |
    | 100% Local | ❌ | ❌ | Partial | Partial | ✅ |
    | Zero Setup | ❌ | ❌ | ❌ | ❌ | ✅ |
    | Progressive Compression | ❌ | ❌ | ❌ | ❌ | ✅ |
    | Completely Free | Limited | Limited | Partial | ✅ | ✅ |

    SuperLocalMemory V2 is the ONLY solution that:

    • ✅ Works across 16+ IDEs and CLI tools
    • ✅ Remains 100% local (no cloud dependencies)
    • ✅ Completely free with unlimited memories

    See full competitive analysis →


    ✨ Features

    Multi-Layer Memory Architecture

    ┌─────────────────────────────────────────────────────────────┐
    │  Layer 9: VISUALIZATION (NEW v2.2.0)                        │
    │  Interactive dashboard: timeline, search, graph explorer    │
    │  Real-time analytics and visual insights                    │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 8: HYBRID SEARCH (NEW v2.2.0)                        │
    │  Combines: Semantic + FTS5 + Graph traversal                │
    │  80ms response time with maximum accuracy                   │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 7: UNIVERSAL ACCESS                                  │
    │  MCP + Skills + CLI (works everywhere)                      │
    │  16+ IDEs with single database                              │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 6: MCP INTEGRATION                                   │
    │  Model Context Protocol: 6 tools, 4 resources, 2 prompts    │
    │  Auto-configured for Cursor, Windsurf, Claude               │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 5: SKILLS LAYER                                      │
    │  6 universal slash-commands for AI assistants               │
    │  Compatible with Claude Code, Continue, Cody                │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 4: PATTERN LEARNING                                  │
    │  Learns: coding style, preferences, terminology             │
    │  "You prefer React over Vue" (73% confidence)               │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 3: KNOWLEDGE GRAPH                                   │
    │  Auto-clusters: "Auth & Tokens", "Performance", "Testing"   │
    │  Discovers relationships you didn't know existed            │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 2: HIERARCHICAL INDEX                                │
    │  Tree structure for fast navigation                         │
    │  O(log n) lookups instead of O(n) scans                     │
    ├─────────────────────────────────────────────────────────────┤
    │  Layer 1: RAW STORAGE                                       │
    │  SQLite + Full-text search + TF-IDF vectors                 │
    │  Compression: 60-96% space savings                          │
    └─────────────────────────────────────────────────────────────┘

    Knowledge Graph (It's Magic)

    # Build the graph from your memories
    python ~/.claude-memory/graph_engine.py build
    
    # Output:
    # βœ“ Processed 47 memories
    # βœ“ Created 12 clusters:
    #   - "Authentication & Tokens" (8 memories)
    #   - "Performance Optimization" (6 memories)
    #   - "React Components" (11 memories)
    #   - "Database Queries" (5 memories)
    #   ...

    The graph automatically discovers relationships. Ask "what relates to auth?" and get JWT, session management, token refresh — even if you never tagged them together.
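One way to picture this behavior: link memories that share a term, then take connected components as clusters. A toy sketch of that idea (the real graph engine reportedly uses Leiden clustering and richer signals than shared words):

```python
# Toy memory graph: edges from shared terms, clusters from components.
from collections import defaultdict

def cluster(memories):
    # Index: which memories mention each term.
    by_term = defaultdict(set)
    for i, text in enumerate(memories):
        for term in set(text.lower().split()):
            by_term[term].add(i)

    # Adjacency: memories sharing any term are connected.
    adj = defaultdict(set)
    for ids in by_term.values():
        for a in ids:
            adj[a] |= ids - {a}

    # Clusters = connected components (iterative DFS).
    seen, clusters = set(), []
    for i in range(len(memories)):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        clusters.append(sorted(comp))
    return clusters

groups = cluster([
    "jwt tokens expire",          # 0
    "refresh jwt silently",       # 1
    "react render performance",   # 2
])
# The two JWT memories cluster together; the React one stands alone.
```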

    Pattern Learning (It Knows You)

    # Learn patterns from your memories
    python ~/.claude-memory/pattern_learner.py update
    
    # Get your coding identity
    python ~/.claude-memory/pattern_learner.py context 0.5
    
    # Output:
    # Your Coding Identity:
    # - Framework preference: React (73% confidence)
    # - Style: Performance over readability (58% confidence)
    # - Testing: Jest + React Testing Library (65% confidence)
    # - API style: REST over GraphQL (81% confidence)

    Your AI assistant can now match your preferences automatically.
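Conceptually, preference extraction can be as simple as counting competing signals and reporting the majority with a frequency-based confidence. A toy sketch (category names and the threshold are illustrative, not the package's actual model):

```python
# Majority-vote preference with frequency-based confidence.
from collections import Counter

def preference(observations, threshold=0.5):
    """Return (winner, confidence), or (None, confidence) below threshold."""
    counts = Counter(observations)
    top, hits = counts.most_common(1)[0]
    confidence = hits / len(observations)
    return (top, confidence) if confidence >= threshold else (None, confidence)

# Three React mentions out of four -> "React preferred, 75% confidence".
framework = preference(["react", "react", "vue", "react"])
```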

    Multi-Profile Support

    # Work profile
    superlocalmemoryv2:profile create work --description "Day job"
    superlocalmemoryv2:profile switch work
    
    # Personal projects
    superlocalmemoryv2:profile create personal
    superlocalmemoryv2:profile switch personal
    
    # Client projects (completely isolated)
    superlocalmemoryv2:profile create client-acme

    Each profile has isolated memories, graphs, and patterns. No context bleeding.
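Isolation like this is straightforward to picture: each profile owns its own directory with its own database, graph, and pattern files, so switching profiles just swaps the paths in use. The layout below is an assumption for illustration, not the package's documented on-disk structure:

```python
# Hypothetical per-profile layout under the data directory.
from pathlib import Path

def profile_paths(name, root=Path.home() / ".claude-memory"):
    """Return the isolated storage files for one profile."""
    base = root / "profiles" / name
    return {
        "db": base / "memory.db",
        "graph": base / "graph.json",
        "patterns": base / "patterns.json",
    }

work = profile_paths("work")
acme = profile_paths("client-acme")
# Distinct directories, so nothing can bleed between profiles.
```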


    📖 Documentation

    | Guide | Description |
    |---|---|
    | Quick Start | Get running in 5 minutes |
    | Installation | Detailed setup instructions |
    | Visualization Dashboard | Interactive web UI guide (NEW v2.2.0) |
    | CLI Reference | All commands explained |
    | Knowledge Graph | How clustering works |
    | Pattern Learning | Identity extraction |
    | Profiles Guide | Multi-context management |
    | API Reference | Python API documentation |

    🔧 CLI Commands

    # Memory Operations
    superlocalmemoryv2:remember "content" --tags tag1,tag2  # Save memory
    superlocalmemoryv2:recall "search query"                 # Search
    superlocalmemoryv2:list                                  # Recent memories
    superlocalmemoryv2:status                                # System health
    
    # Profile Management
    superlocalmemoryv2:profile list                          # Show all profiles
    superlocalmemoryv2:profile create <name>                 # New profile
    superlocalmemoryv2:profile switch <name>                 # Switch context
    
    # Knowledge Graph
    python ~/.claude-memory/graph_engine.py build            # Build graph
    python ~/.claude-memory/graph_engine.py stats            # View clusters
    python ~/.claude-memory/graph_engine.py related --id 5   # Find related
    
    # Pattern Learning
    python ~/.claude-memory/pattern_learner.py update        # Learn patterns
    python ~/.claude-memory/pattern_learner.py context 0.5   # Get identity
    
    # Reset (Use with caution!)
    superlocalmemoryv2:reset soft                            # Clear memories
    superlocalmemoryv2:reset hard --confirm                  # Nuclear option

    📊 Performance

    | Metric | Result | Notes |
    |---|---|---|
    | Hybrid search | 80ms | Semantic + FTS5 + Graph combined |
    | Semantic search | 45ms | 3.3x faster than v1 |
    | FTS5 search | 30ms | Exact phrase matching |
    | Graph build (100 memories) | < 2 seconds | Leiden clustering |
    | Pattern learning | < 2 seconds | Incremental updates |
    | Dashboard load | < 500ms | 1,000 memories |
    | Timeline render | < 300ms | All memories visualized |
    | Storage compression | 60-96% reduction | Progressive tiering |
    | Memory overhead | < 50MB RAM | Lightweight |

    Tested up to 10,000 memories with sub-second search times and linear scaling.


    🤝 Contributing

    We welcome contributions! See CONTRIBUTING.md for guidelines.

    Areas for contribution:

    • Additional pattern categories
    • Graph visualization UI
    • Integration with more AI assistants
    • Performance optimizations
    • Documentation improvements

    💖 Support This Project

    If SuperLocalMemory saves you time, consider supporting its development:


    📜 License

    MIT License — use freely, even commercially. Just include the license.


    👨‍💻 Author

    Varun Pratap Bhardwaj — Solution Architect

    GitHub

    Building tools that make AI actually useful for developers.


    100% local. 100% private. 100% yours.

    Star on GitHub