Your AI Finally Remembers You
⚡ Created & Architected by Varun Pratap Bhardwaj ⚡
Solution Architect • Original Creator • 2026
Stop re-explaining your codebase every session. 100% local. Zero setup. Completely free.
Quick Start • Why This? • Features • vs Alternatives • Docs • Issues
Created by Varun Pratap Bhardwaj • Sponsor • Attribution Required
Install in One Command
```bash
npm install -g superlocalmemory
```

Or clone manually:

```bash
git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh
```

Both methods auto-detect and configure 16+ IDEs and AI tools: Cursor, VS Code/Copilot, Codex, Claude, Windsurf, Gemini CLI, JetBrains, and more.
The Problem
Every time you start a new Claude session:
```
You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*
```

AI assistants forget everything between sessions. You waste time re-explaining your:
- Project architecture
- Coding preferences
- Previous decisions
- Debugging history
The Solution
```bash
# Install in one command
npm install -g superlocalmemory

# Save a memory
superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

# Later, in a new session...
superlocalmemoryv2:recall "auth bug"
# ✅ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"
```

Your AI now remembers everything. Forever. Locally. For free.
Quick Start
npm (Recommended β All Platforms)
npm install -g superlocalmemoryMac/Linux (Manual)
git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
./install.shWindows (PowerShell)
git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
.\install.ps1Verify Installation
superlocalmemoryv2:status
# β Database: OK (0 memories)
# β Graph: Ready
# β Patterns: ReadyThat's it. No Docker. No API keys. No cloud accounts. No configuration.
Updating to Latest Version
npm users:

```bash
# Update to latest version
npm update -g superlocalmemory

# Or force latest
npm install -g superlocalmemory@latest

# Install specific version
npm install -g superlocalmemory@2.3.7
```

Manual install users:

```bash
cd SuperLocalMemoryV2
git pull origin main
./install.sh   # Mac/Linux
# or
.\install.ps1  # Windows
```

Your data is safe: updates preserve your database and all memories.
Start the Visualization Dashboard
```bash
# Launch the interactive web UI
python3 ~/.claude-memory/ui_server.py

# Opens at http://localhost:8765
# Features: Timeline view, search explorer, graph visualization
```

Visualization Dashboard
NEW in v2.2.0: Interactive web-based dashboard for exploring your memories visually.
Features
| Feature | Description |
|---|---|
| Timeline View | See your memories chronologically with importance indicators |
| Search Explorer | Real-time semantic search with score visualization |
| Graph Visualization | Interactive knowledge graph with clusters and relationships |
| Statistics Dashboard | Memory trends, tag clouds, pattern insights |
| Advanced Filters | Filter by tags, importance, date range, clusters |
Quick Tour
```bash
# 1. Start dashboard
python ~/.claude-memory/ui_server.py

# 2. Navigate to http://localhost:8765

# 3. Explore your memories:
#    - Timeline: See memories over time
#    - Search: Find with semantic scoring
#    - Graph: Visualize relationships
#    - Stats: Analyze patterns
```

[[Complete Dashboard Guide →|Visualization-Dashboard]]
Advanced Search
SuperLocalMemory V2.2.0 implements hybrid search combining multiple strategies for maximum accuracy.
Search Strategies
| Strategy | Method | Best For | Speed |
|---|---|---|---|
| Semantic Search | TF-IDF vectors + cosine similarity | Conceptual queries ("authentication patterns") | 45ms |
| Full-Text Search | SQLite FTS5 with ranking | Exact phrases ("JWT tokens expire") | 30ms |
| Graph-Enhanced | Knowledge graph traversal | Related concepts ("show auth-related") | 60ms |
| Hybrid Mode | All three combined | General queries | 80ms |
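The semantic strategy (TF-IDF vectors + cosine similarity) can be sketched in a few lines of plain Python. This is an illustrative toy, not the package's actual implementation — the `semantic_search` helper and the sample memories are invented for the example:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors: term frequency weighted by inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors, idf

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, memories):
    """Rank memories by cosine similarity to the query's TF-IDF vector."""
    vectors, idf = tfidf_vectors(memories)
    q_tf = Counter(query.lower().split())
    q_vec = {t: q_tf[t] * idf.get(t, 0.0) for t in q_tf}
    scored = [(cosine(q_vec, v), m) for v, m in zip(vectors, memories)]
    return sorted(scored, reverse=True)

memories = [
    "Fixed auth bug - JWT tokens were expiring too fast",
    "Optimized PostgreSQL query with a covering index",
    "Refreshed JWT tokens on every auth request",
]
results = semantic_search("JWT auth tokens", memories)
# The two JWT/auth memories rank above the unrelated database memory.
```

A real engine would add stemming, stop-word handling, and FTS5/graph signals on top, but the ranking idea is the same.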
Search Examples
```bash
# Semantic: finds conceptually similar
slm recall "security best practices"
# Matches: "JWT implementation", "OAuth flow", "CSRF protection"

# Exact: finds literal text
slm recall "PostgreSQL 15"
# Matches: exactly "PostgreSQL 15"

# Graph: finds related via clusters
slm recall "authentication" --use-graph
# Matches: JWT, OAuth, sessions (via "Auth & Security" cluster)

# Hybrid: best of all worlds (default)
slm recall "API design patterns"
# Combines semantic + exact + graph for optimal results
```

Search Performance by Dataset Size
| Memories | Semantic | FTS5 | Graph | Hybrid |
|---|---|---|---|---|
| 100 | 35ms | 25ms | 50ms | 65ms |
| 500 | 45ms | 30ms | 60ms | 80ms |
| 1,000 | 55ms | 35ms | 70ms | 95ms |
| 5,000 | 85ms | 50ms | 110ms | 150ms |
All search strategies remain sub-second even with 5,000+ memories.
⚡ Performance
Benchmarks (v2.2.0)
| Operation | Time | Comparison | Notes |
|---|---|---|---|
| Add Memory | < 10ms | - | Instant indexing |
| Search (Hybrid) | 80ms | 3.3x faster than v1 | 500 memories |
| Graph Build | < 2s | - | 100 memories |
| Pattern Learning | < 2s | - | Incremental |
| Dashboard Load | < 500ms | - | 1,000 memories |
| Timeline Render | < 300ms | - | All memories |
Storage Efficiency
| Tier | Description | Compression | Method |
|---|---|---|---|
| Tier 1 | Active memories (0-30 days) | None | - |
| Tier 2 | Warm memories (30-90 days) | 60% | Progressive summarization |
| Tier 3 | Cold storage (90+ days) | 96% | JSON archival |
Example: 1,000 memories with mixed ages = ~15MB (vs 380MB uncompressed)
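The tiering rule is simple enough to state as code. A minimal sketch assuming the age thresholds in the table above (the `storage_tier` function name and exact boundary handling are our assumptions, not the package's):

```python
from datetime import date, timedelta

def storage_tier(created: date, today: date) -> int:
    """Map a memory's age to a storage tier per the table above:
    Tier 1 = active (0-30 days), Tier 2 = warm (30-90), Tier 3 = cold (90+)."""
    age_days = (today - created).days
    if age_days < 30:
        return 1
    if age_days < 90:
        return 2
    return 3

today = date(2026, 2, 1)
tier = storage_tier(today - timedelta(days=45), today)  # 45-day-old memory -> tier 2
```

Each promotion would then trigger the corresponding compression step (summarization for tier 2, JSON archival for tier 3).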
Scalability
| Dataset Size | Search Time | Graph Build | RAM Usage |
|---|---|---|---|
| 100 memories | 35ms | 0.5s | < 30MB |
| 500 memories | 45ms | 2s | < 50MB |
| 1,000 memories | 55ms | 5s | < 80MB |
| 5,000 memories | 85ms | 30s | < 150MB |
Tested up to 10,000 memories with linear scaling and no degradation.
Works Everywhere
SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:
Supported IDEs & Tools
| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | ✅ Skills + MCP | /superlocalmemoryv2:remember |
| Cursor | ✅ MCP + Skills | AI uses memory tools natively |
| Windsurf | ✅ MCP + Skills | Native memory access |
| Claude Desktop | ✅ MCP | Built-in support |
| OpenAI Codex | ✅ MCP + Skills | Auto-configured (TOML) |
| VS Code / Copilot | ✅ MCP + Skills | .vscode/mcp.json |
| Continue.dev | ✅ MCP + Skills | /slm-remember |
| Cody | ✅ Custom Commands | /slm-remember |
| Gemini CLI | ✅ MCP + Skills | Native MCP + skills |
| JetBrains IDEs | ✅ MCP | Via AI Assistant settings |
| Zed Editor | ✅ MCP | Native MCP tools |
| OpenCode | ✅ MCP | Native MCP tools |
| Perplexity | ✅ MCP | Native MCP tools |
| Antigravity | ✅ MCP + Skills | Native MCP tools |
| ChatGPT | ✅ MCP Connector | search() + fetch() via HTTP tunnel |
| Aider | ✅ Smart Wrapper | aider-smart with context |
| Any Terminal | ✅ Universal CLI | slm remember "content" |
Three Ways to Access
MCP (Model Context Protocol) - Auto-configured for Cursor, Windsurf, Claude Desktop
- AI assistants get natural access to your memory
- No manual commands needed
- "Remember that we use FastAPI" just works
Skills & Commands - For Claude Code, Continue.dev, Cody
- `/superlocalmemoryv2:remember` in Claude Code
- `/slm-remember` in Continue.dev and Cody
- Familiar slash-command interface
Universal CLI - Works in any terminal or script
- `slm remember "content"`: simple, clean syntax
- `slm recall "query"`: search from anywhere
- `aider-smart`: Aider with auto-context injection
All three methods use the SAME local database. No data duplication, no conflicts.
Auto-Detection
Installation automatically detects and configures:
- Existing IDEs (Cursor, Windsurf, VS Code)
- Installed tools (Aider, Continue, Cody)
- Shell environment (bash, zsh)
Zero manual configuration required. It just works.
Manual Setup for Other Apps
Want to use SuperLocalMemory in ChatGPT, Perplexity, Zed, or other MCP-compatible tools?
Complete setup guide: docs/MCP-MANUAL-SETUP.md
Covers:
- ChatGPT Desktop - Add via Settings → MCP
- Perplexity - Configure via app settings
- Zed Editor - JSON configuration
- Cody - VS Code/JetBrains setup
- Custom MCP clients - Python/HTTP integration
All tools connect to the same local database - no data duplication.
Why SuperLocalMemory?
For Developers Who Use AI Daily
| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | `recall "project context"` → instant context |
| Debugging | "We tried X last week..." starts over | Knowledge graph shows related past fixes |
| Code preferences | "I prefer React..." every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |
Built on 2026 Research
Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture:
- PageIndex (Meta AI): Hierarchical memory organization
- GraphRAG (Microsoft): Knowledge graph with auto-clustering
- xMemory (Stanford): Identity pattern learning
- A-RAG: Multi-level retrieval with context awareness
The only open-source implementation combining all four approaches.
vs Alternatives
The Hard Truth About "Free" Tiers
| Solution | Free Tier Limits | Paid Price | What's Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
| Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |
Feature Comparison (What Actually Matters)
| Feature | Mem0 | Zep | Khoj | Letta | SuperLocalMemory V2 |
|---|---|---|---|---|---|
| Works in Cursor | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in Windsurf | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in VS Code | 3rd Party | ❌ | Partial | ❌ | ✅ Native |
| Works in Claude | ❌ | ❌ | ❌ | ❌ | ✅ |
| Works with Aider | ❌ | ❌ | ❌ | ❌ | ✅ |
| Universal CLI | ❌ | ❌ | ❌ | ❌ | ✅ |
| 7-Layer Universal Architecture | ❌ | ❌ | ❌ | ❌ | ✅ |
| Pattern Learning | ❌ | ❌ | ❌ | ❌ | ✅ |
| Multi-Profile Support | ❌ | ❌ | ❌ | Partial | ✅ |
| Knowledge Graphs | ❌ | ❌ | ❌ | ❌ | ✅ |
| 100% Local | ❌ | ❌ | Partial | Partial | ✅ |
| Zero Setup | ❌ | ❌ | ❌ | ❌ | ✅ |
| Progressive Compression | ❌ | ❌ | ❌ | ❌ | ✅ |
| Completely Free | Limited | Limited | Partial | ❌ | ✅ |
SuperLocalMemory V2 is the ONLY solution that:
- ✅ Works across 16+ IDEs and CLI tools
- ✅ Remains 100% local (no cloud dependencies)
- ✅ Completely free with unlimited memories
See full competitive analysis →
✨ Features
Multi-Layer Memory Architecture
```
┌──────────────────────────────────────────────────────────────
│ Layer 9: VISUALIZATION (NEW v2.2.0)
│   Interactive dashboard: timeline, search, graph explorer
│   Real-time analytics and visual insights
├──────────────────────────────────────────────────────────────
│ Layer 8: HYBRID SEARCH (NEW v2.2.0)
│   Combines: Semantic + FTS5 + Graph traversal
│   80ms response time with maximum accuracy
├──────────────────────────────────────────────────────────────
│ Layer 7: UNIVERSAL ACCESS
│   MCP + Skills + CLI (works everywhere)
│   16+ IDEs with single database
├──────────────────────────────────────────────────────────────
│ Layer 6: MCP INTEGRATION
│   Model Context Protocol: 6 tools, 4 resources, 2 prompts
│   Auto-configured for Cursor, Windsurf, Claude
├──────────────────────────────────────────────────────────────
│ Layer 5: SKILLS LAYER
│   6 universal slash-commands for AI assistants
│   Compatible with Claude Code, Continue, Cody
├──────────────────────────────────────────────────────────────
│ Layer 4: PATTERN LEARNING
│   Learns: coding style, preferences, terminology
│   "You prefer React over Vue" (73% confidence)
├──────────────────────────────────────────────────────────────
│ Layer 3: KNOWLEDGE GRAPH
│   Auto-clusters: "Auth & Tokens", "Performance", "Testing"
│   Discovers relationships you didn't know existed
├──────────────────────────────────────────────────────────────
│ Layer 2: HIERARCHICAL INDEX
│   Tree structure for fast navigation
│   O(log n) lookups instead of O(n) scans
├──────────────────────────────────────────────────────────────
│ Layer 1: RAW STORAGE
│   SQLite + Full-text search + TF-IDF vectors
│   Compression: 60-96% space savings
└──────────────────────────────────────────────────────────────
```

Knowledge Graph (It's Magic)
```bash
# Build the graph from your memories
python ~/.claude-memory/graph_engine.py build

# Output:
# ✅ Processed 47 memories
# ✅ Created 12 clusters:
#    - "Authentication & Tokens" (8 memories)
#    - "Performance Optimization" (6 memories)
#    - "React Components" (11 memories)
#    - "Database Queries" (5 memories)
#    ...
```

The graph automatically discovers relationships. Ask "what relates to auth?" and get JWT, session management, and token refresh, even if you never tagged them together.
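To make the clustering idea concrete, here is a toy Python sketch that groups memories into clusters with union-find over shared tags. The real engine reportedly uses Leiden clustering over a similarity graph; this stand-in (with invented sample memories) only shows how transitive links pull related items into one cluster:

```python
from collections import defaultdict

def cluster_memories(memories):
    """Cluster memories that share at least one tag (union-find).

    Toy stand-in for graph clustering: items linked through any
    chain of shared tags end up in the same cluster.
    """
    parent = list(range(len(memories)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Index memories by tag, then union everything sharing a tag.
    by_tag = defaultdict(list)
    for i, (_text, tags) in enumerate(memories):
        for tag in tags:
            by_tag[tag].append(i)
    for indices in by_tag.values():
        for j in indices[1:]:
            parent[find(indices[0])] = find(j)

    clusters = defaultdict(list)
    for i, (text, _tags) in enumerate(memories):
        clusters[find(i)].append(text)
    return list(clusters.values())

memories = [
    ("Fixed JWT expiry bug", {"auth", "jwt"}),
    ("Added OAuth login flow", {"auth", "oauth"}),
    ("Indexed the orders table", {"database"}),
]
clusters = cluster_memories(memories)
# Two clusters: the auth-related pair, and the lone database memory.
```

A similarity-graph version would connect memories by embedding distance instead of explicit tags, which is how untagged relationships surface.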
Pattern Learning (It Knows You)
```bash
# Learn patterns from your memories
python ~/.claude-memory/pattern_learner.py update

# Get your coding identity
python ~/.claude-memory/pattern_learner.py context 0.5

# Output:
# Your Coding Identity:
# - Framework preference: React (73% confidence)
# - Style: Performance over readability (58% confidence)
# - Testing: Jest + React Testing Library (65% confidence)
# - API style: REST over GraphQL (81% confidence)
```

Your AI assistant can now match your preferences automatically.
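The confidence figures can be read as simple evidence ratios. Here is a toy model of how a score like "React (73% confidence)" could arise; the function name and scoring rule are our invention, since the actual learner's scoring is not documented here:

```python
from collections import Counter

def preference_confidence(observations):
    """Pick the most frequent choice and score confidence as its share
    of all observations. A deliberately simple stand-in for the real
    pattern learner's scoring."""
    counts = Counter(observations)
    winner, hits = counts.most_common(1)[0]
    return winner, hits / len(observations)

# Framework mentions extracted from saved memories (sample data):
framework_mentions = ["React", "React", "Vue", "React"]
pref, conf = preference_confidence(framework_mentions)
# pref == "React", conf == 0.75
```

As more memories accumulate, the ratio stabilizes, which is why incremental updates are cheap.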
Multi-Profile Support
```bash
# Work profile
superlocalmemoryv2:profile create work --description "Day job"
superlocalmemoryv2:profile switch work

# Personal projects
superlocalmemoryv2:profile create personal
superlocalmemoryv2:profile switch personal

# Client projects (completely isolated)
superlocalmemoryv2:profile create client-acme
```

Each profile has isolated memories, graphs, and patterns. No context bleeding.
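Profile isolation is easy to picture as one database file per profile. A sketch under that assumption — the `ProfileStore` class and file layout are hypothetical, not SuperLocalMemory's actual on-disk format:

```python
import sqlite3
import tempfile
from pathlib import Path

class ProfileStore:
    """One SQLite file per profile, so contexts never mix.

    Illustrative only: class name and schema are invented for this sketch.
    """

    def __init__(self, root: Path, profile: str):
        # Each profile gets its own database file, e.g. work.db, personal.db.
        self.db = sqlite3.connect(str(root / f"{profile}.db"))
        self.db.execute("CREATE TABLE IF NOT EXISTS memories (content TEXT)")

    def remember(self, content: str) -> None:
        self.db.execute("INSERT INTO memories VALUES (?)", (content,))
        self.db.commit()

    def recall_all(self):
        return [row[0] for row in self.db.execute("SELECT content FROM memories")]

root = Path(tempfile.mkdtemp())
work = ProfileStore(root, "work")
personal = ProfileStore(root, "personal")
work.remember("Day-job API uses FastAPI")
# The personal profile sees nothing from work: no context bleeding.
```

Switching profiles then amounts to pointing the CLI at a different file, which is why isolation is total.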
Documentation
| Guide | Description |
|---|---|
| Quick Start | Get running in 5 minutes |
| Installation | Detailed setup instructions |
| Visualization Dashboard | Interactive web UI guide (NEW v2.2.0) |
| CLI Reference | All commands explained |
| Knowledge Graph | How clustering works |
| Pattern Learning | Identity extraction |
| Profiles Guide | Multi-context management |
| API Reference | Python API documentation |
CLI Commands

```bash
# Memory Operations
superlocalmemoryv2:remember "content" --tags tag1,tag2  # Save memory
superlocalmemoryv2:recall "search query"                # Search
superlocalmemoryv2:list                                 # Recent memories
superlocalmemoryv2:status                               # System health

# Profile Management
superlocalmemoryv2:profile list                         # Show all profiles
superlocalmemoryv2:profile create <name>                # New profile
superlocalmemoryv2:profile switch <name>                # Switch context

# Knowledge Graph
python ~/.claude-memory/graph_engine.py build           # Build graph
python ~/.claude-memory/graph_engine.py stats           # View clusters
python ~/.claude-memory/graph_engine.py related --id 5  # Find related

# Pattern Learning
python ~/.claude-memory/pattern_learner.py update       # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5  # Get identity

# Reset (use with caution!)
superlocalmemoryv2:reset soft                           # Clear memories
superlocalmemoryv2:reset hard --confirm                 # Nuclear option
```

Performance
| Metric | Result | Notes |
|---|---|---|
| Hybrid search | 80ms | Semantic + FTS5 + Graph combined |
| Semantic search | 45ms | 3.3x faster than v1 |
| FTS5 search | 30ms | Exact phrase matching |
| Graph build (100 memories) | < 2 seconds | Leiden clustering |
| Pattern learning | < 2 seconds | Incremental updates |
| Dashboard load | < 500ms | 1,000 memories |
| Timeline render | < 300ms | All memories visualized |
| Storage compression | 60-96% reduction | Progressive tiering |
| Memory overhead | < 50MB RAM | Lightweight |
Tested up to 10,000 memories with sub-second search times and linear scaling.
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Areas for contribution:
- Additional pattern categories
- Graph visualization UI
- Integration with more AI assistants
- Performance optimizations
- Documentation improvements
Support This Project
If SuperLocalMemory saves you time, consider supporting its development:
- Star this repo: helps others discover it
- Report bugs: open an issue
- Suggest features: start a discussion
- Buy me a coffee: buymeacoffee.com/varunpratah
- PayPal: paypal.me/varunpratapbhardwaj
- Sponsor: GitHub Sponsors
License
MIT License: use freely, even commercially. Just include the license.
Author
Varun Pratap Bhardwaj, Solution Architect
Building tools that make AI actually useful for developers.
100% local. 100% private. 100% yours.