Universal RAG MCP
Intelligent cross-platform memory system for AI assistants via Model Context Protocol
Give your AI assistants a persistent, searchable memory that works across Claude, ChatGPT, Gemini, and more. Features intelligent chunking, automatic deduplication, and semantic search.
✨ Features
- 🧠 Smart Memory - Intelligent chunking filters noise and keeps only the important information
- 🔍 Semantic Search - Find information by meaning, not just keywords, with automatic MMR reranking
- 🚫 Auto-Deduplication - Tracks mention counts instead of saving duplicates
- 🌐 Cross-Platform - The same memory across Claude Desktop, ChatGPT, Gemini, and more
- ⚡ Fast - 2-second indexing and sub-100ms searches via an in-memory cache with hot/warm/cold storage tiers
- 🎯 Accurate - Multi-question support with parallel searches
- 🔒 Your Data - You control it: stored in your own Firebase/Pinecone accounts
- 🪄 Zero Config - A 5-minute setup wizard handles everything
Quick Start
1. Install
```bash
npm install -g universal-rag-mcp
```
2. Setup (5 minutes)
```bash
universal-rag-mcp init
```
You'll need 3 free API keys:
- Firebase (database) - https://console.firebase.google.com
- Pinecone (vector search) - https://app.pinecone.io/signup
- Voyage AI (embeddings) - https://dash.voyageai.com/api-keys
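The wizard creates the config and registers the MCP server with your client for you. For reference, a Claude Desktop registration in claude_desktop_config.json looks roughly like this (the "universal-rag" server name is illustrative; the exact entry the wizard writes may differ):
```json
{
  "mcpServers": {
    "universal-rag": {
      "command": "universal-rag-mcp"
    }
  }
}
```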
3. Use
Restart Claude Desktop (or your AI platform), then:
```
You: Remember that I love TypeScript
AI: I'll remember that!

You: What programming languages do I like?
AI: Based on your memory, you love TypeScript!
```
How It Works
- Add to memory: Conversations are automatically chunked and embedded
- Search memory: Semantic search finds relevant context from past conversations
- Storage tiers (see the sketch after this list):
- Hot: Last 10 queries (5ms access)
- Warm: Recent conversations (50-200ms)
- Cold: Old conversations (200-500ms)
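Here is a minimal sketch of how such a tiered lookup can be layered; the class and method names are hypothetical, not this package's API:
```typescript
// Hypothetical sketch of a hot/warm/cold tiered memory lookup.
type Chunk = { id: string; text: string; score: number };

class TieredMemory {
  // Hot tier: results of the last 10 queries, ~5ms access.
  private hot = new Map<string, Chunk[]>();
  private static readonly HOT_LIMIT = 10;

  async search(query: string): Promise<Chunk[]> {
    // 1. Hot: the same query was asked recently.
    const cached = this.hot.get(query);
    if (cached) return cached;

    // 2. Warm: recent conversations (50-200ms).
    let results = await this.searchWarm(query);

    // 3. Cold: the full archive (200-500ms).
    if (results.length === 0) results = await this.searchCold(query);

    // Populate the hot tier; Map preserves insertion order, so the
    // first key is the oldest entry and gets evicted past the limit.
    this.hot.set(query, results);
    if (this.hot.size > TieredMemory.HOT_LIMIT) {
      this.hot.delete(this.hot.keys().next().value!);
    }
    return results;
  }

  private async searchWarm(_query: string): Promise<Chunk[]> { return []; } // recent data
  private async searchCold(_query: string): Promise<Chunk[]> { return []; } // full store
}
```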
Architecture
```
┌─────────────────┐
│   AI Platform   │  (Claude, ChatGPT, etc.)
└────────┬────────┘
         │ MCP Protocol
┌────────▼────────┐
│    This Tool    │
└────────┬────────┘
         │
    ┌────┴────┬────────┬──────────┐
    ▼         ▼        ▼          ▼
Firebase  Pinecone  Voyage   In-Memory
 (data)   (vectors) (embed)   (cache)
```
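For orientation, this is roughly how an MCP server exposes tools over stdio using the official TypeScript SDK. It is a sketch under that assumption, not this package's actual source; searchMemory here is a placeholder:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder for the real embed -> vector search -> fetch pipeline.
async function searchMemory(query: string, limit: number): Promise<string[]> {
  return [];
}

const server = new McpServer({ name: "universal-rag-mcp", version: "1.0.0" });

// Register a tool; the MCP client (Claude Desktop, etc.) discovers it automatically.
server.tool(
  "search_memory",
  { query: z.string(), limit: z.number().optional() },
  async ({ query, limit }) => {
    const hits = await searchMemory(query, limit ?? 5);
    return { content: [{ type: "text", text: JSON.stringify(hits) }] };
  }
);

// Talk to the host application over stdin/stdout.
await server.connect(new StdioServerTransport());
```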
API Keys Needed
Mandatory (3)
Firebase Firestore - Store conversation data
- Free tier: 1GB storage, 50K reads/day
- Setup: 2 minutes
Pinecone - Vector search
- Free tier: 5M vectors
- Setup: 1 minute
Voyage AI - Generate embeddings
- Free tier: 10M tokens
- Setup: 1 minute
Optional (0)
None! We use built-in MMR reranking and in-memory caching instead of Cohere/Redis.
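MMR (Maximal Marginal Relevance) reranks results by trading relevance against redundancy, which is what removes the need for a separate reranking service. A generic sketch of the standard algorithm (not this package's exact implementation):
```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Returns indices of `docs` in reranked order.
// lambda = 1 is pure relevance; lambda = 0 is pure diversity.
function mmr(queryVec: number[], docs: number[][], k: number, lambda = 0.7): number[] {
  const selected: number[] = [];
  const candidates = new Set(docs.map((_, i) => i));

  while (selected.length < k && candidates.size > 0) {
    let best = -1;
    let bestScore = -Infinity;
    for (const i of candidates) {
      const relevance = cosine(queryVec, docs[i]);
      // Penalize similarity to anything already selected.
      const redundancy = selected.length > 0
        ? Math.max(...selected.map((j) => cosine(docs[i], docs[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        best = i;
      }
    }
    selected.push(best);
    candidates.delete(best);
  }
  return selected;
}
```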
Commands
```bash
# Initial setup
universal-rag-mcp init

# Check status
universal-rag-mcp status

# Show config location
universal-rag-mcp config
```
MCP Tools
search_memory
Search your conversation history:
```json
{
  "query": "What do I like?",
  "limit": 5,
  "minScore": 0.6,
  "platform": "claude"
}
```
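Internally, a search like this boils down to: embed the query, find the nearest vectors, then hydrate the matching records. A sketch using the public Voyage REST API and Pinecone TypeScript client (the model and index names are illustrative):
```typescript
import { Pinecone } from "@pinecone-database/pinecone";

async function searchMemory(query: string, topK = 5) {
  // 1. Embed the query with Voyage AI.
  const res = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VOYAGE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ input: [query], model: "voyage-3" }),
  });
  const { data } = await res.json();
  const vector: number[] = data[0].embedding;

  // 2. Find the nearest chunks in the Pinecone index.
  const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pc.index("rag-your-id"); // the indexName from config.json
  const results = await index.query({ vector, topK, includeMetadata: true });

  // 3. Match metadata points back at the full records in Firestore.
  return results.matches;
}
```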
add_to_memory
Store new content:
```json
{
  "content": "I love TypeScript",
  "platform": "claude",
  "role": "user"
}
```
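If you want to drive these tools programmatically rather than through a chat client, the MCP TypeScript SDK's client can call them directly; a sketch (the callTool shape assumes a current SDK version):
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a child process and speak MCP over stdio.
const transport = new StdioClientTransport({ command: "universal-rag-mcp" });
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Store a memory, then search for it.
await client.callTool({
  name: "add_to_memory",
  arguments: { content: "I love TypeScript", platform: "claude", role: "user" },
});
const found = await client.callTool({
  name: "search_memory",
  arguments: { query: "What do I like?", limit: 5 },
});
console.log(found);
```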
Configuration
Config stored at: ~/.config/universal-rag-mcp/config.json
```json
{
  "userId": "your-email@example.com",
  "firebase": {
    "projectId": "your-project",
    "serviceAccountPath": "/path/to/key.json"
  },
  "pinecone": {
    "apiKey": "your-key",
    "indexName": "rag-your-id"
  },
  "voyage": {
    "apiKey": "your-key"
  }
}
```
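Loading and sanity-checking this file from code is straightforward; a sketch (the Config interface below mirrors the JSON above and is our naming, not the package's):
```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Mirrors the config.json layout shown above.
interface Config {
  userId: string;
  firebase: { projectId: string; serviceAccountPath: string };
  pinecone: { apiKey: string; indexName: string };
  voyage: { apiKey: string };
}

export function loadConfig(): Config {
  const path = join(homedir(), ".config", "universal-rag-mcp", "config.json");
  const config = JSON.parse(readFileSync(path, "utf8")) as Config;
  if (!config.pinecone?.apiKey || !config.voyage?.apiKey) {
    throw new Error(`Missing API keys in ${path}; run "universal-rag-mcp init"`);
  }
  return config;
}
```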
Development
```bash
# Clone repo
git clone https://github.com/yourusername/universal-rag-mcp
cd universal-rag-mcp

# Install dependencies
npm install

# Build
npm run build

# Run
npm start
```
License
MIT