Memory Qdrant MCP
An MCP (Model Context Protocol) server that provides memory management capabilities using the Qdrant vector database for storing and retrieving project context, decisions, progress, and patterns.
Features
- Memory Management: Log and query project memories across different categories
- Vector Search: Semantic search through memory entries using embeddings
- Multiple Providers: Support for Gemini, Ollama, and FastEmbed embedding providers
- MCP Integration: Full MCP stdio server implementation
- REST API: Additional HTTP endpoints for direct access
Installation
Using npx (Recommended)
npx memory-qdrant-mcp

This will download and run the server automatically.
Manual Installation
npm install -g memory-qdrant-mcp
memory-qdrant-mcp

From Source
git clone <repository-url>
cd memory-qdrant-mcp
npm install
npm run build # if needed
node server/index.js

Prerequisites
Qdrant Database: The server requires a running Qdrant instance
Option 1: Simple Docker run
docker run -p 6333:6333 qdrant/qdrant
Option 2: Docker Compose (Recommended for production). Create a docker-compose.yml file:

services:
  qdrant:
    image: qdrant/qdrant:latest
    container_name: qdrant
    restart: unless-stopped
    ports:
      - "6333:6333"
      - "6334:6334"
    environment:
      QDRANT__SERVICE__CORS: "true"
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  qdrant_data:
    driver: local

Then run:
docker-compose up -d
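If you want to confirm Qdrant is reachable before starting the MCP server, a quick check against Qdrant's REST API works. This is a minimal sketch (the check-qdrant.mjs filename is just for illustration), assuming the default URL from the commands above and Node 18+ for the built-in fetch:

```js
// check-qdrant.mjs — illustrative reachability check, not part of this package
const QDRANT_URL = process.env.QDRANT_URL || "http://localhost:6333";

const res = await fetch(`${QDRANT_URL}/collections`); // Qdrant REST endpoint that lists collections
if (!res.ok) throw new Error(`Qdrant responded with HTTP ${res.status}`);

const body = await res.json();
console.log("Qdrant is up. Collections:", body.result.collections);
```

Run it with node check-qdrant.mjs; a connection error here usually means the container is not running or is mapped to a different port.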
Environment Variables: Copy .env.example to .env and configure:

cp .env.example .env

Embedding Provider (choose one):

# Option 1: Google Gemini (default, fast, recommended)
GEMINI_API_KEY=your_gemini_api_key_here

# Option 2: Ollama (local, free, slower)
OLLAMA_BASE_URL=http://localhost:11434

# Option 3: OpenRouter (no embedding models available - use Gemini or Ollama above)

Model Configuration (optional, defaults provided):

EMBEDDING_MODEL=models/text-embedding-004    # Gemini default (use nomic-embed-text:v1.5 for Ollama)
SUMMARIZER_MODEL=openai/gpt-oss-20b:free     # OpenRouter default (use gemini/gemini-2.0-flash-exp for Gemini; any model in Ollama is slow)
DEFAULT_TOP_K_MEMORY_QUERY=3                 # Search result limit

Required:

QDRANT_URL=http://localhost:6333
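These are plain environment variables, so they end up as process.env lookups inside the server. The snippet below is only an illustration of how such a configuration could be read in Node; it is not the package's actual server/config.js, which may be organized differently:

```js
// config.example.mjs — illustrative only, not the package's actual server/config.js
import "dotenv/config"; // loads .env into process.env

export const config = {
  qdrantUrl: process.env.QDRANT_URL ?? "http://localhost:6333",
  geminiApiKey: process.env.GEMINI_API_KEY,      // set to use the Gemini embedding provider
  ollamaBaseUrl: process.env.OLLAMA_BASE_URL,    // set to use the Ollama embedding provider
  embeddingModel: process.env.EMBEDDING_MODEL ?? "models/text-embedding-004",
  summarizerModel: process.env.SUMMARIZER_MODEL ?? "openai/gpt-oss-20b:free",
  topK: Number(process.env.DEFAULT_TOP_K_MEMORY_QUERY ?? 3),
};
```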
MCP Configuration
For VSCode GitHub Copilot
Create or update the MCP settings file at:
- Windows: %APPDATA%\Code\User\globalStorage\github.copilot-chat\settings\mcp.json
- macOS: ~/Library/Application Support/Code/User/globalStorage/github.copilot-chat/settings/mcp.json
- Linux: ~/.config/Code/User/globalStorage/github.copilot-chat/settings/mcp.json
Add the following configuration:
{
  "mcpServers": {
    "memory-qdrant-mcp": {
      "command": "npx",
      "args": ["memory-qdrant-mcp"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "GEMINI_API_KEY": "your_gemini_api_key_here",
        "OPENROUTER_API_KEY": "your_openrouter_api_key_here",
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "EMBEDDING_MODEL": "models/text-embedding-004",
        "DEFAULT_TOP_K_MEMORY_QUERY": "3",
        "SUMMARIZER_MODEL": "openai/gpt-oss-20b:free"
      }
    }
  }
}

For Roo
Add to your Roo MCP settings:
{
  "mcpServers": {
    "memory-qdrant-mcp": {
      "command": "npx",
      "args": ["memory-qdrant-mcp"]
    }
  }
}

Available Tools
log_memory
Log a memory entry to the vector database.
Parameters:
- project_name (string): Name of the project
- memory_type (string): Type of memory (productContext, activeContext, systemPatterns, decisionLog, progress)
- content (string): Content to log
- top_level_id (string, optional): Optional top-level ID
query_memory
Query memory entries from the vector database.
Parameters:
- project_name (string): Name of the project
- query_text (string): Query text for semantic search
- memory_type (string, optional): Optional memory type filter
- top_k (number, optional): Number of results to return (default: 3)
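As a usage illustration, any MCP client can call these tools over stdio. The sketch below assumes the @modelcontextprotocol/sdk client package and a hypothetical project name demo-project; the tool names and argument shapes come from the lists above:

```js
// mcp-client-example.mjs — illustrative client session (assumes @modelcontextprotocol/sdk is installed)
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server the same way the editor configs above do
// (pass an env option here if the server needs API keys that are not in your shell environment)
const transport = new StdioClientTransport({ command: "npx", args: ["memory-qdrant-mcp"] });
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Log a memory entry
await client.callTool({
  name: "log_memory",
  arguments: {
    project_name: "demo-project",
    memory_type: "decisionLog",
    content: "Chose Qdrant as the vector store for project memory.",
  },
});

// Query it back via semantic search
const result = await client.callTool({
  name: "query_memory",
  arguments: { project_name: "demo-project", query_text: "vector store decision", top_k: 3 },
});
console.log(result.content);

await client.close();
```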
log_decision
Log a decision entry.
Parameters:
- project_name (string): Name of the project
- decision_text (string): Decision text
- top_level_id (string, optional): Optional top-level ID
log_progress
Log a progress entry.
Parameters:
- project_name (string): Name of the project
- progress_text (string): Progress text
- top_level_id (string, optional): Optional top-level ID
summarize_text
Summarize the given text.
Parameters:
- text (string): Text to summarize
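The remaining tools follow the same call shape. Continuing the hypothetical client session from the example above:

```js
// Log a decision and a progress entry
await client.callTool({
  name: "log_decision",
  arguments: { project_name: "demo-project", decision_text: "Use FastEmbed when running fully offline." },
});
await client.callTool({
  name: "log_progress",
  arguments: { project_name: "demo-project", progress_text: "Bootstrapped the Qdrant collections." },
});

// Summarize arbitrary text with the configured summarizer model
const summary = await client.callTool({
  name: "summarize_text",
  arguments: { text: "Long meeting notes or a large diff to condense..." },
});
console.log(summary.content);
```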
Publishing to npm
To publish your own version:
1. Update package.json with your information:
   - Change name to a unique package name
   - Update author, repository, homepage
   - Ensure the version is appropriate
2. Log in to npm:

   npm login

3. Publish:

   npm publish

Users can then install and run:

npx your-package-name
Development
Project Structure
memory-qdrant-mcp/
├── server/
│   ├── index.js              # Main MCP server
│   ├── mcp_tools/            # MCP tool implementations
│   │   ├── memoryBankTools.js
│   │   ├── store.js
│   │   └── summarizer.js
│   ├── embeddings/           # Embedding providers
│   │   ├── providerBase.js
│   │   ├── geminiVertex.js
│   │   ├── ollama.js
│   │   └── fastEmbed.js
│   └── config.js             # Configuration
├── memory-bank/              # Project documentation
├── package.json
└── README.md

Adding New Tools
- Implement the tool in server/mcp_tools/ (see the sketch below)
- Register it in server/index.js
- Update this README
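The sketch below shows roughly what a new tool module could look like. The file name, tool name, and handler shape are hypothetical; how server/index.js actually registers tools may differ, so adapt it to the existing modules in server/mcp_tools/:

```js
// server/mcp_tools/pingStore.js — hypothetical example, not part of the package
export const pingStoreTool = {
  name: "ping_memory_store", // hypothetical tool name
  description: "Check that the Qdrant-backed memory store is reachable for a project.",
  inputSchema: {
    type: "object",
    properties: {
      project_name: { type: "string", description: "Name of the project" },
    },
    required: ["project_name"],
  },
  // MCP tool results are returned as a list of content blocks
  async handler({ project_name }) {
    return {
      content: [{ type: "text", text: `Memory store reachable for ${project_name}` }],
    };
  },
};
```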
License
MIT