# Memory Mesh MCP Server

> ⚠️ **Beta Release (v0.x.x):** This package is in active development. APIs may change between versions. Not yet recommended for production use.

AI-powered knowledge management through the Model Context Protocol (MCP). Create, search, and manage team knowledge fragments directly from your AI assistant conversations.
## 🚀 Quick Start

### 1. Initialize Your Project

```bash
npx @memory-mesh/mcp-server init
```

This creates local IDE configuration files.

### 2. Install the MCP Server

```bash
npx @memory-mesh/mcp-server install \
  --workspace-id your-workspace-uuid \
  --api-token your-access-token
```

This configures your IDE's `mcp.json` to connect to Memory Mesh.

### 3. Start Using

Restart your IDE and start using the Memory Mesh tools in your AI assistant!
## 📦 Installation Options

### Basic Installation

```bash
npx @memory-mesh/mcp-server install --workspace-id <uuid> --api-token <token>
```

### Custom Configuration

```bash
npx @memory-mesh/mcp-server install \
  --workspace-id <uuid> \
  --api-token <token> \
  --memory-mesh-url https://memory-mesh.com \
  --db-path ~/custom/path/cache.db
```

### Available Options

| Option | Description | Default |
|---|---|---|
| `--workspace-id <uuid>` | Memory Mesh workspace ID | Required |
| `--api-token <token>` | Memory Mesh API access token | Required |
| `--memory-mesh-url <url>` | Memory Mesh API URL | `https://memory-mesh.com` |
| `--db-path <path>` | Custom database file path | `~/.memory-mesh-mcp/cache.db` |
## ⚙️ Configuration

### MCP Configuration (Recommended)

The `install` command automatically configures your IDE's `mcp.json` file:

```json
{
  "mcpServers": {
    "memory-mesh": {
      "command": "npx",
      "args": ["@memory-mesh/mcp-server"],
      "env": {
        "MEMORY_MESH_URL": "https://memory-mesh.com",
        "MEMORY_MESH_ACCESS_TOKEN": "your-access-token",
        "DEFAULT_WORKSPACE_ID": "your-workspace-uuid",
        "DB_PATH": "~/.memory-mesh-mcp/cache.db"
      }
    }
  }
}
```

### Configuration Parameters

| Parameter | Type | Description | Default | Required |
|---|---|---|---|---|
| `MEMORY_MESH_URL` | string | Memory Mesh API URL | `https://memory-mesh.com` | |
| `MEMORY_MESH_ACCESS_TOKEN` | string | Memory Mesh API access token | - | ✅ |
| `DEFAULT_WORKSPACE_ID` | string | Default workspace UUID | - | |
| `DB_PATH` | string | Database file path | `~/.memory-mesh-mcp/cache.db` | |
| `AUTO_TAGGING` | boolean | Enable automatic tagging | `true` | |
| `SEARCH_THRESHOLD` | number | Search similarity threshold | `0.7` | |
| `LOG_LEVEL` | string | Logging level (`debug`, `info`, `warn`, `error`) | `info` | |
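For orientation, here is a minimal sketch of how these variables could be read and validated at startup. It illustrates the table above; the package's actual config loader may differ:

```typescript
// Hypothetical config loader sketch -- mirrors the parameters documented
// above, not the package's actual implementation.
interface ServerConfig {
  memoryMeshUrl: string;
  accessToken: string;
  defaultWorkspaceId?: string;
  dbPath: string;
  autoTagging: boolean;
  searchThreshold: number;
  logLevel: "debug" | "info" | "warn" | "error";
}

function loadConfig(env: NodeJS.ProcessEnv = process.env): ServerConfig {
  const accessToken = env.MEMORY_MESH_ACCESS_TOKEN;
  if (!accessToken) {
    // The only strictly required parameter per the table above.
    throw new Error("MEMORY_MESH_ACCESS_TOKEN is required");
  }
  return {
    memoryMeshUrl: env.MEMORY_MESH_URL ?? "https://memory-mesh.com",
    accessToken,
    defaultWorkspaceId: env.DEFAULT_WORKSPACE_ID,
    dbPath: env.DB_PATH ?? "~/.memory-mesh-mcp/cache.db",
    autoTagging: (env.AUTO_TAGGING ?? "true") !== "false",
    searchThreshold: Number(env.SEARCH_THRESHOLD ?? "0.7"),
    logLevel: (env.LOG_LEVEL as ServerConfig["logLevel"]) ?? "info",
  };
}
```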
### Multiple Workspace Configuration

You can configure multiple Memory Mesh workspaces for different projects:

```json
{
  "mcpServers": {
    "memory-mesh-personal": {
      "command": "npx",
      "args": ["@memory-mesh/mcp-server"],
      "env": {
        "MEMORY_MESH_ACCESS_TOKEN": "personal-token",
        "DEFAULT_WORKSPACE_ID": "personal-workspace-uuid"
      }
    },
    "memory-mesh-work": {
      "command": "npx",
      "args": ["@memory-mesh/mcp-server"],
      "env": {
        "MEMORY_MESH_ACCESS_TOKEN": "work-token",
        "DEFAULT_WORKSPACE_ID": "work-workspace-uuid"
      }
    }
  }
}
```

### Database Path Configuration

The MCP server stores its local cache and stream cursors in a SurrealDB file. You can configure the database location:
#### 1. Via Install Command (Recommended)

```bash
npx @memory-mesh/mcp-server install \
  --workspace-id <uuid> \
  --api-token <token> \
  --db-path ~/my-project/memory-mesh-cache.db
```

#### 2. Via MCP Configuration

```json
{
  "mcpServers": {
    "memory-mesh": {
      "command": "npx",
      "args": ["@memory-mesh/mcp-server"],
      "env": {
        "MEMORY_MESH_ACCESS_TOKEN": "your-token",
        "DEFAULT_WORKSPACE_ID": "your-workspace-uuid",
        "DB_PATH": "~/my-project/memory-mesh-cache.db"
      }
    }
  }
}
```

#### Default Locations

- **Default:** `~/.memory-mesh-mcp/cache.db`
- **Custom:** Any path you specify
- **Relative paths:** Resolved relative to the current working directory
- **Tilde expansion:** `~` expands to the user home directory (see the sketch below)
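The resolution rules above amount to a little path handling. A minimal sketch using Node's `os` and `path` modules (hypothetical helper, not the package's actual code):

```typescript
import { homedir } from "node:os";
import { isAbsolute, join, resolve } from "node:path";

// Resolve a DB_PATH value: expand a leading "~" to the home directory,
// then resolve relative paths against the current working directory.
function resolveDbPath(dbPath: string): string {
  if (dbPath === "~" || dbPath.startsWith("~/")) {
    return join(homedir(), dbPath.slice(1));
  }
  return isAbsolute(dbPath) ? dbPath : resolve(process.cwd(), dbPath);
}

// Example: resolveDbPath("~/my-project/memory-mesh-cache.db")
// -> "/home/alice/my-project/memory-mesh-cache.db" (on Linux)
```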
## 🔧 Available Tools

Your AI assistant will have access to these Memory Mesh tools:

### create_memory_fragment

Create memory fragments from solved problems and insights:

- Use when solving problems that should be documented
- Automatically tagged with repository context
- Choose an appropriate fragment type (knowledge, recipe, solution, template)

### search_memory_fragments

Search existing team knowledge:

- Use when asked about problems or patterns
- Search before providing generic solutions
- Include repository context in searches

### get_fragment_types

Get the available fragment types for the workspace:

- Use to show the types of knowledge that can be created
- Helps choose the appropriate type for content

### explore_fragment_graph

Explore related knowledge through tag connections:

- Use to discover related knowledge
- Follow tag relationships to find connected insights
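All four tools are invoked through the standard MCP `tools/call` request. As a shape reference only, a `create_memory_fragment` call might look like the following; the argument names are taken from the test examples later in this README, and the authoritative schema is the tool's own `inputSchema`:

```typescript
// Illustrative JSON-RPC payload for an MCP tools/call request.
// Field values are examples; consult the tool's inputSchema for
// the exact argument shape.
const createFragmentRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "create_memory_fragment",
    arguments: {
      workspaceId: "your-workspace-uuid",
      title: "Fix flaky WebSocket reconnect",
      content: "Increase the initial backoff delay to 500ms...",
      repository: "my-project",
    },
  },
};
```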
## 🏗️ Architecture

### Hybrid Data Flow

- **Fragment Creation:** Direct API calls to the Memory Mesh service
- **Fragment Search:** Fast queries against the local SurrealDB cache
- **Real-time Sync:** WebSocket streaming keeps the cache current
- **Resilient Connection:** Automatic retry with exponential backoff (sketched below)
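The reconnect behavior is the familiar exponential-backoff pattern. A minimal, self-contained sketch of the idea (not the package's actual reconnect code):

```typescript
// Reconnect with exponential backoff: double the delay after each
// failure, capped at a maximum, until the connection succeeds.
async function connectWithBackoff(
  connect: () => Promise<void>,
  baseDelayMs = 500,
  maxDelayMs = 30_000,
): Promise<void> {
  let delay = baseDelayMs;
  for (;;) {
    try {
      await connect();
      return; // connected; streaming resumes from the stored cursor
    } catch {
      await new Promise((r) => setTimeout(r, delay));
      delay = Math.min(delay * 2, maxDelayMs);
    }
  }
}
```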
### Database Structure

```
~/.memory-mesh-mcp/
├── cache.db          # SurrealDB file with fragments and stream cursors
└── logs/
    ├── server.log    # MCP server logs
    ├── websocket.log # WebSocket connection logs
    └── sync.log      # Fragment sync logs
```

## 🔧 Development
### Requirements

- **Runtime:** Bun or Node.js 18+
- **MCP Client:** Claude Desktop, Cursor, or another MCP-compatible AI assistant

### Local Development

```bash
# Clone the repository
git clone https://github.com/flowcore-io/memory-mesh-mcp
cd memory-mesh-mcp

# Install dependencies
bun install

# Run in development mode
bun run dev:server

# Build for production
bun run build
```

## 🧪 Testing
We use a comprehensive testing strategy with Bun's built-in test framework to ensure reliability and MCP protocol compliance.
### Test Architecture

- **Unit Tests:** Individual components and utilities (80% coverage)
- **Integration Tests:** API integration with mocked responses (90% coverage)
- **End-to-End Tests:** Full MCP server and client communication
- **Protocol Compliance:** MCP protocol validation via a test client
### Running Tests

```bash
# Run all tests
bun test

# Run specific test suites
bun run test:unit        # Unit tests only
bun run test:integration # Integration tests with mocked APIs
bun run test:e2e         # End-to-end MCP protocol tests

# Development testing
bun run test:watch    # Watch mode for development
bun run test:coverage # Run with coverage reporting
bun run test:debug    # Verbose output for debugging
```

### Test Structure
```
tests/
├── unit/                     # Unit tests for individual components
│   ├── clients/              # Memory Mesh API client tests
│   ├── utils/                # Repository processing, tagging logic
│   └── config/               # Configuration loading tests
├── integration/              # Integration tests with mocked APIs
│   ├── tools/                # MCP tool tests with mock responses
│   ├── api-integration/      # Memory Mesh API integration tests
│   └── streaming/            # SSE streaming tests
├── e2e/                      # End-to-end MCP protocol tests
│   ├── protocol-compliance/  # MCP protocol validation
│   ├── workflows/            # Complete user workflow tests
│   └── server-lifecycle/     # Server startup and shutdown tests
└── fixtures/                 # Test data and mock responses
```

### API Mocking
Tests use MSW (Mock Service Worker) to mock Memory Mesh API responses:

```typescript
// Example test with a mocked API
test('creates fragment with repository context', async () => {
  // API response is automatically mocked
  const result = await createFragment({
    workspaceId: 'test-workspace',
    title: 'Test Fragment',
    content: 'Test content',
    repository: 'my-project'
  });

  expect(result.success).toBe(true);
  expect(result.tags).toContain('repo:my-project');
});
```
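The mocking itself uses standard MSW request handlers. A sketch of what that registration can look like, assuming MSW v2's `http`/`HttpResponse` API; the endpoint URL and response shape here are illustrative:

```typescript
import { http, HttpResponse } from "msw";
import { setupServer } from "msw/node";

// Intercept fragment-creation calls and return a canned response.
// The URL and response body are assumptions for illustration.
const server = setupServer(
  http.post("https://memory-mesh.com/api/fragments", () =>
    HttpResponse.json({
      success: true,
      tags: ["repo:my-project"],
    }),
  ),
);

server.listen(); // call before tests run; server.close() afterwards
```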
### MCP Protocol Testing

We use a custom test MCP client to validate protocol compliance:

```typescript
// Example MCP protocol test
test('server provides correct tool schemas', async () => {
  const client = new TestMCPClient();
  await client.start();

  const tools = await client.listTools();
  expect(tools.tools).toHaveLength(4);

  const createTool = tools.tools.find(t => t.name === 'create_memory_fragment');
  expect(createTool.inputSchema.required).toContain('workspaceId');

  await client.stop();
});
```

### Coverage Requirements
- **Overall Coverage:** 80% minimum
- **Critical Paths:** 90% for MCP tools and API integration
- **Error Handling:** 100% for error scenarios
- **Protocol Compliance:** 100% for MCP protocol interactions
### Writing Tests

When adding new features:

- Write unit tests for individual functions and utilities (a minimal skeleton is sketched below)
- Add integration tests with mocked API responses
- Include error-scenario testing for edge cases
- Add MCP protocol tests for new tools or resources
- Update test fixtures with realistic data
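As a starting point, here is a skeleton unit test using Bun's built-in test framework. The utility under test is a hypothetical example, defined inline so the sketch runs as-is:

```typescript
import { describe, expect, test } from "bun:test";

// Hypothetical utility under test -- in a real test this would be
// imported from src/utils/; defined inline here so the sketch runs.
function generateRepoTags(repo: string): string[] {
  return repo ? [`repo:${repo}`] : [];
}

describe("generateRepoTags", () => {
  test("prefixes the repository name with repo:", () => {
    expect(generateRepoTags("my-project")).toContain("repo:my-project");
  });

  test("handles empty input", () => {
    expect(generateRepoTags("")).toEqual([]);
  });
});
```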
### CI/CD Integration

Tests run automatically in GitHub Actions:

- ✅ **All commits:** Unit and integration tests
- ✅ **Pull requests:** Full test suite with coverage reporting
- ✅ **Releases:** Complete test validation before publishing
Debugging Tests
# Run single test file
bun test tests/unit/utils/repository.test.ts
# Run tests matching pattern
bun test --grep "fragment creation"
# Debug with verbose output
bun test --verbose tests/integration/
# Generate coverage report
bun run test:coverage
open coverage/index.html # View coverage in browserCI/CD with Blacksmith v2
This project uses Blacksmith v2 containers for faster, more cost-effective CI/CD:

- **2x faster builds** with bare-metal gaming CPUs
- **4x faster cache downloads** with co-located dependency caching
- **75% cost reduction** compared to GitHub-hosted runners
- **Unlimited concurrency** for parallel job execution

### Blacksmith Runners Used

- **Main CI:** `blacksmith-4vcpu-ubuntu-2204` (4 vCPU) for comprehensive testing
- **Release Please:** `blacksmith-4vcpu-ubuntu-2204` (4 vCPU) for release PR creation
- **Build & Publish:** `blacksmith-4vcpu-ubuntu-2204` (4 vCPU) for production builds and npm publishing
### Performance Benefits

- **Node.js matrix testing:** 3 versions tested in parallel on high-performance containers
- **Cross-platform builds:** Ubuntu builds on Blacksmith; Windows/macOS on standard runners
- **Optimized caching:** Uses `useblacksmith/setup-node@v5` with Bun caching for faster dependency installation
- **Enhanced observability:** Better CI/CD monitoring and debugging through the Blacksmith dashboard
Testing Installation
# Test CLI commands
bun run dev:cli
# Test MCP server
bun run dev:server
# Verify installation (for end users)
node scripts/verify-install.js๐ Troubleshooting
### Connection Issues

- **Check your workspace ID:** Get it from the Memory Mesh web interface
- **Verify your API token:** Ensure it has access to the workspace
- **Check the logs:** Look in `~/.memory-mesh-mcp/logs/` for error details

### Database Issues

- **Permission errors:** Ensure the database directory is writable (see the snippet below)
- **Disk space:** Check available disk space for the database file
- **Path issues:** Verify the database path exists and is accessible
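A quick programmatic writability check (a standalone snippet, not part of the package):

```typescript
import { accessSync, constants } from "node:fs";
import { dirname } from "node:path";

// Throws if the directory containing the DB file is not writable.
function assertDbDirWritable(dbPath: string): void {
  accessSync(dirname(dbPath), constants.W_OK);
}

// Example: assertDbDirWritable("/home/alice/.memory-mesh-mcp/cache.db");
```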
### WebSocket Issues
The MCP server automatically handles connection issues with exponential backoff retry. Check the WebSocket logs if you suspect connectivity problems.
## 📚 Documentation
## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## 🚀 Releases

This project uses Release Please for automated releases based on Conventional Commits.

### Release Process

1. **Commit with conventional format:**

   ```
   feat: add new feature
   fix: resolve bug
   docs: update documentation
   ```

2. **Release Please creates a PR:** When commits are pushed to `main`, Release Please analyzes the commits and creates a release PR if needed.

3. **Merge the release PR:** Merging it triggers:
   - Version bump in `package.json`
   - `CHANGELOG.md` update
   - GitHub release creation

4. **Automatic publishing:** The GitHub release `published` event triggers a separate build workflow for npm publishing.

### Conventional Commit Types

- `feat:` → Minor version bump (new features)
- `fix:` → Patch version bump (bug fixes)
- `feat!:` or `BREAKING CHANGE:` → Major version bump
- `docs:`, `chore:`, `ci:`, `test:` → No version bump (included in changelog)
### Current Release
See CHANGELOG.md for detailed changes and Releases for all versions.
## 📄 License
MIT License - see LICENSE for details.
## 💬 Support

- **Issues:** GitHub Issues
- **Discussions:** GitHub Discussions
- **Documentation:** Memory Mesh Docs