Memory Mesh MCP Server - AI-powered knowledge management for development teams


Memory Mesh MCP Server


โš ๏ธ Beta Release (v0.x.x): This package is in active development. APIs may change between versions. Not recommended for production use yet.

AI-powered knowledge management through the Model Context Protocol (MCP). Seamlessly create, search, and manage team knowledge fragments directly from your AI assistant conversations.

🚀 Quick Start

1. Initialize Your Project

npx @memory-mesh/mcp-server init

This creates local IDE configuration files and environment templates.

2. Install MCP Server

npx @memory-mesh/mcp-server install \
  --workspace-id your-workspace-uuid \
  --api-token your-access-token

This configures your IDE to connect to Memory Mesh.

3. Start Using

Restart your IDE and start using Memory Mesh tools in your AI assistant!
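
For reference, the MCP server entry written by the install step typically looks something like the following. This is a hypothetical sketch: the exact file name and fields depend on your IDE (for example, Claude Desktop's claude_desktop_config.json or Cursor's .cursor/mcp.json), and the key names shown for the env values are assumptions, not a guaranteed schema.

```json
{
  "mcpServers": {
    "memory-mesh": {
      "command": "npx",
      "args": ["-y", "@memory-mesh/mcp-server"],
      "env": {
        "MEMORY_MESH_ACCESS_TOKEN": "your-access-token",
        "DEFAULT_WORKSPACE_ID": "your-workspace-uuid"
      }
    }
  }
}
```

If your assistant does not list the Memory Mesh tools after restarting, inspecting this file is a good first check.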

📋 Installation Options

Basic Installation

npx @memory-mesh/mcp-server install --workspace-id <uuid> --api-token <token>

Custom Configuration

npx @memory-mesh/mcp-server install \
  --workspace-id <uuid> \
  --api-token <token> \
  --memory-mesh-url https://memory-mesh.com \
  --db-path ~/custom/path/cache.db

Available Options

| Option | Description | Default |
| --- | --- | --- |
| --workspace-id <uuid> | Memory Mesh workspace ID | Required |
| --api-token <token> | Memory Mesh API access token | Required |
| --memory-mesh-url <url> | Memory Mesh API URL | https://memory-mesh.com |
| --db-path <path> | Custom database file path | ~/.memory-mesh-mcp/cache.db |

โš™๏ธ Configuration

Environment Variables

You can also configure the MCP server using environment variables:

| Variable | Type | Description | Default | Required |
| --- | --- | --- | --- | --- |
| MEMORY_MESH_URL | string | Memory Mesh API URL | https://memory-mesh.com | |
| MEMORY_MESH_ACCESS_TOKEN | string | Memory Mesh API access token | - | ✓ |
| DEFAULT_WORKSPACE_ID | string | Default workspace UUID | - | |
| DB_PATH | string | Database file path | ~/.memory-mesh-mcp/cache.db | |
| AUTO_TAGGING | boolean | Enable automatic tagging | true | |
| SEARCH_THRESHOLD | number | Search similarity threshold | 0.7 | |
| LOG_LEVEL | string | Logging level (debug, info, warn, error) | info | |

Manual Configuration

Create a .env.memory-mesh file in your project:

# Memory Mesh Connection (Required)
MEMORY_MESH_URL=https://memory-mesh.com
MEMORY_MESH_ACCESS_TOKEN=your_access_token_here

# Optional Configuration
DEFAULT_WORKSPACE_ID=your_workspace_uuid_here
AUTO_TAGGING=true
SEARCH_THRESHOLD=0.7
LOG_LEVEL=info

# Local Database
DB_PATH=~/.memory-mesh-mcp/cache.db

# WebSocket Resilience Configuration
WS_INITIAL_RETRY_DELAY=1000
WS_MAX_RETRY_DELAY=30000
WS_BACKOFF_MULTIPLIER=1.5
STREAM_BATCH_SIZE=100
CURSOR_UPDATE_INTERVAL=1000
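
To illustrate what the WebSocket retry settings above imply, the reconnect delay grows geometrically from the initial delay and is capped at the maximum. This is a hypothetical helper showing the math, not the server's actual implementation:

```typescript
// Sketch of exponential backoff using the defaults above:
// delay(n) = min(WS_INITIAL_RETRY_DELAY * WS_BACKOFF_MULTIPLIER^n, WS_MAX_RETRY_DELAY)
const WS_INITIAL_RETRY_DELAY = 1000; // ms
const WS_MAX_RETRY_DELAY = 30000;    // ms
const WS_BACKOFF_MULTIPLIER = 1.5;

function retryDelay(attempt: number): number {
  const delay = WS_INITIAL_RETRY_DELAY * WS_BACKOFF_MULTIPLIER ** attempt;
  return Math.min(delay, WS_MAX_RETRY_DELAY);
}
```

With these defaults the first reconnect waits 1s, then 1.5s, 2.25s, and so on, never exceeding 30s between attempts.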

Database Path Configuration

The MCP server stores local cache and stream cursors in a SurrealDB file. You can configure the database location in several ways:

1. Via Command-Line Flag

npx @memory-mesh/mcp-server install \
  --workspace-id <uuid> \
  --api-token <token> \
  --db-path ~/my-project/memory-mesh-cache.db

2. Via Environment Variable

export DB_PATH=~/my-project/memory-mesh-cache.db
npx @memory-mesh/mcp-server

3. Via Configuration File

# .env.memory-mesh
DB_PATH=~/my-project/memory-mesh-cache.db

Default Locations

  • Default: ~/.memory-mesh-mcp/cache.db
  • Custom: Any path you specify
  • Relative paths: Resolved relative to current working directory
  • Tilde expansion: ~ expands to user home directory
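
The resolution rules above could be sketched roughly as follows. This is a hypothetical helper for illustration, not the package's actual code:

```typescript
import { homedir } from "node:os";
import { isAbsolute, join, resolve } from "node:path";

// Apply the documented rules: "~" expands to the user home directory,
// absolute paths are used as-is, relative paths resolve against cwd.
function resolveDbPath(input: string): string {
  if (input === "~" || input.startsWith("~/")) {
    return join(homedir(), input.slice(2));
  }
  return isAbsolute(input) ? input : resolve(process.cwd(), input);
}
```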

🧠 Available Tools

Your AI assistant will have access to these Memory Mesh tools:

create_memory_fragment

Create memory fragments from solved problems and insights:

  • Use when solving problems that should be documented
  • Automatically tagged with repository context
  • Choose appropriate fragment type (knowledge, recipe, solution, template)

search_memory_fragments

Search existing team knowledge:

  • Use when asking about problems or patterns
  • Search before providing generic solutions
  • Include repository context in searches

get_fragment_types

Get available fragment types for the workspace:

  • Use to show available types of knowledge you can create
  • Help choose appropriate type for content

explore_fragment_graph

Explore related knowledge through tag connections:

  • Use when you want to discover related knowledge
  • Follow tag relationships to find connected insights
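
Under the hood, an MCP client invokes these tools over JSON-RPC 2.0 using the standard tools/call method. As a sketch, a request for search_memory_fragments might look like the following; the argument names here are illustrative assumptions, not the tool's actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_memory_fragments",
    "arguments": {
      "workspaceId": "your-workspace-uuid",
      "query": "websocket reconnect strategy"
    }
  }
}
```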

๐Ÿ—๏ธ Architecture

Hybrid Data Flow

  • Fragment Creation: Direct API calls to Memory Mesh service
  • Fragment Search: Fast queries from local SurrealDB cache
  • Real-time Sync: WebSocket streaming keeps cache current
  • Resilient Connection: Automatic retry with exponential backoff

Database Structure

~/.memory-mesh-mcp/
├── cache.db              # SurrealDB file with fragments and stream cursors
└── logs/
    ├── server.log       # MCP server logs
    ├── websocket.log    # WebSocket connection logs
    └── sync.log         # Fragment sync logs

🔧 Development

Requirements

  • Runtime: Bun or Node.js 18+
  • MCP Client: Claude Desktop, Cursor, or other MCP-compatible AI assistant

Local Development

# Clone the repository
git clone https://github.com/flowcore-io/memory-mesh-mcp
cd memory-mesh-mcp

# Install dependencies
bun install

# Run in development mode
bun run dev:server

# Build for production
bun run build

🧪 Testing

We use a comprehensive testing strategy with Bun's built-in test framework to ensure reliability and MCP protocol compliance.

Test Architecture

  • Unit Tests: Individual components and utilities (80% coverage)
  • Integration Tests: API integration with mocked responses (90% coverage)
  • End-to-End Tests: Full MCP server and client communication
  • Protocol Compliance: MCP protocol validation via test client

Running Tests

# Run all tests
bun test

# Run specific test suites
bun run test:unit          # Unit tests only
bun run test:integration   # Integration tests with mocked APIs
bun run test:e2e          # End-to-end MCP protocol tests

# Development testing
bun run test:watch        # Watch mode for development
bun run test:coverage     # Run with coverage reporting
bun run test:debug        # Verbose output for debugging

Test Structure

tests/
├── unit/                 # Unit tests for individual components
│   ├── clients/         # Memory Mesh API client tests
│   ├── utils/           # Repository processing, tagging logic
│   └── config/          # Configuration loading tests
├── integration/         # Integration tests with mocked APIs
│   ├── tools/           # MCP tool tests with mock responses
│   ├── api-integration/ # Memory Mesh API integration tests
│   └── streaming/       # SSE streaming tests
├── e2e/                 # End-to-end MCP protocol tests
│   ├── protocol-compliance/ # MCP protocol validation
│   ├── workflows/       # Complete user workflow tests
│   └── server-lifecycle/ # Server startup and shutdown tests
└── fixtures/            # Test data and mock responses

API Mocking

Tests use MSW (Mock Service Worker) to mock Memory Mesh API responses:

// Example test with mocked API (requests are intercepted by MSW handlers
// registered in the test setup; createFragment is the tool under test)
import { test, expect } from 'bun:test';

test('creates fragment with repository context', async () => {
  // API response is automatically mocked
  const result = await createFragment({
    workspaceId: 'test-workspace',
    title: 'Test Fragment',
    content: 'Test content',
    repository: 'my-project'
  });

  expect(result.success).toBe(true);
  expect(result.tags).toContain('repo:my-project');
});

MCP Protocol Testing

We use a custom test MCP client to validate protocol compliance:

// Example MCP protocol test (TestMCPClient is the project's custom test client)
import { test, expect } from 'bun:test';

test('server provides correct tool schemas', async () => {
  const client = new TestMCPClient();
  await client.start();

  const tools = await client.listTools();
  expect(tools.tools).toHaveLength(4);

  const createTool = tools.tools.find(t => t.name === 'create_memory_fragment');
  expect(createTool.inputSchema.required).toContain('workspaceId');

  await client.stop();
});

Coverage Requirements

  • Overall Coverage: 80% minimum
  • Critical Paths: 90% for MCP tools and API integration
  • Error Handling: 100% for error scenarios
  • Protocol Compliance: 100% for MCP protocol interactions
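
To enforce the overall threshold locally with Bun's test runner, a bunfig.toml along these lines can be used. This is a sketch under the assumption that your Bun version supports these keys; this file is not necessarily present in the repository:

```toml
[test]
coverage = true
coverageThreshold = 0.8
```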

Writing Tests

When adding new features:

  1. Write unit tests for individual functions and utilities
  2. Add integration tests with mocked API responses
  3. Include error scenario testing for edge cases
  4. Add MCP protocol tests for new tools or resources
  5. Update test fixtures with realistic data

CI/CD Integration

Tests run automatically in GitHub Actions:

  • ✅ All commits: Unit and integration tests
  • ✅ Pull requests: Full test suite with coverage reporting
  • ✅ Releases: Complete test validation before publishing

Debugging Tests

# Run single test file
bun test tests/unit/utils/repository.test.ts

# Run tests matching a name pattern
bun test -t "fragment creation"

# Debug with verbose output
bun test --verbose tests/integration/

# Generate coverage report
bun run test:coverage
open coverage/index.html  # View coverage in browser

CI/CD with Blacksmith v2

This project uses Blacksmith v2 containers for faster, more cost-effective CI/CD:

  • 2x faster builds with bare-metal gaming CPUs
  • 4x faster cache downloads with co-located dependency caching
  • 75% cost reduction compared to GitHub-hosted runners
  • Unlimited concurrency for parallel job execution

Blacksmith Runners Used

  • Main CI: blacksmith-4vcpu-ubuntu-2204 (4 vCPU) for comprehensive testing
  • Release Please: blacksmith-4vcpu-ubuntu-2204 (4 vCPU) for release PR creation
  • Build & Publish: blacksmith-4vcpu-ubuntu-2204 (4 vCPU) for production builds and npm publishing

Performance Benefits

  • Node.js matrix testing: 3 versions tested in parallel on high-performance containers
  • Cross-platform builds: Ubuntu builds on Blacksmith, Windows/macOS on standard runners
  • Optimized caching: Uses useblacksmith/setup-node@v5 with Bun caching for faster dependency installation
  • Enhanced observability: Better CI/CD monitoring and debugging through Blacksmith dashboard

Testing Installation

# Test CLI commands
bun run dev:cli

# Test MCP server
bun run dev:server

# Verify installation (for end users)
node scripts/verify-install.js

๐Ÿ” Troubleshooting

Connection Issues

  1. Check your workspace ID: Get it from the Memory Mesh web interface
  2. Verify your API token: Ensure it has access to the workspace
  3. Check logs: Look in ~/.memory-mesh-mcp/logs/ for error details

Database Issues

  1. Permission errors: Ensure the database directory is writable
  2. Disk space: Check available disk space for the database file
  3. Path issues: Verify the database path exists and is accessible

WebSocket Issues

The MCP server automatically handles connection issues with exponential backoff retry. Check the WebSocket logs if you suspect connectivity problems.

📚 Documentation

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

๐Ÿ“ Releases

This project uses Release Please for automated releases based on Conventional Commits.

Release Process

  1. Commit with conventional format:

    feat: add new feature
    fix: resolve bug
    docs: update documentation
  2. Release Please creates PR: When commits are pushed to main, Release Please analyzes the commits and creates a release PR if needed.

  3. Merge release PR: Merging the release PR triggers:

    • Version bump in package.json
    • CHANGELOG.md update
    • GitHub release creation
  4. Automatic publishing: The GitHub release "published" event triggers a separate build workflow that publishes to npm.

Conventional Commit Types

  • feat: → Minor version bump (new features)
  • fix: → Patch version bump (bug fixes)
  • feat!: or BREAKING CHANGE: → Major version bump
  • docs:, chore:, ci:, test: → No version bump (included in changelog)

Current Release

See CHANGELOG.md for detailed changes and the GitHub Releases page for all versions.

📄 License

MIT License - see LICENSE for details.

🆘 Support
