mirror-mcp
A Model Context Protocol (MCP) server that provides a reflect tool, enabling LLMs to engage in self-reflection and introspection through recursive questioning and MCP sampling.
Overview
mirror-mcp allows AI models to "look at themselves" by providing a reflection mechanism. When an LLM uses the reflect tool, it can pose questions to itself and receive answers through the Model Context Protocol's sampling capabilities. This creates a powerful feedback loop for self-analysis, reasoning validation, and iterative problem-solving.
Features
- Self-Reflection Tool: Enables LLMs to ask themselves questions and receive computed responses
- MCP Sampling Integration: Uses the Model Context Protocol's sampling mechanism for responses
- npm Installable: Easy installation and deployment
- Lightweight: Minimal dependencies and fast startup
- Configurable: Customizable reflection parameters and sampling options
Installation
MCP Host Configuration
For MCP-compatible clients, add the following server configuration:
```json
{
  "type": "stdio",
  "command": "npx",
  "args": ["mirror-mcp@latest"]
}
```
Via npm
```shell
npm install -g mirror-mcp
```
Via npx (no installation required)
```shell
npx mirror-mcp
```
From Source
```shell
git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run build
npm start
```
API Reference
Tools
reflect
Enables the LLM to ask itself a question and receive a response through MCP sampling. The tool supports custom system and user prompts, letting the LLM direct the kind of response it receives.
Self-Direction with Custom Prompts:
- System Prompt: Define the role or perspective for the reflection (e.g., "expert coach", "critical thinker", "creative problem solver")
- User Prompt: Specify the format, structure, or focus of the reflection response
- Default Behavior: When no custom prompts are provided, uses built-in reflection guidance focused on strengths, weaknesses, assumptions, and alternative perspectives
Parameters:
- question (string, required): The question the LLM wants to ask itself
- context (string, optional): Additional context for the reflection
- system_prompt (string, optional): Custom system prompt to direct the reflection approach
- user_prompt (string, optional): Custom user prompt to replace the default reflection instructions
- max_tokens (number, optional): Maximum tokens for the response (default: 500)
- temperature (number, optional): Sampling temperature (default: 0.8)
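The parameter defaults above can be applied before a call is dispatched. The following sketch is illustrative only (the helper name and shapes are assumptions, not the package's actual internals); it shows the documented defaults for `max_tokens` and `temperature` being filled in:

```typescript
// Illustrative sketch: normalizing reflect-tool arguments with the
// documented defaults. Not part of the published mirror-mcp API.
interface ReflectArgs {
  question: string;
  context?: string;
  system_prompt?: string;
  user_prompt?: string;
  max_tokens?: number;
  temperature?: number;
}

function normalizeReflectArgs(args: ReflectArgs): ReflectArgs {
  // 'question' is the only required parameter.
  if (!args.question || args.question.trim() === "") {
    throw new Error("'question' is required");
  }
  return {
    ...args,
    max_tokens: args.max_tokens ?? 500,   // documented default: 500
    temperature: args.temperature ?? 0.8, // documented default: 0.8
  };
}
```

A caller that supplies only `question` would thus receive the 500-token, 0.8-temperature defaults shown in the parameter list.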
Example:
```json
{
  "name": "reflect",
  "arguments": {
    "question": "How confident am I in my previous analysis of the data?",
    "context": "Previous analysis showed a 23% increase in user engagement",
    "max_tokens": 300,
    "temperature": 0.6
  }
}
```
Example with custom prompts:
```json
{
  "name": "reflect",
  "arguments": {
    "question": "What are the potential weaknesses in my reasoning?",
    "system_prompt": "You are an expert critical thinking coach helping to identify logical fallacies and reasoning gaps.",
    "user_prompt": "Analyze my reasoning step-by-step and provide specific examples of potential weaknesses or blind spots.",
    "context": "Working on a complex machine learning model evaluation",
    "max_tokens": 400,
    "temperature": 0.7
  }
}
```
Response:
```json
{
  "reflection": "Upon reflection, my confidence in the 23% engagement increase analysis is moderate to high. The data sources appear reliable, and the methodology follows standard practices. However, I should consider potential confounding variables such as seasonal effects or concurrent marketing campaigns that might influence the results.",
  "metadata": {
    "tokens_used": 67,
    "reflection_time_ms": 1240
  }
}
```
Architecture & Rationale
Design Philosophy
mirror-mcp is built on the principle that self-reflection is crucial for robust AI reasoning. By enabling models to question their own outputs and reasoning processes, we create opportunities for:
- Error Detection: Models can identify potential flaws in their logic
- Confidence Calibration: Self-assessment helps gauge certainty levels
- Iterative Improvement: Reflective questioning can lead to better solutions
- Metacognitive Awareness: Understanding of the model's own reasoning process
Technical Architecture
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   LLM Client    │────▶│   mirror-mcp    │────▶│  MCP Sampling   │
│                 │     │                 │     │ Infrastructure  │
│ Calls reflect() │     │   Processes     │     │                 │
│                 │◀────│   reflection    │◀────│ Returns response│
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
Key Components
- Reflection Engine: Processes incoming self-directed questions
- Sampling Interface: Interfaces with MCP's sampling capabilities
- Context Manager: Maintains conversation context for coherent reflections
- Response Formatter: Structures reflection responses for optimal consumption
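The four components above can be sketched as one handler. This is a hypothetical outline, not the package's actual code: the `Sampler` callback stands in for MCP's sampling capability, and the default guidance text merely paraphrases the documented default behavior (strengths, weaknesses, assumptions, alternative perspectives):

```typescript
// Hypothetical reflection pipeline; Sampler is a stand-in for the
// MCP sampling interface, injected so the flow is easy to follow.
type Sampler = (
  prompt: string,
  opts: { maxTokens: number; temperature: number },
) => Promise<string>;

const DEFAULT_GUIDANCE =
  "Reflect on strengths, weaknesses, assumptions, and alternative perspectives.";

async function handleReflect(
  args: {
    question: string;
    context?: string;
    user_prompt?: string;
    max_tokens?: number;
    temperature?: number;
  },
  sample: Sampler,
) {
  // Reflection engine + context manager: assemble the self-directed prompt.
  const prompt = [
    args.user_prompt ?? DEFAULT_GUIDANCE,
    args.context ? `Context: ${args.context}` : "",
    `Question: ${args.question}`,
  ]
    .filter(Boolean)
    .join("\n");

  // Sampling interface: delegate to the model with configured parameters.
  const started = Date.now();
  const text = await sample(prompt, {
    maxTokens: args.max_tokens ?? 500,
    temperature: args.temperature ?? 0.8,
  });

  // Response formatter: structure the reflection with metadata.
  return {
    reflection: text,
    metadata: { reflection_time_ms: Date.now() - started },
  };
}
```

Injecting the sampler keeps the pipeline testable with a stub model while the real server would wire it to the client's sampling capability.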
Why MCP?
The Model Context Protocol provides a standardized way for AI models to connect with external resources and tools. By implementing mirror-mcp as an MCP server, we ensure:
- Interoperability: Works with any MCP-compatible client
- Standardization: Follows established protocols for tool integration
- Scalability: Can be deployed alongside other MCP servers
- Future-Proofing: Benefits from ongoing MCP ecosystem development
Sampling Strategy
The reflection mechanism leverages MCP's sampling capabilities to generate thoughtful responses. The sampling process:
- Takes the self-directed question as a prompt
- Applies configurable sampling parameters (temperature, max tokens)
- Generates a response using the underlying model
- Returns the reflection with appropriate metadata
This approach ensures that reflections are generated using the same model capabilities as the original reasoning, creating authentic self-assessment.
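The steps above amount to mapping the tool's arguments onto an MCP `sampling/createMessage` request. The field names below follow the MCP specification's sampling schema; the helper itself is an illustrative assumption, not code from this package:

```typescript
// Sketch of mapping reflect arguments onto an MCP sampling request.
// Request shape (method, messages, systemPrompt, maxTokens, temperature)
// follows the MCP spec; the helper is illustrative only.
function toSamplingRequest(args: {
  question: string;
  system_prompt?: string;
  max_tokens?: number;
  temperature?: number;
}) {
  return {
    method: "sampling/createMessage",
    params: {
      // The self-directed question becomes the user message.
      messages: [
        { role: "user", content: { type: "text", text: args.question } },
      ],
      systemPrompt: args.system_prompt,
      maxTokens: args.max_tokens ?? 500,
      temperature: args.temperature ?? 0.8,
    },
  };
}
```

Because the request is answered by the client's own model, the reflection is produced by the same model that did the original reasoning.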
Development
Prerequisites
- Node.js 18 or higher
- npm or yarn
- TypeScript (for development)
Development Setup
```shell
git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run dev
```
Testing
```shell
npm test
```
Building
```shell
npm run build
```
Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
Areas for Contribution
- Enhanced reflection strategies
- Additional sampling parameters
- Performance optimizations
- Documentation improvements
- Test coverage expansion
Related Projects
- Model Context Protocol: The foundational protocol specification
- MCP Ecosystem: Various other MCP servers and tools
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- The Model Context Protocol team for creating the foundational specification
- The broader AI research community working on metacognition and self-reflection
- Contributors and early adopters who help shape this tool
"The unexamined life is not worth living" - Socrates
Enable your AI models to examine their own reasoning with mirror-mcp.