mcp-voice-interface

1.0.0

Browser-based voice input/output for AI Assistant conversations via MCP (Model Context Protocol)

Package Exports

  • mcp-voice-interface
  • mcp-voice-interface/dist/index.js

This package does not declare an "exports" field, so the exports above have been automatically detected and optimized by JSPM. If a package subpath is missing, consider filing an issue with the original package (mcp-voice-interface) asking for "exports" field support. If that is not possible, create a JSPM override to customize the exports field for this package.
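
For reference, an "exports" field covering the subpaths detected above could look roughly like this in the package's package.json (a sketch based only on the detected entry points, not taken from the package itself); the same shape can also be supplied as a JSPM override:

{
  "exports": {
    ".": "./dist/index.js",
    "./dist/index.js": "./dist/index.js"
  }
}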

Readme

MCP Voice Interface

Talk to AI assistants using your voice through a web browser. Compatible with Claude Desktop, OpenCode, and other MCP-enabled AI tools.

Features

🎙️ Voice Conversations - Speak naturally with AI assistants
🌍 30+ Languages - Speech recognition in multiple languages
📱 Remote Access - Use from phone/tablet while AI runs on computer
⚙️ Smart Controls - Collapsible settings, always-on mode, custom voices
⏱️ Dynamic Timeouts - Intelligent wait times based on response length

Easy Installation

🚀 One-Command Setup

Claude Desktop:

npx mcp-voice-interface --install-claude-config
# Restart Claude Desktop and you're ready!
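
The installer presumably writes an entry into Claude Desktop's claude_desktop_config.json; a manual equivalent would look roughly like this (the "voice-interface" key is an illustrative name, any server name works):

{
  "mcpServers": {
    "voice-interface": {
      "command": "npx",
      "args": ["mcp-voice-interface"]
    }
  }
}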

OpenCode (in current project):

npx mcp-voice-interface --install-opencode-config --local
npx mcp-voice-interface --install-opencode-plugin --local
# Start OpenCode and use the converse tool

Claude Code CLI:

npx mcp-voice-interface --install-claude-code-config --local
# Start Claude Code CLI and use voice tools
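
If you prefer to register the server with Claude Code yourself instead of using the installer, the claude mcp add command should also work (exact syntax may vary between Claude Code versions):

# "voice-interface" is an illustrative server name; use any identifier you like
claude mcp add voice-interface -- npx mcp-voice-interface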

📦 Manual Installation

From NPM:

npm install -g mcp-voice-interface
mcp-voice-interface

From Source:

git clone <repository-url>
cd mcp-voice-interface
npm install && npm run build && npm start

How to Use

  1. Start the server - Run mcp-voice-interface
  2. Open browser - Visit https://localhost:5114 (opens automatically)
  3. Allow microphone - Grant permissions when prompted
  4. Start talking - Use the converse tool in your AI assistant

Voice Commands in AI Chat

Use the converse tool to start talking:
- converse("Hello! How can I help you today?", timeout: 35)
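
Under the hood, converse is an ordinary MCP tool, so any MCP client invokes it with a standard tools/call request. A minimal sketch is below; the "message" argument name is an assumption, only timeout appears in the example above:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "converse",
    "arguments": {
      "message": "Hello! How can I help you today?",
      "timeout": 35
    }
  }
}

The tool speaks the message aloud in the browser and then waits for a spoken reply, up to the given timeout.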

Browser Interface

The web interface provides:

  • Voice Settings (click ⚙️ to expand)
    • Language selection (30+ options)
    • Voice selection
    • Speech speed control
    • Always-on microphone mode
  • Smart Controls
    • Pause during AI speech (prevents echo)
    • Stop AI when user speaks (natural conversation)
  • Mobile Friendly - Works on phones and tablets

Remote Access

Access from any device on your network:

  1. Find your computer's IP: ifconfig | grep inet (Mac/Linux) or ipconfig (Windows)
  2. Visit https://YOUR_IP:5114 on your phone/browser
  3. Accept the security warning (self-signed certificate)
  4. Grant microphone permissions

Perfect for continuing conversations away from your desk!

Configuration

Environment Variables

export MCP_VOICE_AUTO_OPEN=false  # Disable auto-opening browser
export MCP_VOICE_HTTPS_PORT=5114  # Change HTTPS port
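
For example, to run on a different HTTPS port without auto-opening a browser (8443 is just an example value):

MCP_VOICE_AUTO_OPEN=false MCP_VOICE_HTTPS_PORT=8443 mcp-voice-interface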

Ports

  • HTTPS: 5114 (required for microphone access)
  • HTTP: 5113 (local access only)

Requirements

  • Node.js 18+
  • Modern browser (Chrome/Safari recommended)
  • Microphone access

Troubleshooting

Certificate warnings on mobile?

  • Tap "Advanced" → "Proceed to site" to accept self-signed certificate

Microphone not working?

  • Ensure you're using HTTPS (not HTTP)
  • Check browser permissions
  • Try refreshing the page

AI not responding to voice?

  • Make sure the converse tool is being used (not just speak)
  • Check that the timeout passed to converse is long enough for the expected response

Development

npm install
npm run build
npm run dev     # Watch mode
npm run start   # Run server

License

MIT