utter-cli

1.0.0

CLI for Text-to-Speech using ElevenLabs - designed for humans and AI agents

    Utter

    A Text-to-Speech CLI using ElevenLabs, designed for humans and AI agents.

    Give your AI agent a voice. Utter converts text to speech from the terminal — use it manually, let your agent call it as a tool, or wire it up as a hook so every response is spoken aloud automatically.

    Installation

    npm install -g utter-cli

    Or install from source:

    git clone https://github.com/gztomas/utter.git
    cd utter
    npm install && npm run build
    npm install -g .

    Setup

    Get your API key from ElevenLabs, then:

    utter init

    Or set it via environment variable:

    export ELEVENLABS_API_KEY=your_key_here
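
    A quick way to confirm the key is actually exported in your current shell before running any commands:

    ```shell
    # Prints a reminder if ELEVENLABS_API_KEY is not exported in this shell.
    if [ -n "$ELEVENLABS_API_KEY" ]; then
      echo "ELEVENLABS_API_KEY is set"
    else
      echo "ELEVENLABS_API_KEY is missing"
    fi
    ```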

    Usage

    Speak text aloud

    utter me "Hello, world!"

    Speak from a file

    utter me --file article.txt

    Save to MP3

    utter to hello.mp3 "Hello"

    Use a specific voice

    utter me --voice Rachel "Hello there!"

    Pipe text

    echo "Hello from stdin" | utter me
    cat document.txt | utter me

    List available voices

    utter voices

    Set default voice

    utter voices --set-default <voice-id>

    Options

    utter me

    Option                 Description
    -f, --file <path>      Read text from file
    -v, --voice <voice>    Voice ID or name
    -m, --model <model>    Model ID (default: eleven_multilingual_v2)
    -s, --stream           Stream audio chunks as they're ready
    -q, --quiet            Suppress progress output

    utter to <file>

    Option                 Description
    -f, --file <path>      Read text from file
    -v, --voice <voice>    Voice ID or name
    -m, --model <model>    Model ID (default: eleven_multilingual_v2)
    -q, --quiet            Suppress progress output

    utter voices

    Option                 Description
    --json                 Output as JSON
    --set-default <id>     Set the default voice

    Use with AI agents

    There are two ways to give your AI agent a voice with Utter:

    1. Tell your agent to use it

    Add an instruction to your agent's system prompt (or a CLAUDE.md / rules file) telling it to speak its replies:

    Every time you reply, run utter me --quiet "<your reply>" so the user hears it.

    The agent will call the command as a tool on each response.

    2. Set up a hook (automatic)

    With Claude Code, you can add a hook that automatically speaks every response — no agent instructions needed. Add this to your .claude/settings.json:

    {
      "hooks": {
        "PostToolUse": [
          {
            "matcher": "Task",
            "hooks": [
              {
                "type": "command",
                "command": "utter me --quiet \"$CLAUDE_TOOL_RESULT\""
              }
            ]
          }
        ]
      }
    }

    Tips for agent usage

    • Use --quiet to suppress progress output and keep the agent's context clean
    • Use --json with utter voices for machine-readable voice lists
    • Pipe text via stdin: echo "Hello" | utter me --quiet
    • Exit codes: 0 = success, 1 = error
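
    The tips above can be combined into a small wrapper. This is only a sketch, not part of utter-cli: the speak function name is made up here, and it assumes utter is on your PATH.

    ```shell
    # Sketch of an agent-side helper: pipe text to utter via stdin and
    # swallow failures so a TTS error never breaks the agent's own flow.
    speak() {
      text=$1
      [ -n "$text" ] || return 0              # nothing to say
      printf '%s' "$text" | utter me --quiet 2>/dev/null || return 1
    }

    # Usage: speak "Build finished"
    ```

    Passing the text via stdin avoids shell-quoting problems when replies contain quotes or dollar signs.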

    Configuration

    Configuration is stored in ~/.utter/config.json. You can also use environment variables:

    Variable               Description
    ELEVENLABS_API_KEY     Your ElevenLabs API key
    UTTER_DEFAULT_VOICE    Default voice ID
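
    For reference, the config file might look something like this. The key names below are an assumption, not documented by utter-cli, so treat the ~/.utter/config.json that utter init writes on your machine as the source of truth:

    ```json
    {
      "apiKey": "<your ElevenLabs API key>",
      "defaultVoice": "<voice-id>"
    }
    ```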

    License

    MIT