    Samuraizer

    Turn meeting recordings into transcripts, summaries, action items, and decisions, entirely on your machine. No cloud, no subscriptions, no data leaving your network.

    Samuraizer demo

    Why Samuraizer

    • Fully local. Your recordings never leave your machine.
    • CLI-first. Scriptable, automatable, integrates with cron, Git hooks, Obsidian workflows.
    • Resumable. Crashed mid-pipeline? Re-run picks up where it left off.
    • Model-agnostic. Works with any Ollama-compatible LLM; pick what fits your hardware.
    • Free. No subscriptions, no per-minute pricing.

    💻 System Requirements

    RAM      Recommended model
    -------  ----------------------
    8 GB     qwen2.5:3b
    16 GB    qwen2.5:7b
    32 GB+   qwen2.5:14b (default)

    Apple Silicon (M1/M2/M3/M4) and recent x86 CPUs with AVX2 are recommended. Whisper transcription is CPU/Metal-accelerated; LLM inference uses Ollama's defaults.
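The table above maps directly to a small helper that picks a model tier from available RAM. A minimal sketch; `pick_model` is an illustrative function, not a samuraizer command:

```shell
# Map RAM in GB to the recommended model tier from the table above.
# pick_model is a hypothetical helper, not part of the CLI.
pick_model() {
  ram_gb="$1"
  if [ "$ram_gb" -ge 32 ]; then
    echo "qwen2.5:14b"
  elif [ "$ram_gb" -ge 16 ]; then
    echo "qwen2.5:7b"
  else
    echo "qwen2.5:3b"
  fi
}

pick_model 16   # prints qwen2.5:7b
```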

    โš™๏ธ Prerequisites

    Install the required tools:

    • Ollama (local LLM runtime)
    • whisper.cpp (provides the whisper-cli binary used for transcription)
    • ffmpeg (provides ffmpeg and ffprobe for audio processing)

    Start Ollama and pull a model:

    ollama serve
    ollama pull qwen2.5:14b
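Before running the pipeline you can check that Ollama is actually reachable. A sketch assuming the default port; `/api/tags` is Ollama's model-listing endpoint, and `ollama_up` is an illustrative helper:

```shell
# Succeed if an Ollama server answers at the given base URL
# (defaults to http://127.0.0.1:11434).
ollama_up() {
  curl -sf "${1:-http://127.0.0.1:11434}/api/tags" > /dev/null
}

ollama_up || echo "Ollama is not reachable; start it with: ollama serve"
```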

    📦 Installation

    npm install -g samuraizer

    🚀 Quick Start

    samuraizer init
    samuraizer process meeting.m4a

    On a 30-minute recording this typically takes 3–5 minutes on Apple Silicon and 8–15 minutes on x86 CPUs, depending on the model.
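Because the CLI is scriptable, a whole folder of recordings can be processed in one loop. A sketch; `process_all` and the `recordings/` folder name are illustrative, not part of the CLI:

```shell
# Run the full pipeline over every .m4a file in a directory.
# process_all is a hypothetical wrapper around `samuraizer process`.
process_all() {
  dir="$1"
  for f in "$dir"/*.m4a; do
    [ -e "$f" ] || continue   # skip when the glob matched nothing
    samuraizer process "$f"
  done
}

process_all recordings
```

Since completed steps are reused, re-running the loop after a crash resumes where it left off.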

    🧪 Commands

    Process an audio file

    samuraizer process meeting.m4a              # full pipeline
    samuraizer process meeting.m4a --verbose    # show detailed metadata
    samuraizer process meeting.m4a --force      # recompute all steps
    samuraizer process meeting.m4a --verbose --force

    Run individual steps

    samuraizer normalize input.m4a output.wav   # normalize audio for Whisper
    samuraizer summarize transcript.txt         # generate summary from transcript
    samuraizer actions transcript.txt           # extract action items
    samuraizer decisions transcript.txt         # extract decisions

    Configuration

    samuraizer init           # create default config file
    samuraizer config path    # show config file location
    samuraizer config get     # print resolved config as JSON

    Other

    samuraizer --help
    samuraizer --version

    โš™๏ธ Configuration

    Samuraizer uses a global JSON config file.

    Config location

    • macOS: ~/Library/Application Support/samuraizer/config.json
    • Linux: ~/.config/samuraizer/config.json
    • Windows: %APPDATA%\samuraizer\config.json

    Example config

    {
      "model": "qwen2.5:14b",
      "ollamaBaseUrl": "http://127.0.0.1:11434",
      "whisperCommand": "whisper-cli",
      "ffmpegCommand": "ffmpeg",
      "ffprobeCommand": "ffprobe"
    }

    Config fields

    • model: LLM model used for analysis (summary, action items, decisions)
    • ollamaBaseUrl: URL where Ollama is running
    • whisperCommand: command used to run Whisper
    • ffmpegCommand: command used for audio processing
    • ffprobeCommand: command used for audio inspection

    📂 Example output

    After processing, you'll find structured files in output/<recording-name>/:

    output/meeting/
      transcript.txt
      summary.txt
      action-items.json
      decisions.json
      report.txt

    summary.txt

    Team standup focused on Q2 roadmap and infrastructure migration.
    The frontend team will start the Next.js upgrade next week...

    action-items.json

    [
      {
        "owner": "Alice",
        "task": "Set up staging environment for migration testing",
        "deadline": "by end of week"
      },
      {
        "owner": "Bob",
        "task": "Review the auth refactor PR",
        "deadline": null
      }
    ]
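Because the outputs are plain JSON, they are easy to post-process. For example, listing one person's tasks; this assumes jq is installed, and `tasks_for` is an illustrative helper, not a samuraizer command:

```shell
# Print the tasks assigned to OWNER from an action-items.json file.
# Usage: tasks_for Alice output/meeting/action-items.json
tasks_for() {
  jq -r --arg owner "$1" '.[] | select(.owner == $owner) | .task' "$2"
}
```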

    decisions.json

    [
      {
        "decision": "Adopt Next.js 15 for the new dashboard",
        "rationale": "Better SSR and built-in App Router support"
      }
    ]

    ๐Ÿ” Resume behavior

    Samuraizer skips steps whose output files already exist. If processing crashes or you stop it mid-pipeline, just re-run the same command; completed steps are reused.

    Use --force to recompute everything from scratch.
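The skip rule itself is simple: a step runs only when its output file is missing. A minimal sketch of that idea; `run_step` is illustrative, not samuraizer's internal API:

```shell
# Run a command only if its output file does not exist yet.
run_step() {
  out="$1"; shift
  if [ -e "$out" ]; then
    echo "skip: $out"
  else
    "$@" && echo "done: $out"
  fi
}
```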

    โš ๏ธ Troubleshooting

    Ollama not running

    ollama serve

    Ollama on a non-default port

    Update ollamaBaseUrl in your config:

    {
      "ollamaBaseUrl": "http://127.0.0.1:11500"
    }

    Out of memory during analysis

    Switch to a smaller model:

    ollama pull qwen2.5:7b

    Then update model in your config to qwen2.5:7b (or qwen2.5:3b on machines with 8 GB RAM).

    Model not found

    Make sure the model in your config is actually pulled:

    ollama list
    ollama pull <model-name>

    whisper-cli not in PATH

    Build whisper.cpp and ensure the binary is on your PATH, or set the absolute path in whisperCommand in your config.
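For example, pointing whisperCommand at an absolute path in the config; the path shown is illustrative:

```json
{
  "whisperCommand": "/usr/local/bin/whisper-cli"
}
```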

    ffmpeg not found

    macOS:

    brew install ffmpeg

    Linux:

    # Debian / Ubuntu
    sudo apt install ffmpeg
    
    # Arch / CachyOS
    sudo pacman -S ffmpeg
    
    # Fedora
    sudo dnf install ffmpeg

    Windows:

    winget install Gyan.FFmpeg

    ๐Ÿ“ Changelog

    See CHANGELOG.md for release history.

    📄 License

    MIT; see LICENSE.

    🔗 Source code

    Available on GitHub: github.com/UladzKha/samuraizer-cli