# Samuraizer

Turn meeting recordings into transcripts, summaries, action items, and decisions, entirely on your machine. No cloud, no subscriptions, no data leaving your network.

## Why Samuraizer
- Fully local. Your recordings never leave your machine.
- CLI-first. Scriptable, automatable, integrates with cron, Git hooks, Obsidian workflows.
- Resumable. Crashed mid-pipeline? Re-run picks up where it left off.
- Model-agnostic. Works with any Ollama-compatible LLM; pick what fits your hardware.
- Free. No subscriptions, no per-minute pricing.
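Because the pipeline is a plain CLI, it scripts cleanly. A minimal batch sketch, assuming `samuraizer` is on `PATH` and recordings live in a `recordings/` folder (the folder name is illustrative, not part of Samuraizer):

```shell
#!/usr/bin/env sh
# Process every recording in a folder; already-completed steps are
# skipped on re-runs, so this is safe to call from cron or a Git hook.
for f in recordings/*.m4a; do
  [ -e "$f" ] || continue   # skip the literal glob when the folder is empty
  samuraizer process "$f"
done
```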
## System Requirements
| RAM | Recommended model |
|---|---|
| 8 GB | qwen2.5:3b |
| 16 GB | qwen2.5:7b |
| 32 GB+ | qwen2.5:14b (default) |
Apple Silicon (M1/M2/M3/M4) and recent x86 CPUs with AVX2 are recommended. Whisper transcription is CPU/Metal-accelerated; LLM inference uses Ollama's defaults.
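As a rough way to apply the table, total RAM can be read from the OS. This helper is a sketch (not part of Samuraizer) that prints a suggested model; thresholds sit just under each tier to tolerate how systems round reported memory:

```shell
#!/usr/bin/env sh
# Suggest a model tier from total RAM (macOS and Linux).
bytes=$(sysctl -n hw.memsize 2>/dev/null || awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
gb=$((bytes / 1024 / 1024 / 1024))
if   [ "$gb" -ge 24 ]; then echo "qwen2.5:14b"
elif [ "$gb" -ge 12 ]; then echo "qwen2.5:7b"
else                        echo "qwen2.5:3b"
fi
```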
## Prerequisites

Install the required tools:

- Node.js ≥ 20 – nodejs.org
- ffmpeg – for audio processing
- whisper-cli – from whisper.cpp
- Ollama – ollama.com
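A quick way to confirm everything is reachable before the first run; this loop is illustrative and not shipped with Samuraizer:

```shell
# Report any prerequisite missing from PATH.
for tool in node ffmpeg whisper-cli ollama; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```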
Start Ollama and pull a model:

```bash
ollama serve
ollama pull qwen2.5:14b
```

## Installation

```bash
npm install -g samuraizer
```

## Quick Start

```bash
samuraizer init
samuraizer process meeting.m4a
```

On a 30-minute recording this typically takes 3–5 minutes on Apple Silicon and 8–15 minutes on x86 CPUs, depending on the model.
## Commands
### Process an audio file

```bash
samuraizer process meeting.m4a                    # full pipeline
samuraizer process meeting.m4a --verbose          # show detailed metadata
samuraizer process meeting.m4a --force            # recompute all steps
samuraizer process meeting.m4a --verbose --force
```

### Run individual steps
```bash
samuraizer normalize input.m4a output.wav   # normalize audio for Whisper
samuraizer summarize transcript.txt         # generate summary from transcript
samuraizer actions transcript.txt           # extract action items
samuraizer decisions transcript.txt         # extract decisions
```

### Configuration
```bash
samuraizer init          # create default config file
samuraizer config path   # show config file location
samuraizer config get    # print resolved config as JSON
```

### Other
```bash
samuraizer --help
samuraizer --version
```

## Configuration
Samuraizer uses a global JSON config file.
### Config location

- macOS: `~/Library/Application Support/samuraizer/config.json`
- Linux: `~/.config/samuraizer/config.json`
- Windows: `%AppData%/samuraizer/config.json`
### Example config

```json
{
  "model": "qwen2.5:14b",
  "ollamaBaseUrl": "http://127.0.0.1:11434",
  "whisperCommand": "whisper-cli",
  "ffmpegCommand": "ffmpeg",
  "ffprobeCommand": "ffprobe"
}
```

### Config fields
- `model` – LLM model used for analysis (summary, action items, decisions)
- `ollamaBaseUrl` – URL where Ollama is running
- `whisperCommand` – command used to run Whisper
- `ffmpegCommand` – command used for audio processing
- `ffprobeCommand` – command used for audio inspection
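Since `samuraizer config get` prints JSON, individual fields can be read in scripts; a sketch assuming `jq` is installed:

```shell
# Print just the model name from the resolved config.
samuraizer config get | jq -r '.model'
```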
## Example output

After processing, you'll find structured files in `output/<recording-name>/`:
```
output/meeting/
  transcript.txt
  summary.txt
  action-items.json
  decisions.json
  report.txt
```

### summary.txt
```
Team standup focused on Q2 roadmap and infrastructure migration.
The frontend team will start the Next.js upgrade next week...
```

### action-items.json
```json
[
  {
    "owner": "Alice",
    "task": "Set up staging environment for migration testing",
    "deadline": "by end of week"
  },
  {
    "owner": "Bob",
    "task": "Review the auth refactor PR",
    "deadline": null
  }
]
```

### decisions.json
```json
[
  {
    "decision": "Adopt Next.js 15 for the new dashboard",
    "rationale": "Better SSR and built-in App Router support"
  }
]
```

## Resume behavior
Samuraizer skips steps whose output files already exist. If processing crashes or you stop it mid-pipeline, just re-run the same command; completed steps are reused.

Use `--force` to recompute everything from scratch.
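The JSON outputs are meant to compose with further tooling. For example, assuming `jq` is installed and the example layout shown above, action items that carry a deadline can be listed with:

```shell
# List owner and task for every action item with a deadline.
jq -r '.[] | select(.deadline != null) | "\(.owner): \(.task)"' output/meeting/action-items.json
```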
## Troubleshooting
### Ollama not running

```bash
ollama serve
```

### Ollama on a non-default port
Update `ollamaBaseUrl` in your config:

```json
{
  "ollamaBaseUrl": "http://127.0.0.1:11500"
}
```

### Out of memory during analysis
Switch to a smaller model:

```bash
ollama pull qwen2.5:7b
```

Then update `model` in your config to `qwen2.5:7b` (or `qwen2.5:3b` on machines with 8 GB of RAM).
### Model not found

Make sure the model in your config is actually pulled:

```bash
ollama list
ollama pull <model-name>
```

### whisper-cli not in PATH
Build whisper.cpp and ensure the binary is on your `PATH`, or set the absolute path in `whisperCommand` in your config.
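One way to check which binary (if any) would be invoked; the message text here is illustrative:

```shell
# Show where whisper-cli resolves from, or flag that it is missing.
command -v whisper-cli \
  || echo "whisper-cli not found; set whisperCommand to an absolute path"
```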
### ffmpeg not found

macOS:

```bash
brew install ffmpeg
```

Linux:

```bash
# Debian / Ubuntu
sudo apt install ffmpeg

# Arch / CachyOS
sudo pacman -S ffmpeg

# Fedora
sudo dnf install ffmpeg
```

Windows:

```bash
winget install Gyan.FFmpeg
```

## Changelog
See CHANGELOG.md for release history.
## License

MIT – see LICENSE.
## Source code
Available on GitHub: github.com/UladzKha/samuraizer-cli