c0mpute-worker
Native CLI worker for the c0mpute.ai distributed inference network. Runs LLM inference using node-llama-cpp with full GPU acceleration (CUDA, Metal, Vulkan) and connects to the orchestrator via Socket.io.
Quick Start
npx c0mpute-worker --token <your-token>

What It Does
- Detects your GPU (CUDA, Metal, Vulkan, or CPU fallback)
- Downloads the optimal GGUF model for your hardware (~8GB)
- Runs a speed benchmark
- Connects to the c0mpute.ai orchestrator
- Accepts and processes inference jobs, streaming tokens back in real time
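The first step above follows a simple fallback chain. A minimal sketch of that preference order (the `detectBackend` function and its `available` flags are illustrative only; the real worker queries node-llama-cpp for hardware support):

```javascript
// Illustrative sketch of the GPU fallback order described above.
// The `available` flags are hypothetical inputs, not the worker's real API.
function detectBackend(available) {
  // Preference order: CUDA (NVIDIA), Metal (Apple Silicon), Vulkan, then CPU.
  if (available.cuda) return "cuda";
  if (available.metal) return "metal";
  if (available.vulkan) return "vulkan";
  return "cpu"; // always works, but much slower for a 14B model
}

console.log(detectBackend({ cuda: false, metal: true, vulkan: true })); // "metal"
console.log(detectBackend({})); // "cpu"
```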
Options
--token <token> Authentication token from c0mpute.ai (required)
--url <url> Orchestrator URL (default: https://c0mpute.ai)
--model <path> Path to a custom GGUF model file
--benchmark Run benchmark only, then exit
--version Show version
--help Show help

Requirements
- Node.js 18+
- 10GB+ disk space for model download
- GPU with 10GB+ VRAM recommended (NVIDIA, Apple Silicon, or Vulkan-compatible)
- CPU-only mode available but slower
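The Node.js requirement can be verified before launch. A minimal sketch of such a check (the `meetsNodeRequirement` helper is illustrative; the worker itself may validate differently):

```javascript
// Check that the running Node.js satisfies the 18+ requirement.
function meetsNodeRequirement(versionString, minMajor = 18) {
  const major = parseInt(versionString.split(".")[0], 10);
  return major >= minMajor;
}

// process.versions.node is a string such as "20.11.1".
if (!meetsNodeRequirement(process.versions.node)) {
  console.error("c0mpute-worker requires Node.js 18 or newer");
  process.exit(1);
}
```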
Default Model
Qwen2.5-14B-Instruct (Q4_K_M quantization) from bartowski/Qwen2.5-14B-Instruct-GGUF.
Models are stored in ~/.c0mpute/models/.
License
MIT