node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
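As an illustration of the schema-enforced generation described above, here is a minimal sketch assuming the node-llama-cpp v3 API (`getLlama`, `createGrammarForJsonSchema`, `LlamaChatSession`); the model path and the JSON schema are placeholders, not part of the library.

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load a local GGUF model (the path below is a placeholder).
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "models/my-model.gguf"});

// Build a grammar from a JSON schema; generation is then constrained
// token by token so the output always conforms to the schema.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        name: {type: "string"},
        quality: {type: "string", enum: ["low", "medium", "high"]}
    }
});

const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

const answer = await session.prompt("Describe this GPU in JSON.", {grammar});
console.log(grammar.parse(answer)); // parsed object matching the schema
```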
Found 28 results for cuda
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
Native module for another Node binding of llama.cpp (win32-x64-cuda)
Native module for another Node binding of whisper.cpp (win32-x64-vulkan)
Native module for another Node binding of whisper.cpp (linux-x64-cuda)
Native module for another Node binding of whisper.cpp (win32-arm64)
Native module for another Node binding of whisper.cpp (linux-x64)
Native module for another Node binding of whisper.cpp (linux-x64-vulkan)
Native module for another Node binding of whisper.cpp (win32-x64)
Another Node binding of whisper.cpp, keeping its API as close to whisper.rn as possible.
Native module for another Node binding of whisper.cpp (linux-arm64-cuda)
Native module for another Node binding of whisper.cpp (win32-x64-cuda)
Native module for another Node binding of whisper.cpp (linux-arm64)
Native module for another Node binding of whisper.cpp (win32-arm64-vulkan)
Native module for another Node binding of whisper.cpp (linux-arm64-vulkan)
Native module for another Node binding of whisper.cpp (darwin-arm64)
Native module for another Node binding of whisper.cpp (darwin-x64)
High-performance HLS transcoding library with hardware acceleration, intelligent client management, and distributed processing support for Node.js
CLI for nvGraph, NVIDIA's CUDA-based GPU graph analytics library.
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
High-performance CUDA-to-WebAssembly/WebGPU transpiler with Rust safety: run GPU kernels in browsers and Node.js.
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.
Node.js client for Vast.ai API - Rent GPUs for machine learning and AI workloads
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.
CUDA binding for Node.js using N-API, with a working example.
`node-nvrtc` is a simple `node.js` extension for using `nvrtc`; it currently only includes basic `nvrtc` features and `cuda` memory-exchange features.
A minimal binding that bridges generics.js with TensorFlow to use the GPU / CUDA.
Aho-Corasick string matching algorithm implementation with GPU acceleration using CUDA
CUDA bindings for Node.js