node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
Found 42 results for llama.cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
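Several of these packages advertise enforcing a JSON schema "at the generation level": instead of validating the finished text, the sampler filters out any candidate token that would make the output stop being a valid prefix of the schema. The sketch below illustrates that idea with a hard-coded toy shape (`{"answer": <digits>}`) and an invented `pickToken` callback; it is not node-llama-cpp's actual API.

```javascript
// Toy illustration of generation-level schema enforcement (hypothetical;
// not node-llama-cpp's real API). The target shape is {"answer": <digits>}.
const LITERAL = '{"answer": ';

// A running output is kept only if it could still be completed into text
// matching the target shape.
function isValidPrefix(text) {
  if (text.length <= LITERAL.length) return LITERAL.startsWith(text);
  if (!text.startsWith(LITERAL)) return false;
  return /^\d+\}?$/.test(text.slice(LITERAL.length));
}

// `pickToken` stands in for the model's sampler: it only ever sees tokens
// that keep the output a valid prefix, so invalid JSON cannot be emitted.
function generate(pickToken) {
  const vocab = ['{"', 'answer', '": ', "4", "2", "}", "oops"];
  let out = "";
  while (!out.endsWith("}")) {
    const allowed = vocab.filter((tok) => isValidPrefix(out + tok));
    out += pickToken(allowed);
  }
  return out;
}
```

Whatever strategy `pickToken` uses over the allowed set, the result always parses as JSON of the expected shape; for example, a sampler that always picks the last allowed token produces `{"answer": 2}`.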
React Native binding of llama.cpp
The Scrypted LLM plugin allows connecting to various LLM providers, or running your own locally.
React Native binding of llama.cpp
Native module for another Node binding of llama.cpp (linux-arm64)
Native module for another Node binding of llama.cpp (linux-x64)
llama.cpp GGUF file parser for JavaScript
Fork of llama.rn for ChatterUI
Native module for another Node binding of llama.cpp (win32-x64-vulkan)
Native module for another Node binding of llama.cpp (darwin-arm64)
Native module for another Node binding of llama.cpp (linux-x64-cuda)
Another Node binding of llama.cpp
Native module for another Node binding of llama.cpp (linux-arm64-cuda)
Native module for another Node binding of llama.cpp (linux-arm64-vulkan)
Native module for another Node binding of llama.cpp (win32-x64)
Native module for another Node binding of llama.cpp (win32-arm64)
Native module for another Node binding of llama.cpp (win32-x64-cuda)
Native module for another Node binding of llama.cpp (linux-x64-vulkan)
Native module for another Node binding of llama.cpp (darwin-x64)
Native module for another Node binding of llama.cpp (win32-arm64-vulkan)
An attempt at a pure C++ Turbo Module library
Libraries and a server for building AI applications, with adapters to various native bindings for local inference. Integrate it into your application, or use it as a microservice.
llama.cpp LLM Provider
Load and use an LLM model directly in Electron. Experimental.
React Native binding of llama.cpp
MCP server bridge for Claude and llama.cpp - Connect Claude Desktop to your local models
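For context on the MCP bridge entry above: Claude Desktop discovers MCP servers through its `claude_desktop_config.json` file, where each server gets a command to launch. A hedged sketch of such an entry (the server name, package name, and model path below are invented placeholders, not this package's documented values):

```json
{
  "mcpServers": {
    "llama-cpp-bridge": {
      "command": "npx",
      "args": ["-y", "some-llamacpp-mcp-server", "--model", "/path/to/model.gguf"]
    }
  }
}
```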
React Native binding of llama.cpp
A native Capacitor plugin that embeds llama.cpp directly into mobile apps, enabling offline AI inference with comprehensive support for text generation, multimodal processing, TTS, LoRA adapters, and more.
llama.cpp LLM Provider - OpenAI Compatible
llama.cpp LLM Provider
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
Node.js bindings for LlamaCPP, a C++ library for running language models.
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
llama.cpp LLM local Provider
React Native binding of llama.cpp
A robust LLaMA Node.js library with enhanced error handling and segfault fixes
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
React Native binding of llama.cpp for Inferra
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can run inference against the host of the model.
Serve 4/5-bit quantized GGML LLMs based on Meta's LLaMA model over WebSocket with llama.cpp
React Native binding of llama.cpp for Inferra
A simple grammar builder compatible with GBNF (llama.cpp)
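The last entry refers to GBNF, llama.cpp's grammar format for constraining generation: each rule is written as `name ::= alternatives`, string literals are double-quoted, and alternatives are separated by `|`. As a sketch of what a grammar builder does (this helper is invented for illustration, not the package's actual API), a builder for a simple choice rule might look like:

```javascript
// Hypothetical GBNF helper, not this package's real API: emit one rule
// that matches exactly one of the given string literals.
function gbnfChoice(ruleName, literals) {
  // JSON.stringify produces double-quoted, escaped literals, which lines
  // up with GBNF's string-literal syntax for simple ASCII strings.
  const alternatives = literals.map((s) => JSON.stringify(s)).join(" | ");
  return `${ruleName} ::= ${alternatives}`;
}
```

For example, `gbnfChoice("root", ["yes", "no", "maybe"])` returns `root ::= "yes" | "no" | "maybe"`, a grammar llama.cpp can use to restrict sampling to one of those three answers.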