@novastera-oss/llamarn
An attempt at a pure C++ TurboModule library
Libraries and a server for building AI applications. Adapters for various native bindings allow local inference. Integrate it into your application, or run it as a microservice.
llama.cpp LLM Provider
Load and use an LLM model directly in Electron. Experimental.
React Native binding of llama.cpp
MCP server bridge for Claude and llama.cpp - Connect Claude Desktop to your local models
React Native binding of llama.cpp
A native Capacitor plugin that embeds llama.cpp directly into mobile apps, enabling offline AI inference with comprehensive support for text generation, multimodal processing, TTS, LoRA adapters, and more.
llama.cpp LLM Provider - OpenAI Compatible
llama.cpp LLM Provider
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level
Node.js bindings for LlamaCPP, a C++ library for running language models.
Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level
llama.cpp LLM local Provider
React Native binding of llama.cpp
A robust LLaMA Node.js library with enhanced error handling and segfault fixes
Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level
React Native binding of llama.cpp for Inferra
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can do inference with the host of the model.
Serve GGML 4/5-bit quantized LLMs based on Meta's LLaMA model over WebSocket with llama.cpp
React Native binding of llama.cpp for Inferra
A simple grammar builder compatible with GBNF (llama.cpp)
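For context on the last entry: GBNF is the BNF-style grammar format that llama.cpp uses to constrain model output during sampling. A minimal illustrative grammar (hand-written here as a sketch of the format, not output of the builder package itself) that restricts generation to a yes/no answer might look like:

```gbnf
# root is the entry rule; output must match it exactly
root   ::= "The answer is " answer "."
answer ::= "yes" | "no"
```

Grammars like this are passed to llama.cpp at inference time (e.g. via the `--grammar-file` option of its CLI tools), and a builder library generates such rule sets programmatically instead of by hand.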