node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level
Found 23 results for llama.cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level
React Native binding of llama.cpp
Another Node binding of llama.cpp
llama.cpp GGUF file parser for JavaScript
React Native binding of llama.cpp
Fork of llama.rn for ChatterUI
Load and use an LLM directly in Electron. Experimental.
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level
Libraries and server to build AI applications. Adapters to various native bindings allowing local inference. Integrate it with your application, or use as a microservice.
llama.cpp LLM Provider
llama.cpp LLM Provider - OpenAI Compatible
llama.cpp LLM Provider
llama.cpp LLM local Provider
React Native binding of llama.cpp for Inferra
An attempt at a pure C++ TurboModule library
A simple grammar builder compatible with GBNF (llama.cpp)
Node.js bindings for LlamaCPP, a C++ library for running language models.
React Native binding of llama.cpp
React Native binding of llama.cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level
Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can do inference with the host of the model.
Serve WebSocket GGML 4/5-bit quantized LLMs based on Meta's LLaMA model with llama.cpp
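One of the results above is a grammar builder compatible with GBNF, the grammar format llama.cpp uses to constrain generation. For reference, a small hand-written GBNF grammar restricting output to a JSON array of single digits could look like this (an illustrative fragment, not taken from any of the listed packages):

```
root  ::= "[" digit ("," digit)* "]"
digit ::= [0-9]
```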
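Several of the packages above (node-llama-cpp, the GBNF grammar builder) revolve around the same idea: enforcing a JSON schema or grammar at the generation level by masking tokens that could not extend to a valid output. Here is a minimal self-contained sketch of that technique; all names are illustrative, and this is not the API of any package listed above:

```typescript
// Toy grammar-constrained decoding: at each step, mask candidate tokens
// whose addition could not be extended into a valid output, so the final
// text is valid by construction. Illustrative only; NOT the API of
// node-llama-cpp or any other package listed above.

type Scorer = (prefix: string) => Map<string, number>;

// "Grammar": the output must be a JSON array of single digits, e.g. [1,2,3].
// Returns true if `prefix` is a valid prefix of (or is) such an array.
function couldBeDigitArray(prefix: string): boolean {
  return /^$|^\[$|^\[\d(,\d)*,?$|^\[\d(,\d)*\]$/.test(prefix);
}

function generateConstrained(score: Scorer, maxSteps: number): string {
  let out = "";
  for (let i = 0; i < maxSteps; i++) {
    // Keep only tokens the grammar still allows, highest score first.
    const legal = Array.from(score(out).entries())
      .filter(([tok]) => couldBeDigitArray(out + tok))
      .sort((a, b) => b[1] - a[1]);
    if (legal.length === 0) break; // no legal continuation
    out += legal[0][0];            // greedy pick of the best legal token
    if (out.endsWith("]")) break;  // array closed: generation complete
  }
  return out;
}

// A "model" that would rather emit free text; the grammar forces structure.
const scorer: Scorer = () =>
  new Map([["h", 5], ["i", 4], ["[", 3], ["7", 2], ["]", 1], [",", 0.5]]);

console.log(generateConstrained(scorer, 10)); // "[7]" — valid JSON
```

Real implementations apply the same masking to the model's logits over its full vocabulary at every sampling step, which is what "on the generation level" means in the descriptions above.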