JSPM

Found 42 results for llama.cpp

inference-server

Libraries and a server for building AI applications. Adapters to various native bindings allow local inference. Integrate it with your application, or use it as a microservice.

  • v1.0.0-beta.31
  • 27.66
  • Published

@electron/llm

Load and use an LLM model directly in Electron. Experimental.

  • v1.1.1
  • 24.91
  • Published

pllama.rn

React Native binding of llama.cpp

  • v0.4.4
  • 22.91
  • Published

llama-cpp-capacitor

A native Capacitor plugin that embeds llama.cpp directly into mobile apps, enabling offline AI inference with comprehensive support for text generation, multimodal processing, TTS, LoRA adapters, and more.

  • v0.0.13
  • 22.07
  • Published

@aibrow/node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

  • v1.7.0
  • 19.79
  • Published

llama.cpp-ts

Node.js bindings for LlamaCPP, a C++ library for running language models.

  • v1.2.0
  • 19.38
  • Published

custom-koya-node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.

  • v0.1.0
  • 16.34
  • Published

llama-node-fixed

A robust LLaMA Node.js library with enhanced error handling and segfault fixes

  • v1.0.1
  • 14.90
  • Published

quiad

Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.

  • v1.3.1
  • 13.87
  • Published

inferra-llama

React Native binding of llama.cpp for Inferra

  • v1.8.6
  • 12.33
  • Published

llama.native.js

Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can do inference with the host of the model.

  • v1.1.0
  • 9.78
  • Published

llama-ggml.js

Serve WebSocket access to 4/5-bit quantized GGML LLMs based on Meta's LLaMA model with llama.cpp

  • v0.1.0
  • 7.14
  • Published

inferra-llama.rn

React Native binding of llama.cpp for Inferra

  • v1.8.0
  • 4.49
  • Published

grammar-builder

A simple grammar builder compatible with GBNF (llama.cpp)

  • v0.0.5
  • 0.00
  • Published
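GBNF is llama.cpp's grammar format for constraining what a model is allowed to generate. As a rough illustration of the kind of grammar such a builder would emit (a hand-written sketch, not taken from grammar-builder's documentation), here is a minimal GBNF grammar that forces the output to be a small JSON object with a yes/no answer:

```gbnf
# Output must be {"answer": "yes"} or {"answer": "no"} (whitespace allowed)
root   ::= "{" ws "\"answer\"" ws ":" ws answer ws "}"
answer ::= "\"yes\"" | "\"no\""
ws     ::= [ \t\n]*
```

A grammar like this is typically passed to llama.cpp's sampler, which masks out any token that would take the output outside the grammar.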