JSPM

Found 41 results for llama.cpp

node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

  • v3.12.4
  • 69.56
  • Published

llama.rn

React Native binding of llama.cpp

  • v0.6.12
  • 57.20
  • Published

@scrypted/llm

The Scrypted LLM plugin allows connecting to various LLM providers, or running your own locally.

  • v0.0.59
  • 48.32
  • Published

cui-llama.rn

Fork of llama.rn for ChatterUI

  • v1.8.0
  • 45.26
  • Published

hyllama

llama.cpp GGUF file parser for JavaScript

  • v0.2.2
  • 44.68
  • Published

inference-server

Libraries and server for building AI applications. Adapters to various native bindings allow local inference. Integrate it with your application, or use it as a microservice.

  • v1.0.0-beta.31
  • 28.29
  • Published

@electron/llm

Load and use an LLM model directly in Electron. Experimental.

  • v1.1.1
  • 25.27
  • Published

pllama.rn

React Native binding of llama.cpp

  • v0.4.4
  • 23.24
  • Published

llama.cpp-ts

Node.js bindings for LlamaCPP, a C++ library for running language models.

  • v1.2.0
  • 19.66
  • Published

@aibrow/node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

  • v1.7.0
  • 19.47
  • Published

custom-koya-node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

  • v0.1.0
  • 16.58
  • Published

llama-node-fixed

A robust LLaMA Node.js library with enhanced error handling and segfault fixes

  • v1.0.1
  • 15.11
  • Published

quiad

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

  • v1.3.1
  • 14.18
  • Published

inferra-llama

React Native binding of llama.cpp for Inferra

  • v1.8.6
  • 12.73
  • Published

llama.native.js

Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can run inference against the host of the model.

  • v1.1.0
  • 10.23
  • Published

llama-ggml.js

Serve 4/5-bit quantized GGML LLMs based on Meta's LLaMA model over WebSocket, using llama.cpp.

  • v0.1.0
  • 7.41
  • Published

inferra-llama.rn

React Native binding of llama.cpp for Inferra

  • v1.8.0
  • 4.55
  • Published

grammar-builder

A simple grammar builder compatible with GBNF (llama.cpp)

  • v0.0.5
  • 0.00
  • Published
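
Several of the packages above revolve around llama.cpp's grammar-constrained sampling: grammar-builder emits GBNF, and the JSON-schema enforcement advertised by node-llama-cpp and its forks is, roughly speaking, a schema compiled down to a grammar of this kind. As a hand-written sketch of the GBNF format (not the output of any of these packages), a grammar that restricts generation to one of two tiny JSON objects looks like this:

```gbnf
# root is the mandatory start rule in GBNF
# allowed outputs: {"answer": "yes"} or {"answer": "no"}
root   ::= "{" ws "\"answer\"" ws ":" ws answer ws "}"
answer ::= "\"yes\"" | "\"no\""
ws     ::= [ \t\n]*
```

At each decoding step the sampler masks out any token that would take the output off this grammar, which is why the constraint holds at the generation level rather than by post-hoc validation.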