JSPM

Found 23 results for llama.cpp

custom-koya-node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Forces a JSON schema on the model output at the generation level.

  • v0.1.0
  • 15.42
  • Published
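To illustrate what "forcing a JSON schema on the model output" buys you, here is a minimal, self-contained Python sketch of the underlying idea: raw model text is only accepted if it parses as JSON and matches an expected shape. This is a conceptual illustration, not this package's API; the schema, the `conforms` helper, and the example strings are all invented for the sketch.

```python
import json

# Hypothetical schema: the flat shape we want the model's reply to take.
SCHEMA = {"name": str, "age": int}

def conforms(raw: str, schema: dict) -> bool:
    """Return True if `raw` parses as JSON and matches the flat schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict) or set(data) != set(schema):
        return False
    return all(isinstance(data[k], t) for t_key, (k, t) in enumerate(schema.items()))

# A schema-enforcing wrapper would constrain decoding (or retry) until this holds.
print(conforms('{"name": "Ada", "age": 36}', SCHEMA))            # True
print(conforms('Sure! Here is the JSON you asked for', SCHEMA))  # False
```

Enforcing the schema at the generation level (as the package description claims) is stronger than validating after the fact: the sampler is never allowed to emit tokens that would leave the grammar.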

llama.native.js

Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can do inference with the host of the model.

  • v1.1.0
  • 13.84
  • Published

llama-ggml.js

Serve 4/5-bit quantized GGML LLMs based on Meta's LLaMa model over WebSocket, using llama.cpp.

  • v0.1.0
  • 11.00
  • Published
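The last two packages wrap the same basic pattern: a client sends a prompt over a socket, the host runs inference, and the reply streams back. A minimal, self-contained Python sketch of that request/response loop, using stdlib sockets with a stand-in for the model (none of this is these packages' code; all names are invented):

```python
import socket
import threading

def fake_infer(prompt: str) -> str:
    # Stand-in for llama.cpp inference; a real server would run the model here.
    return f"echo: {prompt}"

def serve_once(host: str = "127.0.0.1") -> int:
    """Serve a single inference request on an OS-chosen port, then exit."""
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    bound_port = srv.getsockname()[1]

    def handle() -> None:
        conn, _ = srv.accept()
        with conn:
            prompt = conn.recv(4096).decode()
            conn.sendall(fake_infer(prompt).encode())
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return bound_port

# Client side: connect, send a prompt, read the completion.
port = serve_once()
cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(4096).decode()
cli.close()
print(reply)  # echo: hello
```

The real packages layer socket.io / WebSocket framing and streaming token output on top of this shape, but the host-runs-the-model, client-sends-prompts division of labor is the same.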