Package Exports
- llm-interface
- llm-interface/src/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM. If any package subpath is missing, it is recommended to post an issue to the original package (llm-interface) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
llm-interface
Introduction
The LLM Interface project is a versatile and comprehensive wrapper for interacting with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including OpenAI, Anthropic, Cohere, Google Gemini, Goose AI, Groq, Mistral AI, Reka AI, and LLaMA.cpp, into your applications by providing a single, unified interface for sending messages and receiving responses, so developers can work with multiple LLMs without worrying about the specific intricacies of each API.
Updates
v0.0.9
- Response Caching: Efficiently caches LLM responses to reduce costs, enhance performance, and minimize redundant requests, with customizable cache timeout settings.
v0.0.8
- Mistral AI: Added support for Mistral AI
- Cohere: Added support for Cohere
v0.0.7
- Goose AI: Added support for Goose AI
Features
- Unified Interface: A single, consistent interface to interact with multiple LLM APIs.
- Dynamic Module Loading: Automatically loads and manages the different LLM interfaces.
- Error Handling: Robust error handling mechanisms to ensure reliable API interactions.
- Extensible: Easily extendable to support additional LLM providers as needed.
- JSON Output: Simple-to-use JSON output for OpenAI and Gemini responses.
- Response Caching: Efficiently caches LLM responses to reduce costs and enhance performance (see the sketch below).
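The cache timeout is configurable per request (see the v0.0.9 update above). The sketch below is illustrative only: it assumes the timeout is supplied as a cacheTimeoutSeconds value alongside the usual request options; that option name and position are assumptions rather than confirmed API, so consult the API reference for the exact signature.
const LLMInterface = require("llm-interface");

// Illustrative sketch: cacheTimeoutSeconds is an assumed option name, not confirmed API.
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);

const message = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Explain the importance of low latency LLMs." }],
};

openai
  // Cache this response for one hour; repeat calls within that window reuse the cached result.
  .sendMessage(message, { max_tokens: 150, cacheTimeoutSeconds: 3600 })
  .then((response) => console.log(response))
  .catch((error) => console.error(error));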
Dependencies
The project relies on several npm packages and APIs. Here are the primary dependencies:
- axios: For making HTTP requests (used for Cohere, Goose AI, LLaMA.cpp, Mistral, and Reka AI).
- @anthropic-ai/sdk: SDK for interacting with the Anthropic API.
- @google/generative-ai: SDK for interacting with the Google Gemini API.
- groq-sdk: SDK for interacting with the Groq API.
- openai: SDK for interacting with the OpenAI API.
- dotenv: For managing environment variables; used by the test cases.
Installation
To install the llm-interface package, you can use npm:
npm install llm-interface
Usage
Example
Import llm-interface using:
const LLMInterface = require("llm-interface");
or
import LLMInterface from "llm-interface";
then call the handler you want to use:
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
const message = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain the importance of low latency LLMs." },
  ],
};

openai
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
    console.log(response);
  })
  .catch((error) => {
    console.error(error);
  });
Additional usage examples and an API reference are available. You may also wish to review the test cases for further examples.
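Because the interface is unified, the other supported providers are intended to follow the same constructor and sendMessage pattern. The sketch below is an assumption-based illustration: the gemini handler name, the GEMINI_API_KEY variable, and the model name are placeholders, so verify them against the API reference and test cases.
const LLMInterface = require("llm-interface");

// Illustrative sketch: assumes the Gemini handler is exposed as LLMInterface.gemini
// and accepts the same message shape as the OpenAI example above.
const gemini = new LLMInterface.gemini(process.env.GEMINI_API_KEY);

const geminiMessage = {
  model: "gemini-1.5-flash", // placeholder model name
  messages: [
    { role: "user", content: "Explain the importance of low latency LLMs." },
  ],
};

gemini
  .sendMessage(geminiMessage, { max_tokens: 150 })
  .then((response) => console.log(response))
  .catch((error) => console.error(error));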
Running Tests
The project includes tests for each LLM handler. To run the tests, use the following command:
npm test
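The test cases load API keys from the environment via dotenv, so a .env file in the project root is expected. The entries below are a sketch; apart from OPENAI_API_KEY, which the usage example references, the exact variable names each handler expects are assumptions, so check the test cases before relying on them.
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key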
Contribute
Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements.
License
This project is licensed under the MIT License - see the LICENSE file for details.