Package Exports
- llm-interface
- llm-interface/src/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (llm-interface) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
llm-interface
Introduction
The LLM Interface project is a versatile and comprehensive wrapper designed to interact with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including OpenAI, AI21 Studio, Anthropic, Cohere, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp, into your applications. This project aims to provide a simplified and unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
Features
- Unified Interface: A single, consistent interface to interact with multiple LLM APIs (see the sketch after this list).
- Dynamic Module Loading: Automatically loads and manages the different LLM interfaces.
- Error Handling: Robust error handling mechanisms to ensure reliable API interactions.
- Extensible: Easily extendable to support additional LLM providers as needed.
- JSON Output: Simple-to-use JSON output for OpenAI and Gemini responses.
- Response Caching: Efficiently caches LLM responses to reduce costs and enhance performance.
- Graceful Retries: Automatically retry failed prompts with increasing delays to ensure successful responses.
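As a quick illustration of the unified interface, the sketch below sends the same simple prompt through two providers. It assumes each provider exposes a handler on LLMInterface with the same constructor and sendMessage shape as the OpenAI handler shown in the Usage section; the Groq handler name used here is an assumption.
const LLMInterface = require("llm-interface");

// Both handlers are assumed to share the same constructor/sendMessage shape;
// only the OpenAI handler is shown verbatim in the Usage section below.
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
const groq = new LLMInterface.groq(process.env.GROQ_API_KEY);

const prompt = "Explain the importance of low latency LLMs.";

Promise.all([openai.sendMessage(prompt), groq.sendMessage(prompt)])
  .then(([openaiResponse, groqResponse]) => {
    console.log("OpenAI:", openaiResponse);
    console.log("Groq:", groqResponse);
  })
  .catch((error) => {
    console.error(error);
  });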
Updates
v0.0.11
- Simple Prompt Handler: Added support for simplified prompting.
v0.0.10
- Hugging Face: Added support for new LLM provider Hugging Face (over 150,000 publicly accessible machine learning models).
- Perplexity: Added support for new LLM provider Perplexity.
- AI21: Added support for new LLM provider AI21 Studio.
- JSON Output Improvements: The json_object mode now guarantees the return of a valid JSON object or null (sketched below).
- Graceful Retries: Retry LLM queries upon failure with progressive delays.
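A hedged sketch of json_object mode follows; the response_format option name used here is an assumption for illustration (it is not shown in this readme), so check the API reference for the exact parameter:
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);

openai
  // The response_format option name is an assumption for illustration only.
  .sendMessage("List three LLM providers as a JSON array.", {
    max_tokens: 150,
    response_format: "json_object",
  })
  .then((response) => {
    // In json_object mode the response should be a valid JSON object or null.
    console.log(response);
  })
  .catch((error) => {
    // Graceful retries happen automatically; failed prompts are retried with
    // progressive delays before the error reaches this handler.
    console.error(error);
  });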
v0.0.9
- Response Caching: Efficiently caches LLM responses to reduce costs, enhance performance and minimize redundant requests, with customizable cache timeout settings.
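For example, a cached call might look like the sketch below; the cacheTimeoutSeconds argument is an assumption for illustration (the readme does not show the cache configuration), so consult the API reference for the actual signature:
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);

// Hypothetical cache timeout in seconds; the real configuration mechanism may differ.
const cacheTimeoutSeconds = 3600;

openai
  .sendMessage("Explain the importance of low latency LLMs.", { max_tokens: 150 }, cacheTimeoutSeconds)
  .then((response) => {
    // A repeated prompt within the timeout is served from the flat-cache store
    // instead of triggering another API request.
    console.log(response);
  })
  .catch((error) => {
    console.error(error);
  });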
Dependencies
The project relies on several npm packages and APIs. Here are the primary dependencies:
- axios: For making HTTP requests (used for various HTTP AI APIs).
- @anthropic-ai/sdk: SDK for interacting with the Anthropic API.
- @google/generative-ai: SDK for interacting with the Google Gemini API.
- groq-sdk: SDK for interacting with the Groq API.
- openai: SDK for interacting with the OpenAI API.
- dotenv: For managing environment variables. Used by test cases.
- flat-cache: For caching API responses to improve performance and reduce redundant requests.
- jest: For running test cases.
Installation
To install the llm-interface package, you can use npm:
npm install llm-interface
Usage
Example
Import llm-interface using:
const LLMInterface = require("llm-interface");

or

import LLMInterface from "llm-interface";

then call the handler you want to use:
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
const message = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain the importance of low latency LLMs." },
  ],
};

openai
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
    console.log(response);
  })
  .catch((error) => {
    console.error(error);
  });

or, if you want to keep things simple, you can use:
const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
openai
  .sendMessage("Explain the importance of low latency LLMs.")
  .then((response) => {
    console.log(response);
  })
  .catch((error) => {
    console.error(error);
  });

If you need API Keys, use this starting point. Additional usage examples and an API reference are available. You may also wish to review the test cases for further examples.
Running Tests
The project includes tests for each LLM handler. To run the tests, use the following command:
npm test
Contribute
Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements.
License
This project is licensed under the MIT License - see the LICENSE file for details.