llm-interface
Introduction
The LLM Interface project is a versatile wrapper for interacting with multiple Large Language Model (LLM) APIs. It simplifies integrating providers such as OpenAI, Anthropic, Google Gemini, Groq, Reka AI, and LlamaCPP into your applications by exposing a single, unified interface for sending messages and receiving responses, so developers can work with multiple LLMs without handling the intricacies of each API.
Features
- Unified Interface: A single, consistent interface to interact with multiple LLM APIs.
- Dynamic Module Loading: Automatically loads and manages different LLM handlers.
- Error Handling: Robust error handling mechanisms to ensure reliable API interactions.
- Extensible: Easily extendable to support additional LLM providers as needed.
- JSON Output: Easy-to-use JSON output for OpenAI and Gemini responses (see the sketch after this list).
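As a rough illustration of the JSON output feature, the sketch below passes OpenAI's response_format parameter through the options object. This assumes sendMessage forwards extra options to the underlying API, as the max_tokens option in the usage example further down suggests; the exact mechanism may differ.

const handlers = require("llm-interface");

// Sketch only: assumes extra options are passed through to the OpenAI API.
const openai = new handlers.openai(process.env.OPENAI_API_KEY);

const message = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "Respond only with valid JSON." },
    { role: "user", content: "List three LLM providers as a JSON array." },
  ],
};

openai
  .sendMessage(message, {
    max_tokens: 150,
    response_format: { type: "json_object" }, // OpenAI's JSON mode
  })
  .then((response) => {
    // The response should now contain a JSON-formatted string.
    console.log(response);
  })
  .catch((error) => console.error(error));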
Dependencies
The project relies on several npm packages and APIs. Here are the primary dependencies:
- dotenv: For managing environment variables.
- axios: For making HTTP requests (used in LlamaCPP).
- @anthropic-ai/sdk: SDK for interacting with the Anthropic API.
- @google/generative-ai: SDK for interacting with the Google Gemini API.
- groq-sdk: SDK for interacting with the Groq API.
- openai: SDK for interacting with the OpenAI API.
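Because dotenv supplies the API keys used throughout the examples, loading them is a one-liner. A minimal sketch, assuming a .env file in the project root with an OPENAI_API_KEY entry:

// Load variables from .env into process.env before anything reads them.
require("dotenv").config();

const handlers = require("llm-interface");
const openai = new handlers.openai(process.env.OPENAI_API_KEY);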
Installation
To install the llm-interface package, you can use npm:
npm install llm-interface

Usage
Example
const handlers = require("llm-interface");

// Instantiate the OpenAI handler with an API key from the environment.
const openai = new handlers.openai(process.env.OPENAI_API_KEY);
const message = {
model: "gpt-3.5-turbo",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Explain the importance of low latency LLMs." },
],
};
openai
.sendMessage(message, { max_tokens: 150 })
.then((response) => {
console.log(response);
})
.catch((error) => {
console.error(error);
});

Additional usage examples and an API reference are available.
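Because the interface is unified, switching providers should only require swapping the handler. The following is a hedged sketch for Google Gemini; the handler name gemini, its constructor signature, and the reuse of the same message and sendMessage shape are assumptions based on the OpenAI example above, not confirmed API:

const handlers = require("llm-interface");

// Assumed handler name and constructor; consult the API reference for specifics.
const gemini = new handlers.gemini(process.env.GEMINI_API_KEY);

const message = {
  model: "gemini-pro",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain the importance of low latency LLMs." },
  ],
};

gemini
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => console.log(response))
  .catch((error) => console.error(error));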
Running Tests
The project includes tests for each LLM handler. To run the tests, use the following command:
npm test

Contribute
Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements.
License
This project is licensed under the MIT License - see the LICENSE file for details.