LangChain integration with the Endee vector database.

Package Exports

  • endee-langchain
  • endee-langchain/src/index.ts

This package does not declare an "exports" field, so the exports above were detected and optimized automatically by JSPM. If a package subpath is missing, consider opening an issue on the original package (endee-langchain) requesting "exports" support; otherwise, create a JSPM override to customize the exports field for this package.


Endee LangChain Integration

A LangChain integration for the Endee vector database, enabling seamless vector storage and retrieval for RAG (Retrieval-Augmented Generation) applications.

Installation

npm install endee-langchain

Prerequisites

  1. An Endee account and API key
  2. An OpenAI API key (or another embeddings provider compatible with LangChain)
  3. An existing Endee index (or create one programmatically)

Quick Start

1. Set Up Environment Variables

Create a .env file in your project root:

OPENAI_API_KEY=your-openai-api-key
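Note that Node.js does not read .env files automatically (outside of the `--env-file` flag in newer Node versions). A common approach, assuming the `dotenv` package is installed, is to load it at the top of your entrypoint:

```typescript
// Load variables from .env into process.env before anything reads them.
// Assumes the `dotenv` package: npm install dotenv
import "dotenv/config";
```

After this import, `process.env.OPENAI_API_KEY` is populated for the rest of your code.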

2. Basic Usage

import { EndeeVectorStore } from "endee-langchain";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Endee as EndeeClient } from "endee";
import { Document } from "@langchain/core/documents";

// Initialize embeddings
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
  apiKey: process.env.OPENAI_API_KEY,
});

// Initialize Endee client and get index
const endeeClient = new EndeeClient("your-endee-api-key");
const index = await endeeClient.getIndex("your-index-name");

// Create vector store
const vectorStore = new EndeeVectorStore(embeddings, {
  endeeIndex: index,
});

// Add documents
const documents = [
  new Document({
    pageContent: "The powerhouse of the cell is the mitochondria",
    metadata: { source: "biology-textbook" },
  }),
  new Document({
    pageContent: "Buildings are made out of brick",
    metadata: { source: "architecture-guide" },
  }),
];

const ids = await vectorStore.addDocuments(documents);
console.log("Added documents with IDs:", ids);

// Search for similar documents
const results = await vectorStore.similaritySearch("biology", 3);
console.log(results);

Creating an Index

If you need to create an index first:

const endeeClient = new EndeeClient("your-endee-api-key");

await endeeClient.createIndex({
  name: "my-index",
  dimension: 1536, // Must match your embedding model dimension
  spaceType: "cosine", // or "euclidean", "dotproduct"
  precision: "medium", // or "high", "low"
});

const index = await endeeClient.getIndex("my-index");

Static Factory Methods

fromTexts

Create a vector store from an array of text strings:

const texts = [
  "The powerhouse of the cell is the mitochondria",
  "Buildings are made out of brick",
  "Mitochondria are made out of lipids",
];

const metadatas = [
  { source: "doc1" },
  { source: "doc2" },
  { source: "doc3" },
];

const vectorStore = await EndeeVectorStore.fromTexts(
  texts,
  metadatas,
  embeddings,
  {
    endeeIndex: index,
  }
);

You can also use a single metadata object for all texts:

const vectorStore = await EndeeVectorStore.fromTexts(
  texts,
  { category: "science" }, // Single metadata object
  embeddings,
  { endeeIndex: index }
);

fromDocuments

Create a vector store from existing LangChain Documents:

const documents = [
  new Document({
    pageContent: "Document content here",
    metadata: { source: "example.com" },
  }),
  // ... more documents
];

const vectorStore = await EndeeVectorStore.fromDocuments(
  documents,
  embeddings,
  { endeeIndex: index }
);

API Reference

Constructor

new EndeeVectorStore(
  embeddings: EmbeddingsInterface,
  params: { endeeIndex: EndeeIndex }
)

Methods

addDocuments(documents, options?)

Add documents to the vector store. Documents are automatically embedded.

const ids = await vectorStore.addDocuments(documents, {
  ids: ["custom-id-1", "custom-id-2"], // Optional: provide custom IDs
});

addVectors(vectors, documents, options?)

Add pre-computed vectors and their corresponding documents.

const vectors = [[0.1, 0.2, ...], [0.3, 0.4, ...]];
const ids = await vectorStore.addVectors(vectors, documents);

similaritySearch(query, k?, filter?)

Search for similar documents by text query.

const results = await vectorStore.similaritySearch("biology", 5);
// Returns: Document[]

similaritySearchWithScore(query, k?, filter?)

Search for similar documents and return similarity scores.

const results = await vectorStore.similaritySearchWithScore("biology", 5);
// Returns: [Document, number][]

maxMarginalRelevanceSearch(query, options)

Perform Maximal Marginal Relevance (MMR) search to balance relevance and diversity.

const results = await vectorStore.maxMarginalRelevanceSearch("biology", {
  k: 5, // Number of documents to return
  fetchK: 20, // Number of candidates to fetch before MMR selection
  lambda: 0.5, // Balance between relevance (1.0) and diversity (0.0)
});
// Returns: Document[]
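Conceptually, MMR greedily picks each next document by the score `lambda * sim(query, doc) - (1 - lambda) * max(sim(doc, alreadySelected))`. A minimal, self-contained sketch of that selection loop (illustrative only, not the library's internals, using raw dot products as the similarity measure):

```typescript
// Dot product as a stand-in similarity measure.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

// Greedy MMR selection: returns indices of the chosen candidates.
// lambda = 1.0 favors pure relevance; lambda = 0.0 favors pure diversity.
function mmrSelect(
  queryVec: number[],
  candidates: number[][],
  k: number,
  lambda: number
): number[] {
  const selected: number[] = [];
  const remaining = candidates.map((_, i) => i);

  while (selected.length < k && remaining.length > 0) {
    let bestIdx = -1;
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = dot(queryVec, candidates[i]);
      // Penalty: similarity to the closest already-selected document.
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => dot(candidates[i], candidates[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected;
}
```

With a high `lambda`, near-duplicates of the top hit still rank well; with a low `lambda`, the second pick jumps to a dissimilar candidate even if it is less relevant.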

delete({ id })

Delete a vector by ID.

await vectorStore.delete({ id: "document-id" });

asRetriever(options?)

Convert the vector store into a LangChain retriever.

const retriever = vectorStore.asRetriever({
  k: 5, // Number of documents to retrieve
  searchType: "similarity", // or "mmr"
  searchKwargs: {
    // For MMR search
    fetchK: 20,
    lambda: 0.5,
  },
});

const results = await retriever.invoke("query text");

Advanced Usage

Using with LangChain Chains

import { RetrievalQAChain } from "langchain/chains";
import { OpenAI } from "@langchain/openai";

const retriever = vectorStore.asRetriever({ k: 5 });
const llm = new OpenAI({ temperature: 0 });

const chain = RetrievalQAChain.fromLLM(llm, retriever);

const answer = await chain.call({
  query: "What is the powerhouse of the cell?",
});
console.log(answer.text);

Batch Operations

The vector store automatically batches operations for efficiency. Documents are added in chunks of 100 by default.
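The chunking behavior can be sketched with a generic helper like the following (hypothetical and for illustration only; the store batches internally, so you do not need to pre-chunk documents yourself):

```typescript
// Split an array into consecutive batches of at most `size` items,
// mirroring the store's default batch size of 100.
function chunk<T>(items: T[], size = 100): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

For example, 250 documents would be sent as three batches of 100, 100, and 50.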

Custom IDs

You can provide custom IDs when adding documents:

const documents = [/* ... */];
const customIds = ["doc-1", "doc-2", "doc-3"];

await vectorStore.addDocuments(documents, { ids: customIds });

Error Handling

The vector store includes error handling for common operations:

try {
  await vectorStore.addDocuments(documents);
} catch (error) {
  console.error("Failed to add documents:", error);
}

try {
  await vectorStore.delete({ id: "some-id" });
} catch (error) {
  console.error("Failed to delete vector:", error);
}

Requirements

  • Node.js 18+
  • TypeScript 4.5+ (if using TypeScript)
  • An active Endee account
  • An embeddings provider (OpenAI, HuggingFace, etc.)

License

ISC

Author

Pankaj Singh (Endee Labs)

Support

For issues and questions, please open an issue on the GitHub repository.