JSPM

llamaindex

0.0.0-20240314032004
  • Downloads 39367
  • License MIT

Package Exports

  • llamaindex
  • llamaindex/ChatHistory
  • llamaindex/GlobalsHelper
  • llamaindex/Node
  • llamaindex/OutputParser
  • llamaindex/Prompt
  • llamaindex/PromptHelper
  • llamaindex/QuestionGenerator
  • llamaindex/Response
  • llamaindex/Retriever
  • llamaindex/ServiceContext
  • llamaindex/TextSplitter
  • llamaindex/agent/index
  • llamaindex/agent/openai/base
  • llamaindex/agent/openai/types/chat
  • llamaindex/agent/openai/utils
  • llamaindex/agent/openai/worker
  • llamaindex/agent/react/base
  • llamaindex/agent/react/formatter
  • llamaindex/agent/react/outputParser
  • llamaindex/agent/react/prompts
  • llamaindex/agent/react/types
  • llamaindex/agent/react/worker
  • llamaindex/agent/runner/base
  • llamaindex/agent/runner/types
  • llamaindex/agent/types
  • llamaindex/agent/utils
  • llamaindex/callbacks/CallbackManager
  • llamaindex/cjs/ChatHistory
  • llamaindex/cjs/GlobalsHelper
  • llamaindex/cjs/Node
  • llamaindex/cjs/OutputParser
  • llamaindex/cjs/Prompt
  • llamaindex/cjs/PromptHelper
  • llamaindex/cjs/QuestionGenerator
  • llamaindex/cjs/Response
  • llamaindex/cjs/Retriever
  • llamaindex/cjs/ServiceContext
  • llamaindex/cjs/TextSplitter
  • llamaindex/cjs/agent/index
  • llamaindex/cjs/agent/openai/base
  • llamaindex/cjs/agent/openai/types/chat
  • llamaindex/cjs/agent/openai/utils
  • llamaindex/cjs/agent/openai/worker
  • llamaindex/cjs/agent/react/base
  • llamaindex/cjs/agent/react/formatter
  • llamaindex/cjs/agent/react/outputParser
  • llamaindex/cjs/agent/react/prompts
  • llamaindex/cjs/agent/react/types
  • llamaindex/cjs/agent/react/worker
  • llamaindex/cjs/agent/runner/base
  • llamaindex/cjs/agent/runner/types
  • llamaindex/cjs/agent/types
  • llamaindex/cjs/agent/utils
  • llamaindex/cjs/callbacks/CallbackManager
  • llamaindex/cjs/cloud/LlamaCloudIndex
  • llamaindex/cjs/cloud/LlamaCloudRetriever
  • llamaindex/cjs/cloud/index
  • llamaindex/cjs/cloud/types
  • llamaindex/cjs/cloud/utils
  • llamaindex/cjs/constants
  • llamaindex/cjs/embeddings/ClipEmbedding
  • llamaindex/cjs/embeddings/HuggingFaceEmbedding
  • llamaindex/cjs/embeddings/MistralAIEmbedding
  • llamaindex/cjs/embeddings/MultiModalEmbedding
  • llamaindex/cjs/embeddings/OllamaEmbedding
  • llamaindex/cjs/embeddings/OpenAIEmbedding
  • llamaindex/cjs/embeddings/fireworks
  • llamaindex/cjs/embeddings/index
  • llamaindex/cjs/embeddings/together
  • llamaindex/cjs/embeddings/types
  • llamaindex/cjs/embeddings/utils
  • llamaindex/cjs/engines/chat/CondenseQuestionChatEngine
  • llamaindex/cjs/engines/chat/ContextChatEngine
  • llamaindex/cjs/engines/chat/DefaultContextGenerator
  • llamaindex/cjs/engines/chat/SimpleChatEngine
  • llamaindex/cjs/engines/chat/index
  • llamaindex/cjs/engines/chat/types
  • llamaindex/cjs/engines/query/RetrieverQueryEngine
  • llamaindex/cjs/engines/query/RouterQueryEngine
  • llamaindex/cjs/engines/query/SubQuestionQueryEngine
  • llamaindex/cjs/engines/query/index
  • llamaindex/cjs/engines/query/types
  • llamaindex/cjs/evaluation/Correctness
  • llamaindex/cjs/evaluation/Faithfulness
  • llamaindex/cjs/evaluation/Relevancy
  • llamaindex/cjs/evaluation/index
  • llamaindex/cjs/evaluation/prompts
  • llamaindex/cjs/evaluation/types
  • llamaindex/cjs/evaluation/utils
  • llamaindex/cjs/extractors/MetadataExtractors
  • llamaindex/cjs/extractors/index
  • llamaindex/cjs/extractors/prompts
  • llamaindex/cjs/extractors/types
  • llamaindex/cjs/index
  • llamaindex/cjs/indices/BaseIndex
  • llamaindex/cjs/indices/IndexStruct
  • llamaindex/cjs/indices/index
  • llamaindex/cjs/indices/json-to-index-struct
  • llamaindex/cjs/indices/keyword/index
  • llamaindex/cjs/indices/keyword/utils
  • llamaindex/cjs/indices/summary/index
  • llamaindex/cjs/indices/summary/utils
  • llamaindex/cjs/indices/vectorStore/index
  • llamaindex/cjs/ingestion/IngestionCache
  • llamaindex/cjs/ingestion/IngestionPipeline
  • llamaindex/cjs/ingestion/index
  • llamaindex/cjs/ingestion/strategies/DuplicatesStrategy
  • llamaindex/cjs/ingestion/strategies/UpsertsAndDeleteStrategy
  • llamaindex/cjs/ingestion/strategies/UpsertsStrategy
  • llamaindex/cjs/ingestion/strategies/classify
  • llamaindex/cjs/ingestion/strategies/index
  • llamaindex/cjs/ingestion/types
  • llamaindex/cjs/llm/LLM
  • llamaindex/cjs/llm/anthropic
  • llamaindex/cjs/llm/azure
  • llamaindex/cjs/llm/base
  • llamaindex/cjs/llm/fireworks
  • llamaindex/cjs/llm/groq
  • llamaindex/cjs/llm/index
  • llamaindex/cjs/llm/mistral
  • llamaindex/cjs/llm/ollama
  • llamaindex/cjs/llm/open_ai
  • llamaindex/cjs/llm/portkey
  • llamaindex/cjs/llm/replicate_ai
  • llamaindex/cjs/llm/together
  • llamaindex/cjs/llm/types
  • llamaindex/cjs/llm/utils
  • llamaindex/cjs/memory/ChatMemoryBuffer
  • llamaindex/cjs/memory/types
  • llamaindex/cjs/nodeParsers/MarkdownNodeParser
  • llamaindex/cjs/nodeParsers/SentenceWindowNodeParser
  • llamaindex/cjs/nodeParsers/SimpleNodeParser
  • llamaindex/cjs/nodeParsers/index
  • llamaindex/cjs/nodeParsers/types
  • llamaindex/cjs/nodeParsers/utils
  • llamaindex/cjs/objects/base
  • llamaindex/cjs/objects/index
  • llamaindex/cjs/outputParsers/selectors
  • llamaindex/cjs/postprocessors/MetadataReplacementPostProcessor
  • llamaindex/cjs/postprocessors/SimilarityPostprocessor
  • llamaindex/cjs/postprocessors/index
  • llamaindex/cjs/postprocessors/rerankers/CohereRerank
  • llamaindex/cjs/postprocessors/rerankers/index
  • llamaindex/cjs/postprocessors/types
  • llamaindex/cjs/prompts/Mixin
  • llamaindex/cjs/prompts/index
  • llamaindex/cjs/readers/AssemblyAIReader
  • llamaindex/cjs/readers/CSVReader
  • llamaindex/cjs/readers/DocxReader
  • llamaindex/cjs/readers/HTMLReader
  • llamaindex/cjs/readers/ImageReader
  • llamaindex/cjs/readers/LlamaParseReader
  • llamaindex/cjs/readers/MarkdownReader
  • llamaindex/cjs/readers/NotionReader
  • llamaindex/cjs/readers/PDFReader
  • llamaindex/cjs/readers/SimpleDirectoryReader
  • llamaindex/cjs/readers/SimpleMongoReader
  • llamaindex/cjs/readers/index
  • llamaindex/cjs/readers/type
  • llamaindex/cjs/selectors/base
  • llamaindex/cjs/selectors/index
  • llamaindex/cjs/selectors/llmSelectors
  • llamaindex/cjs/selectors/prompts
  • llamaindex/cjs/selectors/utils
  • llamaindex/cjs/storage/FileSystem
  • llamaindex/cjs/storage/StorageContext
  • llamaindex/cjs/storage/chatStore/SimpleChatStore
  • llamaindex/cjs/storage/chatStore/types
  • llamaindex/cjs/storage/constants
  • llamaindex/cjs/storage/docStore/KVDocumentStore
  • llamaindex/cjs/storage/docStore/SimpleDocumentStore
  • llamaindex/cjs/storage/docStore/types
  • llamaindex/cjs/storage/docStore/utils
  • llamaindex/cjs/storage/index
  • llamaindex/cjs/storage/indexStore/KVIndexStore
  • llamaindex/cjs/storage/indexStore/SimpleIndexStore
  • llamaindex/cjs/storage/indexStore/types
  • llamaindex/cjs/storage/kvStore/SimpleKVStore
  • llamaindex/cjs/storage/kvStore/types
  • llamaindex/cjs/storage/vectorStore/AstraDBVectorStore
  • llamaindex/cjs/storage/vectorStore/ChromaVectorStore
  • llamaindex/cjs/storage/vectorStore/MilvusVectorStore
  • llamaindex/cjs/storage/vectorStore/MongoDBAtlasVectorStore
  • llamaindex/cjs/storage/vectorStore/PGVectorStore
  • llamaindex/cjs/storage/vectorStore/PineconeVectorStore
  • llamaindex/cjs/storage/vectorStore/QdrantVectorStore
  • llamaindex/cjs/storage/vectorStore/SimpleVectorStore
  • llamaindex/cjs/storage/vectorStore/types
  • llamaindex/cjs/storage/vectorStore/utils
  • llamaindex/cjs/synthesizers/MultiModalResponseSynthesizer
  • llamaindex/cjs/synthesizers/ResponseSynthesizer
  • llamaindex/cjs/synthesizers/builders
  • llamaindex/cjs/synthesizers/index
  • llamaindex/cjs/synthesizers/types
  • llamaindex/cjs/tools/QueryEngineTool
  • llamaindex/cjs/tools/functionTool
  • llamaindex/cjs/tools/index
  • llamaindex/cjs/tools/types
  • llamaindex/cjs/tools/utils
  • llamaindex/cjs/types
  • llamaindex/cloud/LlamaCloudIndex
  • llamaindex/cloud/LlamaCloudRetriever
  • llamaindex/cloud/index
  • llamaindex/cloud/types
  • llamaindex/cloud/utils
  • llamaindex/constants
  • llamaindex/embeddings/ClipEmbedding
  • llamaindex/embeddings/HuggingFaceEmbedding
  • llamaindex/embeddings/MistralAIEmbedding
  • llamaindex/embeddings/MultiModalEmbedding
  • llamaindex/embeddings/OllamaEmbedding
  • llamaindex/embeddings/OpenAIEmbedding
  • llamaindex/embeddings/fireworks
  • llamaindex/embeddings/index
  • llamaindex/embeddings/together
  • llamaindex/embeddings/types
  • llamaindex/embeddings/utils
  • llamaindex/engines/chat/CondenseQuestionChatEngine
  • llamaindex/engines/chat/ContextChatEngine
  • llamaindex/engines/chat/DefaultContextGenerator
  • llamaindex/engines/chat/SimpleChatEngine
  • llamaindex/engines/chat/index
  • llamaindex/engines/chat/types
  • llamaindex/engines/query/RetrieverQueryEngine
  • llamaindex/engines/query/RouterQueryEngine
  • llamaindex/engines/query/SubQuestionQueryEngine
  • llamaindex/engines/query/index
  • llamaindex/engines/query/types
  • llamaindex/evaluation/Correctness
  • llamaindex/evaluation/Faithfulness
  • llamaindex/evaluation/Relevancy
  • llamaindex/evaluation/index
  • llamaindex/evaluation/prompts
  • llamaindex/evaluation/types
  • llamaindex/evaluation/utils
  • llamaindex/extractors/MetadataExtractors
  • llamaindex/extractors/index
  • llamaindex/extractors/prompts
  • llamaindex/extractors/types
  • llamaindex/index
  • llamaindex/indices/BaseIndex
  • llamaindex/indices/IndexStruct
  • llamaindex/indices/index
  • llamaindex/indices/json-to-index-struct
  • llamaindex/indices/keyword/index
  • llamaindex/indices/keyword/utils
  • llamaindex/indices/summary/index
  • llamaindex/indices/summary/utils
  • llamaindex/indices/vectorStore/index
  • llamaindex/ingestion/IngestionCache
  • llamaindex/ingestion/IngestionPipeline
  • llamaindex/ingestion/index
  • llamaindex/ingestion/strategies/DuplicatesStrategy
  • llamaindex/ingestion/strategies/UpsertsAndDeleteStrategy
  • llamaindex/ingestion/strategies/UpsertsStrategy
  • llamaindex/ingestion/strategies/classify
  • llamaindex/ingestion/strategies/index
  • llamaindex/ingestion/types
  • llamaindex/llm/LLM
  • llamaindex/llm/anthropic
  • llamaindex/llm/azure
  • llamaindex/llm/base
  • llamaindex/llm/fireworks
  • llamaindex/llm/groq
  • llamaindex/llm/index
  • llamaindex/llm/mistral
  • llamaindex/llm/ollama
  • llamaindex/llm/open_ai
  • llamaindex/llm/portkey
  • llamaindex/llm/replicate_ai
  • llamaindex/llm/together
  • llamaindex/llm/types
  • llamaindex/llm/utils
  • llamaindex/memory/ChatMemoryBuffer
  • llamaindex/memory/types
  • llamaindex/nodeParsers/MarkdownNodeParser
  • llamaindex/nodeParsers/SentenceWindowNodeParser
  • llamaindex/nodeParsers/SimpleNodeParser
  • llamaindex/nodeParsers/index
  • llamaindex/nodeParsers/types
  • llamaindex/nodeParsers/utils
  • llamaindex/objects/base
  • llamaindex/objects/index
  • llamaindex/outputParsers/selectors
  • llamaindex/postprocessors/MetadataReplacementPostProcessor
  • llamaindex/postprocessors/SimilarityPostprocessor
  • llamaindex/postprocessors/index
  • llamaindex/postprocessors/rerankers/CohereRerank
  • llamaindex/postprocessors/rerankers/index
  • llamaindex/postprocessors/types
  • llamaindex/prompts/Mixin
  • llamaindex/prompts/index
  • llamaindex/readers/AssemblyAIReader
  • llamaindex/readers/CSVReader
  • llamaindex/readers/DocxReader
  • llamaindex/readers/HTMLReader
  • llamaindex/readers/ImageReader
  • llamaindex/readers/LlamaParseReader
  • llamaindex/readers/MarkdownReader
  • llamaindex/readers/NotionReader
  • llamaindex/readers/PDFReader
  • llamaindex/readers/SimpleDirectoryReader
  • llamaindex/readers/SimpleMongoReader
  • llamaindex/readers/index
  • llamaindex/readers/type
  • llamaindex/selectors/base
  • llamaindex/selectors/index
  • llamaindex/selectors/llmSelectors
  • llamaindex/selectors/prompts
  • llamaindex/selectors/utils
  • llamaindex/storage/FileSystem
  • llamaindex/storage/StorageContext
  • llamaindex/storage/chatStore/SimpleChatStore
  • llamaindex/storage/chatStore/types
  • llamaindex/storage/constants
  • llamaindex/storage/docStore/KVDocumentStore
  • llamaindex/storage/docStore/SimpleDocumentStore
  • llamaindex/storage/docStore/types
  • llamaindex/storage/docStore/utils
  • llamaindex/storage/index
  • llamaindex/storage/indexStore/KVIndexStore
  • llamaindex/storage/indexStore/SimpleIndexStore
  • llamaindex/storage/indexStore/types
  • llamaindex/storage/kvStore/SimpleKVStore
  • llamaindex/storage/kvStore/types
  • llamaindex/storage/vectorStore/AstraDBVectorStore
  • llamaindex/storage/vectorStore/ChromaVectorStore
  • llamaindex/storage/vectorStore/MilvusVectorStore
  • llamaindex/storage/vectorStore/MongoDBAtlasVectorStore
  • llamaindex/storage/vectorStore/PGVectorStore
  • llamaindex/storage/vectorStore/PineconeVectorStore
  • llamaindex/storage/vectorStore/QdrantVectorStore
  • llamaindex/storage/vectorStore/SimpleVectorStore
  • llamaindex/storage/vectorStore/types
  • llamaindex/storage/vectorStore/utils
  • llamaindex/synthesizers/MultiModalResponseSynthesizer
  • llamaindex/synthesizers/ResponseSynthesizer
  • llamaindex/synthesizers/builders
  • llamaindex/synthesizers/index
  • llamaindex/synthesizers/types
  • llamaindex/tools/QueryEngineTool
  • llamaindex/tools/functionTool
  • llamaindex/tools/index
  • llamaindex/tools/types
  • llamaindex/tools/utils
  • llamaindex/types
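
Each entry above is a subpath resolved through the package's export map, so modules can be imported either from the package root or from a deep path. A minimal sketch of both styles (the deep-path named export is an assumption based on the file name; check your installed version):

// Root import, as used in the README below:
import { VectorStoreIndex } from "llamaindex";

// Deep import via the export map (assumes the module exports a class
// matching its file name):
import { SimpleDirectoryReader } from "llamaindex/readers/SimpleDirectoryReader";

async function loadFolder() {
  // Hypothetical local folder of text files
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: "./data",
  });
  return VectorStoreIndex.fromDocuments(documents);
}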

Readme

LlamaIndex.TS

LlamaIndex is a data framework for your LLM application.

Use your own data with large language models (LLMs, OpenAI ChatGPT and others) in TypeScript and JavaScript.

Documentation: https://ts.llamaindex.ai/

Try examples online:

Open in Stackblitz

What is LlamaIndex.TS?

LlamaIndex.TS aims to be a lightweight, easy-to-use set of libraries to help you integrate large language models into your applications with your own data.

Getting started with an example:

LlamaIndex.TS requires Node v18 or higher. You can download it from https://nodejs.org or use https://nvm.sh (our preferred option).

In a new folder:

export OPENAI_API_KEY="sk-......" # Replace with your key from https://platform.openai.com/account/api-keys
pnpm init
pnpm install typescript
pnpm exec tsc --init # if needed
pnpm install llamaindex
pnpm install @types/node

Create the file example.ts

// example.ts
import fs from "fs/promises";
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Load essay from abramov.txt in Node
  const essay = await fs.readFile(
    "node_modules/llamaindex/examples/abramov.txt",
    "utf-8",
  );

  // Create Document object with essay
  const document = new Document({ text: essay });

  // Split text and create embeddings. Store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Query the index
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What did the author do in college?",
  });

  // Output response
  console.log(response.toString());
}

main();

Then you can run it using

pnpm dlx ts-node example.ts
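
To avoid recomputing embeddings on every run, the index can be persisted to disk by passing a storage context when it is built. A minimal sketch assuming the storageContextFromDefaults helper exported from the package root (the persist directory is illustrative):

// persist.ts
import fs from "fs/promises";
import {
  Document,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

async function main() {
  const essay = await fs.readFile(
    "node_modules/llamaindex/examples/abramov.txt",
    "utf-8",
  );

  // Persist the document store, index store and vector store under ./storage
  const storageContext = await storageContextFromDefaults({
    persistDir: "./storage",
  });

  const index = await VectorStoreIndex.fromDocuments(
    [new Document({ text: essay })],
    { storageContext },
  );

  const response = await index
    .asQueryEngine()
    .query({ query: "What did the author do in college?" });
  console.log(response.toString());
}

main();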

Playground

Check out our NextJS playground at https://llama-playground.vercel.app/. The source is available at https://github.com/run-llama/ts-playground

Core concepts for getting started:

  • Document: A document represents a text file, PDF file or other contiguous piece of data.

  • Node: The basic data building block. Most commonly, these are parts of the document split into manageable pieces that are small enough to be fed into an embedding model and LLM.

  • Embedding: Embeddings are vectors of floating-point numbers that represent the data in a Node. By comparing the similarity of embeddings, we can derive an understanding of the similarity of two pieces of data. One use case is to compare the embedding of a question with the embeddings of our Nodes to see which Nodes may contain the data needed to answer that question.

  • Indices: Indices store the Nodes and the embeddings of those nodes. QueryEngines retrieve Nodes from these Indices using embedding similarity.

  • QueryEngine: A query engine takes the query you put in and gives you back the result. Query engines generally combine a pre-built prompt with selected Nodes from your Index to give the LLM the context it needs to answer your query (see the sketch after this list).

  • ChatEngine: A ChatEngine helps you build a chatbot that will interact with your Indices.

  • SimplePrompt: A simple standardized function call definition that takes in inputs and formats them in a template literal. SimplePrompts can be specialized using currying and combined using other SimplePrompt functions.
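
Putting the retriever and query engine concepts together on top of the README example above, a minimal sketch (method signatures may differ slightly between versions; in particular, retrieve may take an options object instead of a string in newer releases):

import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Document -> Nodes -> embeddings, stored in an index
  const document = new Document({ text: "Alice studied physics in college." });
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Retriever: fetch the Nodes whose embeddings are most similar to the query
  const retriever = index.asRetriever();
  const nodesWithScores = await retriever.retrieve("What did Alice study?");
  console.log(nodesWithScores.length);

  // QueryEngine: retrieved Nodes plus a prompt give the LLM the context to answer
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({ query: "What did Alice study?" });
  console.log(response.toString());
}

main();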

Note: NextJS:

If you're using the NextJS App Router, you'll need to use the Node.js runtime (the default) and add the following config to your next.config.js so that imports/exports resolve the same way they do in Node.

export const runtime = "nodejs"; // default

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    serverComponentsExternalPackages: ["pdf2json"],
  },
  webpack: (config) => {
    config.resolve.alias = {
      ...config.resolve.alias,
      sharp$: false,
      "onnxruntime-node$": false,
    };
    return config;
  },
};

module.exports = nextConfig;

NextJS with Milvus:

As proto files are not loaded by default in NextJS, you'll need to add the following to your next.config.js to have it load the proto files.

const path = require("path");
const CopyWebpackPlugin = require("copy-webpack-plugin");

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack: (config, { isServer }) => {
    if (isServer) {
      // Copy the proto files to the server build directory
      config.plugins.push(
        new CopyWebpackPlugin({
          patterns: [
            {
              from: path.join(
                __dirname,
                "node_modules/@zilliz/milvus2-sdk-node/dist",
              ),
              to: path.join(__dirname, ".next"),
            },
          ],
        }),
      );
    }
    // Important: return the modified config
    return config;
  },
};

module.exports = nextConfig;

Supported LLMs:

  • OpenAI GPT-3.5-turbo and GPT-4
  • Anthropic Claude Instant and Claude 2
  • Groq LLMs
  • Llama2 Chat LLMs (70B, 13B, and 7B parameters)
  • MistralAI Chat LLMs
  • Fireworks Chat LLMs
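
Any of these can be plugged in place of the default OpenAI model by passing an LLM through a service context when the index is built. A minimal sketch assuming the Anthropic class and the serviceContextFromDefaults helper exported from the package root (the model name is illustrative):

import {
  Anthropic,
  Document,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

async function main() {
  // Use Claude instead of the default OpenAI model
  // (model name is illustrative; check the list your installed version supports)
  const serviceContext = serviceContextFromDefaults({
    llm: new Anthropic({ model: "claude-2.1" }),
  });

  const document = new Document({ text: "LlamaIndex.TS supports many LLMs." });
  const index = await VectorStoreIndex.fromDocuments([document], {
    serviceContext,
  });

  const response = await index
    .asQueryEngine()
    .query({ query: "Which LLMs are supported?" });
  console.log(response.toString());
}

main();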

Contributing:

We are in the very early days of LlamaIndex.TS. If you're interested in hacking on it with us, check out our contributing guide.

Bugs? Questions?

Please join our Discord! https://discord.com/invite/eN6D2HQ4aX