# 🚀 AI-embed-search: Lightweight AI Semantic Search Engine

Smart. Simple. Local.

AI-powered semantic search in TypeScript using transformer embeddings. No cloud, no API keys, 100% offline.
## 🚀 Features

- 🧠 AI-powered semantic understanding
- ✨ Super simple API: `initEmbedder`, `embed`, `search`, `clearStore`
- ⚡️ Fast cosine-similarity-based retrieval
- 📦 In-memory vector store (no DB required)
- 🌐 Fully offline via `@xenova/transformers` (WASM/Node)
## 📦 Installation

```bash
npm install ai-embed-search
```

or

```bash
yarn add ai-embed-search
```

Requires Node.js ≥ 18 or a modern browser for WASM.
## ⚡ Quick Start

```ts
import { initEmbedder, embed, search } from 'ai-embed-search';

await initEmbedder();

await embed([
  { id: '1', text: 'iPhone 15 Pro Max', meta: { brand: 'Apple', type: 'phone' } },
  { id: '2', text: 'Samsung Galaxy S24 Ultra', meta: { brand: 'Samsung', type: 'phone' } },
  { id: '3', text: 'Apple MacBook Pro', meta: { brand: 'Apple', type: 'laptop' } }
]);

const results = await search('apple phone', 2);
console.log(results);
/*
[
  { id: '1', text: 'iPhone 15 Pro Max', meta: { brand: 'Apple', type: 'phone' }, score: 0.92 },
  { id: '3', text: 'Apple MacBook Pro', meta: { brand: 'Apple', type: 'laptop' }, score: 0.75 }
]
*/
```

## 🧠 1. Initialize the Embedding Model
```ts
import { initEmbedder } from 'ai-embed-search';

await initEmbedder();
```

Loads the MiniLM model via `@xenova/transformers`. Required once at startup.
## 📥 2. Add Items to the Vector Store

```ts
import { embed } from 'ai-embed-search';

await embed([
  { id: 'a1', text: 'Tesla Model S' },
  { id: 'a2', text: 'Electric Vehicle by Tesla' }
]);
```

Embeds and stores vector representations of the given items.
## 🔍 3. Perform Semantic Search

```ts
import { search } from 'ai-embed-search';

const results = await search('fast electric car', 3);
```

Returns:

```ts
[
  { id: 'a1', text: 'Tesla Model S', score: 0.95 },
  { id: 'a2', text: 'Electric Vehicle by Tesla', score: 0.85 }
]
```

## 💾 4. Search with Cached Embeddings (Advanced)
You can store precomputed embeddings in your own DB or file:

```ts
const precomputed = {
  id: 'x1',
  text: 'Apple Watch Series 9',
  vector: [0.11, 0.32, ...] // 384-dim array
};
```

Then use cosine similarity to search across them, or build your own vector store using ai-embed-search functions, as in the sketch below.
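Here is a minimal sketch of that approach. The `cosine` and `rankByCosine` helpers are illustrative, not exports of ai-embed-search:

```ts
type CachedItem = { id: string; text: string; vector: number[] };

// Cosine similarity: dot(a, b) / (|a| * |b|)
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank cached items against a 384-dim query vector, highest score first.
function rankByCosine(queryVector: number[], items: CachedItem[], limit = 5) {
  return items
    .map((item) => ({ ...item, score: cosine(queryVector, item.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```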
## 🧹 5. Clear the Vector Store

```ts
import { clearStore } from 'ai-embed-search';

clearStore(); // Removes all embedded data from memory
```

## 📘 API Reference
### `initEmbedder()`

Initializes the embedding model. Must be called once before using `embed` or `search`.
### `embed(items: { id: string, text: string, meta?: object }[])`

Embeds and stores the provided items in the vector store. Each item must have a unique `id` and a `text`; `meta` is optional and is returned with search results.
### `search(query: string, limit?: number)`

Performs a semantic search for the given query. Returns up to `limit` results sorted by similarity score (default is 5).
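For example (the queries are placeholders):

```ts
import { search } from 'ai-embed-search';

const top5 = await search('wireless earbuds');    // default limit of 5
const top2 = await search('wireless earbuds', 2); // explicit limit
```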
### `cacheFor(limit: number)`

Caches the embeddings for the next `limit` search queries. This is useful for optimizing performance when you know you'll be searching multiple times.
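A usage sketch, assuming `cacheFor` is imported alongside the other exports:

```ts
import { cacheFor, search } from 'ai-embed-search';

cacheFor(10); // cache embeddings for the next 10 search queries

const first = await search('noise-cancelling headphones');
const second = await search('bluetooth speaker'); // both fall within the 10-query cache window
```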
### `clearStore()`

Clears all embedded data from the vector store, freeing up memory.
## 🔧 Development

- Model: MiniLM via `@xenova/transformers`
- Vector type: 384-dim float32 array
- Similarity: cosine similarity
- Storage: in-memory vector store (no database required)
- On-premises: fully offline, no cloud dependencies
## 🔑 SEO Keywords
ai search, semantic search, local ai search, vector search, transformer embeddings, cosine similarity, open source search engine, text embeddings, in-memory search, local search engine, typescript search engine, fast npm search, embeddings in JS, ai search npm package
## License

MIT © 2025 Peter Sibirtsev

## Contributing

Contributions are welcome! Please open an issue or submit a pull request.