@lancedb/lancedb
LanceDB: A serverless, low-latency vector database for AI applications
Found 94 results for lancedb
OpenClaw enhanced LanceDB memory plugin with hybrid retrieval (Vector + BM25), cross-encoder rerank, multi-scope isolation, long-context chunking, and management CLI
LanceDB implementation of RAG interfaces for vibe-agent-toolkit
Local RAG MCP Server - Easy-to-setup document search with minimal configuration
Give your OpenClaw agent lasting memory: structured facts, semantic search, auto-capture & recall, decay, optional credential vault. Part of Hybrid Memory v3.
OpenClaw memory plugin: local LanceDB + DashScope-compatible embeddings
MCP server for LanceDB-backed long-term memory with hybrid retrieval (Vector + BM25), cross-encoder rerank, multi-scope isolation, and memory lifecycle management
Codebase graph analysis for AI agents — AST + call graph + type graph + hybrid semantic search via MCP
OpenClaw smart stock plugin for A-share investing and watchlist workflows, powered by the TickFlow API for real-time monitoring, post-close review, multi-dimensional analysis, and key price level tracking with alerts
LanceDB-backed long-term memory provider for OpenCode
Local-first AI memory engine for multi-agent workflows — 17-tool MCP server. Zero config (LanceDB) or Azure (Cosmos + AI Search).
HireBase - AI-powered CV search engine with LanceDB and MCP
Ultra-simple code search tool with Jina embeddings, LanceDB, and MCP protocol support
Interactive CLI for persona-based local AI workflows
Native TypeScript AI memory system and MCP server (Zero-LLM, LanceDB)
Cognitive science-based AI memory framework — Weibull decay, triple-path retrieval, multi-backend storage
Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol, vector search, git-based knowledge graphs, cloud backup. Never lose context again.
OpenClaw plugin for multimodal RAG - semantic indexing and time-aware search for images and audio using local AI models
Local-first semantic memory plugin for OpenClaw — LanceDB + Ollama, no cloud required.
Local RAG MCP Server - Enhanced with hybrid search, memory management, and code file support (fork of shinpr/mcp-local-rag)
Local semantic code search with LanceDB indexing and DeepInfra-powered retrieval
A lightweight web-based viewer for LanceDB databases with advanced table browsing, column filtering, and SQL WHERE clause support
MCP server for searching UI components using RAG built on LanceDB and GigaChat
Embex Vector Database ORM Node.js Bindings
AI agent memory MCP server with hybrid BM25 + vector search — no Docker required
MCP plugin for semantic code search using LanceDB - gives AI coding agents deep context from your entire codebase
Claude Code skill — Direct-to-Runtime Zero-Hop Stack architecture guide
Semantic embeddings and vector search - find concepts that resonate
OpenClaw MCP router plugin for Model Context Protocol (MCP): semantic tool search and dynamic tool calling to reduce prompt bloat
Local RAG MCP Server with extended file support (including PPTX)
Reusable local-first hybrid retrieval core powered by SQLite, LanceDB, and optional OpenAI embeddings.
A local-first Model Context Protocol (MCP) server that provides semantic search capabilities for codebases
Genkit AI framework plugin for LanceDB.
LanceDB vector search engine
Agent memory plugin for OpenCode - persistent memory with semantic search using LanceDB
📚 Model Context Protocol server for AI-powered storytelling with Narrative Knowledge Graph - extract characters, locations, relationships and search your stories semantically.
search-docs server implementation
🔍 LanceDB vector storage for FlowRAG - embedded semantic search, no server required
Vidya — Hybrid memory plugin for OpenClaw agents. LanceDB vector store + Voyage AI embedding/rerank + BM25 keyword search.
LanceDB-backed long-term memory plugin for OpenClaw with auto-recall and auto-capture
MCP server that indexes Claude Code and Codex conversations for semantic search
CortexMem — Brain-Inspired Memory System for OpenClaw
Full-featured long-term memory system for Claude Code via MCP — hybrid retrieval, smart extraction, auto-capture, intelligent forgetting
Smart MCP server for Outline Wiki with AI-powered RAG Q&A, summarization, and semantic search
Native LanceDB vector database plugin for Capacitor — on-device vector search with ANN queries. Includes optional memory management layer.
MCP server for persistent agent memory — hybrid BM25 + vector search backed by LanceDB with local ONNX embeddings. No API keys, fully local.
Local-first semantic search for documents using MCP protocol. Requires Python 3.10+ and Docling (pip install docling).
OpenClaw memory plugin with markdown source-of-truth and LanceDB primary retrieval
SIE embedding function and reranker for LanceDB
MCP server for interacting with LanceDB database
A local-first knowledge base engine with vector search using LanceDB and Hugging Face embeddings
ContextEngine plugin for OpenClaw with retrieval-augmented context management and memory-aware compaction
MCP for adding semantic memory to AI coding agents
TraceAI instrumentation for LanceDB vector database
AST-aware semantic + lexical code indexer for AI agents
Persistent memory system for Claude Code via MCP — dual-write to Markdown + LanceDB with hybrid semantic search
MCP server for semantic memory and codebase search with LanceDB and Voyage AI embeddings
A high-performance local MCP server for semantic code search, summarization, and RAG.
AI-powered git code review toolkit with optional local RAG knowledge base
Simple, lightweight RAG / memory layer using LanceDB + local embeddings (Xenova/all-MiniLM-L6-v2)
Memory and task orchestration plugin for OpenCode with project/session isolation
Dig through any documentation with AI - MCP server for Claude, Cursor, and other AI assistants
AI-powered HR management MCP server — CV upload, semantic candidate search, interview scheduling and more. Works with Claude Code and any MCP-compatible client.
Universal LLM Memory Architecture (ULMA) plugin for OpenCode with project/session isolation
Admin UI components for Deno AI Toolkit workspace management
Semantic Cache is a library for caching natural-language text by semantic similarity, built on LanceDB with support for multiple embedding providers (OpenAI, Google Gemini, VoyageAI)
Privacy-first local memory plugin for Moltbot: SQLite for structured/temporal queries + local embeddings for semantic search. Zero cloud calls.
Add AI-powered chat with RAG to any app in minutes
NMP Plus MCP Server - 36 commands via HTTP API for Cursor
Persistent memory for AI coding agents — vector search over past sessions
AI memory plugin for OpenClaw — embedded LanceDB vector search with confidence scoring, contradiction detection, multi-signal ranking, and user verification loop
Core functionality for GoGlyph - Universal Shared Brain for AI Agents
A wrapper for LanceDB, providing a simple and efficient way to interact with LanceDB databases in JavaScript and TypeScript.
Vector database indexer for documentation
Local CLI tool for indexing and searching code repositories with full-text and semantic search.
Lightweight repo-local hybrid (semantic + keyword) code search over MCP. Pure npm — no Docker, no Ollama, no servers. 11 languages via tree-sitter, jina-code embeddings, LanceDB, live file watcher. One command to install.
Local-first chat memory with vector search for LLM applications. No cloud. No API keys. Your data stays on your device.
ctxloom — The Universal Code Context Engine. A local-first MCP server providing intelligent code context via hybrid Vector + AST + Graph search with Skeletonization (70-90% token reduction).
MCP server for structured project memory — semantic search, knowledge graph, auto-ingest, analytics dashboard