# ThoughtLayer

Persistent, searchable memory for AI agents. Local-first. Works without API keys. Five lines of code to integrate.

```ts
import { ThoughtLayer } from 'thoughtlayer';

const memory = ThoughtLayer.init('./my-project');
await memory.add({ domain: 'user', title: 'Preference', content: 'Prefers dark mode' });

const results = await memory.query('what does the user prefer?');
console.log(results[0].entry.content); // "Prefers dark mode"
```

## Why ThoughtLayer?
Context windows end. Sessions expire. The knowledge your agent accumulated over 50 turns vanishes. Most "memory" solutions solve this by shipping your data to someone else's cloud and charging you per query.
ThoughtLayer takes a different approach:
- Local-first: SQLite + FTS5. Your data stays on your machine. No external database, no vendor lock-in.
- Works without API keys: The keyword engine alone hits 92.5% Recall@1. Embeddings are optional, not required.
- Real retrieval: Five signals combined via Reciprocal Rank Fusion: vector similarity, keyword match, term overlap, freshness decay, and importance scoring.
- Constant cost: Finding the 5 most relevant entries out of 10,000 costs the same as finding them out of 100. No per-query LLM calls.
- BYOLLM: OpenAI, Anthropic, Ollama, OpenRouter. Use whatever you already have.
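The Reciprocal Rank Fusion mentioned above merges the per-signal rankings by rank position, so keyword scores and cosine similarities never need to share a scale. A minimal sketch of the standard formula (illustrative only, not ThoughtLayer's internal code; `fuseRankings` is a made-up name):

```ts
// Reciprocal Rank Fusion: each ranked list contributes 1 / (K + rank) for
// every entry it contains; entries ranked highly by several signals win.
const K = 60; // damping constant commonly used with RRF

function fuseRankings(rankings: string[][]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      const contribution = 1 / (K + index + 1); // ranks are 1-based
      scores.set(id, (scores.get(id) ?? 0) + contribution);
    });
  }
  return scores;
}

// "db-choice" is #1 for keywords and #2 for vectors, so it outranks
// entries that appear in only one list.
const fused = fuseRankings([
  ['db-choice', 'auth-notes'],   // keyword ranking
  ['api-design', 'db-choice'],   // vector ranking
]);
```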
## Install

```bash
npm install thoughtlayer
```

## Quick Start
### CLI

```bash
# Initialise
thoughtlayer init

# Ingest your docs
thoughtlayer ingest ./docs

# Query (no API keys needed)
thoughtlayer query "what database are we using"

# LLM-powered knowledge extraction
export ANTHROPIC_API_KEY=sk-ant-...
thoughtlayer curate "We decided to use PostgreSQL for pgvector support."
```

### TypeScript SDK
```ts
import { ThoughtLayer } from 'thoughtlayer';

// Initialise (or load existing)
const memory = ThoughtLayer.init('./my-project', {
  embedding: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
});

// Store knowledge
await memory.add({
  domain: 'architecture',
  title: 'Database Choice',
  content: 'Using PostgreSQL with pgvector for embeddings.',
  importance: 0.8,
  tags: ['database'],
});

// Retrieve (vector + keyword + freshness + importance)
const results = await memory.query('what database do we use?');
results.forEach(r => console.log(`${r.entry.title}: ${r.score.toFixed(3)}`));

// LLM-powered extraction from raw text
const { entries } = await memory.curate(
  'We switched from REST to GraphQL for the mobile API.'
);
```

## Features
### Retrieval Pipeline

Most tools solve the memory problem by dumping everything into context. At 50 entries, that works. At 500, you're burning tokens. At 5,000, it breaks entirely. ThoughtLayer retrieves only what the query actually needs:

```
Query → Keyword Search (FTS5/BM25)
      → Vector Search (cosine similarity, optional)
      → Query Term Overlap
      → Metadata Filter (domain, tags, importance)
              ↓
   Reciprocal Rank Fusion
              ↓
   Freshness Decay + Importance Weighting
              ↓
        Top-K Results
```

### Ingest-Time Enrichment
Keywords are extracted automatically at write time: proper nouns, role patterns, action verbs, and synonym bridges. You never need to tag anything manually.
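For intuition, a toy version of that extraction might look like this (the real enrichment pass is richer; `SYNONYMS` here is a stand-in for the built-in synonym bridges):

```ts
// Naive ingest-time keyword extraction: capitalised tokens as proper-noun
// candidates, plus a tiny synonym-bridge table.
const SYNONYMS: Record<string, string[]> = {
  db: ['database'],
  auth: ['authentication'],
};

function extractKeywords(text: string): string[] {
  const keywords = new Set<string>();
  // Proper-noun candidates: capitalised words.
  for (const m of text.matchAll(/\b[A-Z][a-zA-Z]+\b/g)) {
    keywords.add(m[0].toLowerCase());
  }
  // Synonym bridges: expand abbreviations so queries match either form.
  for (const word of text.toLowerCase().split(/\W+/)) {
    for (const syn of SYNONYMS[word] ?? []) keywords.add(syn);
  }
  return [...keywords];
}
```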
### Auto-Chunking
Long documents are split into overlapping chunks with parent-child linking. Each chunk gets its own embedding, so retrieval is precise even when the answer sits in paragraph 47 of a 200-paragraph document.
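The scheme can be sketched as fixed-size windows that share an overlap region, each linked back to its parent document (sizes and field names below are illustrative, not the SDK's actual types):

```ts
// A chunk keeps a pointer to its parent document and its position in it.
interface Chunk {
  parentId: string;
  index: number;
  text: string;
}

function chunkDocument(
  parentId: string,
  text: string,
  size = 800,    // characters per chunk
  overlap = 200, // characters shared between neighbours
): Chunk[] {
  const chunks: Chunk[] = [];
  const step = size - overlap;
  for (let start = 0, i = 0; start < text.length; start += step, i++) {
    chunks.push({ parentId, index: i, text: text.slice(start, start + size) });
    if (start + size >= text.length) break; // final chunk reached
  }
  return chunks;
}
```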
### Query Intent Detection
The query "who handles authentication?" is a different kind of question from "what happened yesterday?" ThoughtLayer classifies intent (who, when, what, how, latest) and adjusts domain and freshness boosts accordingly. No LLM calls required.
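A few regexes are enough to show the idea (a simplified stand-in for the actual classifier):

```ts
type Intent = 'who' | 'when' | 'what' | 'how' | 'latest' | 'general';

// Rule-based intent classification: cheap pattern checks, no LLM call.
function classifyIntent(query: string): Intent {
  const q = query.toLowerCase();
  if (/\b(latest|most recent|newest)\b/.test(q)) return 'latest';
  if (/^who\b|\bwho\s+(is|handles|owns)\b/.test(q)) return 'who';
  if (/^when\b|\bwhat happened\b/.test(q)) return 'when';
  if (/^how\b/.test(q)) return 'how';
  if (/^what\b/.test(q)) return 'what';
  return 'general';
}
```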
### Temporal Awareness
"What changed last week" and "decisions in March" are parsed into time ranges and matched against entry timestamps. Time-aware retrieval without a separate temporal index.
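Conceptually, the parser turns a relative phrase into a concrete `[start, end]` window that entry timestamps are checked against. A minimal sketch covering two phrases (a real parser would also handle month names, quarters, "N days ago", and so on):

```ts
interface TimeRange {
  start: Date;
  end: Date;
}

// Map a relative phrase in the query to a concrete time range.
function parseTimePhrase(query: string, now = new Date()): TimeRange | null {
  const DAY = 24 * 60 * 60 * 1000;
  if (/last week/i.test(query)) {
    return { start: new Date(now.getTime() - 7 * DAY), end: now };
  }
  if (/yesterday/i.test(query)) {
    return { start: new Date(now.getTime() - DAY), end: now };
  }
  return null; // no temporal constraint detected
}
```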
### Entity Resolution
"John" finds "John Smith, backend engineer". Partial names, aliases, and fuzzy matching are built in.
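One simple way to get this behaviour is token-prefix matching, sketched below (the library's matcher also handles aliases and fuzzier spellings):

```ts
// A query entity matches a stored entity if every query token appears
// as a prefix of some token in the stored name.
function matchesEntity(queryName: string, storedName: string): boolean {
  const storedTokens = storedName.toLowerCase().split(/\s+/);
  return queryName
    .toLowerCase()
    .split(/\s+/)
    .every(qt => storedTokens.some(st => st.startsWith(qt)));
}
```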
### Fact Versioning
Facts change. When they do, ThoughtLayer detects the contradiction, creates a versioned entry, and links old to new with a supersedes relation. You always get the latest version first, with full history available.
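The data shape makes this easy to reason about: each new version carries a `supersedes` pointer to the entry it replaces, so full history is a walk along that chain. A sketch (field names are illustrative, not the stored schema):

```ts
// A versioned fact points back at the entry it replaces.
interface FactVersion {
  id: string;
  content: string;
  supersedes?: string; // id of the superseded entry, if any
}

// Walk the supersedes chain backwards: newest version first.
function history(
  latest: FactVersion,
  byId: Map<string, FactVersion>,
): FactVersion[] {
  const chain: FactVersion[] = [];
  let cur: FactVersion | undefined = latest;
  while (cur) {
    chain.push(cur);
    cur = cur.supersedes ? byId.get(cur.supersedes) : undefined;
  }
  return chain;
}
```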
## Framework Integrations
### LangChain
```ts
import { ThoughtLayerMemory } from 'thoughtlayer';

const memory = new ThoughtLayerMemory({
  thoughtlayer: ThoughtLayer.load('./my-project'),
  topK: 5,
});

// Drop-in replacement for ConversationBufferMemory
const vars = await memory.loadMemoryVariables({ input: 'tell me about auth' });
await memory.saveContext(
  { input: 'How does auth work?' },
  { output: 'We use JWT with refresh tokens.' }
);
```

### Vercel AI SDK
```ts
import { ThoughtLayerProvider } from 'thoughtlayer';
import { streamText } from 'ai';

const memory = new ThoughtLayerProvider({
  thoughtlayer: ThoughtLayer.load('./my-project'),
});

const context = await memory.getContext(userMessage);
const result = await streamText({
  model,
  system: `You have memory:\n${context}`,
  messages,
});
await memory.saveTurn(userMessage, assistantResponse);
```

### OpenAI Agents
```ts
import { createThoughtLayerTools } from 'thoughtlayer';

const tools = createThoughtLayerTools(ThoughtLayer.load('./my-project'));

// Three tools: remember, recall, update
const agent = new Agent({
  name: 'my-agent',
  tools: tools.definitions,
});

// Execute tool calls
const result = await tools.execute('recall', { query: 'user preferences' });
```

### CrewAI
```ts
import { ThoughtLayerCrewMemory } from 'thoughtlayer';

const crew = new ThoughtLayerCrewMemory({
  thoughtlayer: ThoughtLayer.load('./my-project'),
  crewId: 'research-crew',
});

// Agent-scoped memory
const agentMem = crew.forAgent('researcher');
await agentMem.save('Found paper on transformers', { importance: 0.8 });

// Shared crew memory
await crew.saveShared('Project goal: market analysis', { importance: 0.9 });
```

### MCP (Claude Desktop, Cursor, Windsurf)
Add to `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "thoughtlayer": {
      "command": "npx",
      "args": ["-y", "thoughtlayer", "mcp"],
      "env": {
        "THOUGHTLAYER_PROJECT_ROOT": "/path/to/project"
      }
    }
  }
}
```

Exposes six tools: `thoughtlayer_query`, `thoughtlayer_add`, `thoughtlayer_curate`, `thoughtlayer_search`, `thoughtlayer_list`, `thoughtlayer_health`.
## How It Compares
| Feature | ThoughtLayer | Mem0 | Zep | Letta (MemGPT) |
|---|---|---|---|---|
| Local-first | ✅ SQLite | ❌ Cloud | ❌ Cloud | ✅ Local |
| Works without API keys | ✅ Keyword search | ❌ Needs API | ❌ Needs API | ❌ Needs LLM |
| Framework integrations | 5 (LC, Vercel, OAI, CrewAI, MCP) | 3 | 1 | 1 |
| Retrieval pipeline | Vector + keyword + freshness + importance | Vector only | Vector + temporal | LLM-managed |
| Auto-chunking | ✅ | ❌ | ✅ | ✅ |
| Fact versioning | ✅ | ❌ | ❌ | ✅ |
| Query intent detection | ✅ No LLM | ❌ | ❌ | ❌ |
| Entity resolution | ✅ Fuzzy | ❌ | ❌ | ❌ |
| npm install to working | ~30 seconds | Minutes + signup | Minutes + signup | Minutes + config |
| Pricing | Free (MIT) | Freemium | Commercial | Open source |
## Performance
Benchmarked on a corpus of 200 entries across 5 domains, 40 test queries:
| Metric | Keyword Only | With Embeddings |
|---|---|---|
| Recall@1 | 92.5% | 96.5% |
| MRR (Mean Reciprocal Rank) | 92.5% | 96.3% |
| p50 latency | <5ms | ~230ms |
| p99 latency | <10ms | ~400ms |
## CLI Reference

| Command | Description |
|---|---|
| `thoughtlayer init` | Initialise a new project |
| `thoughtlayer ingest <dir>` | Ingest files (dedup, change detection) |
| `thoughtlayer ingest <dir> --watch` | Watch for changes |
| `thoughtlayer query <query>` | Hybrid search |
| `thoughtlayer search <term>` | Keyword-only search |
| `thoughtlayer add <content>` | Add a manual entry |
| `thoughtlayer curate <text>` | LLM-powered extraction |
| `thoughtlayer list` | List entries |
| `thoughtlayer status` | Ingestion status |
| `thoughtlayer health` | Health metrics |
## Configuration

### Embedding Providers
```bash
# Local (free, private)
ollama pull nomic-embed-text
thoughtlayer init --embedding-provider ollama

# Cloud ($0.02/1M tokens)
export OPENAI_API_KEY=sk-...
thoughtlayer init --embedding-provider openai
```

### LLM Providers (for curate)
| Provider | Config | Notes |
|---|---|---|
| Anthropic | `provider: "anthropic"` | Best quality |
| OpenAI | `provider: "openai"` | Cheapest |
| OpenRouter | `provider: "openrouter"` | Any model |
## Storage
```
your-project/
└── .thoughtlayer/
    ├── config.json
    ├── knowledge/        # Markdown files (human-readable, git-friendly)
    └── index/
        └── metadata.db   # SQLite (FTS5 + embeddings)
```

## Documentation
## Contributing
```bash
git clone https://github.com/prasants/thoughtlayer.git
cd thoughtlayer
npm install --include=dev
npm run build
npx vitest run
```

PRs welcome. Please include tests.
## License
MIT. See LICENSE.