# @agentskit/memory
Persist conversations and add vector search to your agents — swap backends without changing agent code.
## Why
- Conversations that survive restarts — SQLite for local development, Redis for production; your agent remembers context across sessions with zero code changes
- RAG-ready vector search — store and retrieve embeddings with `fileVectorMemory` (pure JS, no native deps) or Redis vector search for scale
- Plug any backend — the `VectorStore` interface is 3 methods; bring LanceDB, Pinecone, or any custom store in minutes
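To make the "3 methods" claim concrete, here is a minimal sketch of what such an interface and a custom in-memory backend could look like. The method names, record shape, and cosine-similarity ranking are illustrative assumptions, not the package's exact signatures:

```typescript
// Hypothetical shape of a three-method VectorStore; the real
// @agentskit/memory signatures may differ.
interface VectorRecord {
  id: string
  vector: number[]
  text: string
}

interface VectorStore {
  upsert(records: VectorRecord[]): Promise<void>
  search(vector: number[], k: number): Promise<VectorRecord[]>
  delete(ids: string[]): Promise<void>
}

// Toy in-memory backend: cosine similarity over a Map.
class InMemoryVectorStore implements VectorStore {
  private records = new Map<string, VectorRecord>()

  async upsert(records: VectorRecord[]): Promise<void> {
    for (const r of records) this.records.set(r.id, r)
  }

  async search(vector: number[], k: number): Promise<VectorRecord[]> {
    const score = (a: number[], b: number[]) => {
      let dot = 0, na = 0, nb = 0
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1)
    }
    return [...this.records.values()]
      .sort((x, y) => score(y.vector, vector) - score(x.vector, vector))
      .slice(0, k)
  }

  async delete(ids: string[]): Promise<void> {
    for (const id of ids) this.records.delete(id)
  }
}
```

A custom LanceDB or Pinecone adapter would implement the same three methods and delegate each call to the remote store's client.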
## Install

```sh
npm install @agentskit/memory better-sqlite3
# For production: npm install redis
# For vectors: npm install vectra
```

## Quick example
```ts
import { createRuntime } from '@agentskit/runtime'
import { anthropic } from '@agentskit/adapters'
import { sqliteChatMemory, fileVectorMemory } from '@agentskit/memory'

const runtime = createRuntime({
  adapter: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-sonnet-4-6' }),
  memory: sqliteChatMemory({ path: './chat.db' }),
})

// Agent now remembers previous conversations across process restarts
const result = await runtime.run('What did we discuss yesterday?')
console.log(result.content)
```

## With RAG
Use a vector backend with `@agentskit/rag`'s `createRAG({ embed, store })` — `fileVectorMemory` and `redisVectorMemory` implement `VectorMemory` for chunk storage and search.
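To show what chunk storage and search amount to, here is a self-contained toy version of the retrieval step: embed each chunk, embed the query, rank by cosine similarity. The term-frequency "embedder" is a stand-in for a real embedding model, and none of these names come from the package:

```typescript
// Toy retrieval flow illustrating the embed-store-search cycle a
// RAG setup performs; the embedder is a stand-in, not a real model.
type Chunk = { id: string; text: string; vector: number[] }

// Stand-in embedder: term frequency over a tiny fixed vocabulary.
const vocab = ['redis', 'sqlite', 'vector', 'memory', 'agent']
const embed = (text: string): number[] =>
  vocab.map(w => text.toLowerCase().split(/\W+/).filter(t => t === w).length)

const cosine = (a: number[], b: number[]): number => {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na * nb) || 1)
}

// Chunk storage: each stored chunk carries its embedding.
const chunks: Chunk[] = [
  'Redis vector search scales to production workloads',
  'SQLite keeps chat memory on local disk',
].map((text, i) => ({ id: String(i), text, vector: embed(text) }))

// Search: embed the query, rank stored chunks by similarity.
function retrieve(query: string, k = 1): Chunk[] {
  const q = embed(query)
  return [...chunks]
    .sort((a, b) => cosine(b.vector, q) - cosine(a.vector, q))
    .slice(0, k)
}
```

In a real setup the embedder comes from `@agentskit/adapters` and the chunk array is replaced by a persistent `VectorMemory` backend; only the ranking idea stays the same.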
## Next steps
- Swap `sqliteChatMemory` for Redis or in-memory variants from the same package for different deployment targets
- Pair embedders from `@agentskit/adapters` with RAG — see `@agentskit/rag`
## Ecosystem
| Package | Role |
|---|---|
| `@agentskit/core` | `Memory`, `VectorMemory` types |
| `@agentskit/rag` | Chunking + retrieval on top of vector memory |
| `@agentskit/runtime` | `memory` / `retriever` options |
| `@agentskit/adapters` | Embeddings for RAG |