Package Exports
- @gamaze/hicortex
- @gamaze/hicortex/dist/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@gamaze/hicortex) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
@gamaze/hicortex — Long-term Memory for OpenClaw Agents
Your agents remember past decisions, avoid repeated mistakes, and get smarter every day.
Hicortex captures session activity, distills knowledge nightly, scores importance, generates actionable lessons via reflection, and automatically injects them into agent context. No manual intervention needed after install.
Requirements
- OpenClaw gateway (Node.js 18+)
- LLM provider configured in OpenClaw (auto-detected, 20+ providers supported)
- ~500MB disk for database + embedding model
Install
```
openclaw plugins install @gamaze/hicortex
```
Configure
No configuration needed for most users. The plugin auto-detects your LLM provider from OpenClaw settings on first startup.
Optional config (add to plugin entry in ~/.openclaw/openclaw.json):
| Field | Default | Description |
|---|---|---|
| `licenseKey` | (none) | License key. Free tier (250 memories) without a key. |
| `llmBaseUrl` | (auto) | Override LLM base URL (auto-detected from OC config) |
| `llmApiKey` | (auto) | Override LLM API key (auto-detected from OC auth) |
| `llmModel` | (auto) | Override model for scoring and distillation |
| `reflectModel` | (auto) | Override model for nightly reflection |
| `consolidateHour` | 2 | Hour (0-23, local time) for nightly consolidation |
| `dbPath` | (auto) | Custom database path |
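For reference, an optional config entry might look like the following. This is a sketch: the exact shape of a plugin entry in `~/.openclaw/openclaw.json` depends on your OpenClaw version, and every value shown here is a placeholder, not a recommendation.

```json
{
  "plugins": {
    "@gamaze/hicortex": {
      "licenseKey": "hcx_xxxxxxxx",
      "llmBaseUrl": "http://localhost:11434/v1",
      "consolidateHour": 3
    }
  }
}
```

Omitted fields keep their auto-detected defaults.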
Restart the gateway after config changes:
```
openclaw gateway restart
```
What Happens Automatically
| When | What | How |
|---|---|---|
| Agent start | Recent lessons injected into context | before_agent_start hook |
| Agent end | Conversation captured and distilled | agent_end hook + LLM |
| Nightly | Score importance, reflect, link, decay | In-process consolidation pipeline |
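The lifecycle above can be sketched in TypeScript. Note that all interface and function names here are invented for illustration; they are not the real OpenClaw plugin API.

```typescript
// Illustrative sketch only: MemoryStore, beforeAgentStart, and agentEnd are
// invented names, NOT the real OpenClaw plugin API.
interface Lesson { text: string; score: number }

interface MemoryStore {
  topLessons(n: number): Lesson[];
  capture(transcript: string): void;
}

// before_agent_start: prepend recent lessons to the agent's context.
function beforeAgentStart(store: MemoryStore, context: string[]): string[] {
  const lessons = store.topLessons(5).map((l) => `Lesson: ${l.text}`);
  return [...lessons, ...context];
}

// agent_end: hand the finished conversation to the store for distillation.
function agentEnd(store: MemoryStore, transcript: string): void {
  store.capture(transcript); // LLM distillation would happen downstream
}
```

The point is the data flow, not the API surface: lessons flow in at agent start, raw conversation flows out at agent end, and the nightly pipeline works on what was captured.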
Agent Tools
The plugin registers these tools for agents to use:
- hicortex_search — Semantic search across all stored knowledge
- hicortex_context — Get recent decisions and project state
- hicortex_ingest — Store a piece of knowledge directly
- hicortex_lessons — Get actionable lessons from reflection
Skills: use /learn to save explicit learnings.
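As a purely hypothetical illustration of how an agent might invoke one of these tools (the actual call format and parameter names depend on your agent framework and are not specified here):

```json
{
  "tool": "hicortex_search",
  "arguments": { "query": "why did we pick sqlite-vec?", "limit": 5 }
}
```

The `query` and `limit` parameters are assumptions for this example only.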
Architecture
```
OpenClaw Gateway (Node.js)
└── @gamaze/hicortex plugin (TypeScript, in-process)
    ├── before_agent_start → inject lessons into agent context
    ├── agent_end → capture conversation, distill via LLM
    ├── session_end → check if consolidation overdue
    ├── registerTool → hicortex_search, hicortex_context, hicortex_ingest, hicortex_lessons
    └── registerService → DB init, LLM auto-config, nightly consolidation
        ├── SQLite + sqlite-vec + FTS5 (single file, in-process)
        ├── bge-small-en-v1.5 embeddings (ONNX, local CPU)
        └── BM25 + vector search with RRF fusion
```
No sidecar, no HTTP server, no separate process. Everything runs inside the gateway.
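The hybrid retrieval step (BM25 + vector search combined with RRF) can be sketched in a few lines. Reciprocal Rank Fusion scores each document by summing 1/(k + rank) across the ranked lists it appears in; k = 60 is the conventional default. This is a generic sketch of the technique, not the plugin's actual implementation.

```typescript
// Reciprocal Rank Fusion: merge ranked result lists (e.g. one from BM25,
// one from vector search) into a single ranking. Each document scores
// 1/(k + rank) per list it appears in; higher total score ranks first.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      const rank = i + 1; // ranks are 1-based
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A document that ranks moderately well in both lists can beat one that tops a single list, which is why RRF is a popular fusion choice for hybrid search.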
Pricing
| Tier | Price | Memories | Features |
|---|---|---|---|
| Free | $0 | 250 | Full features: search, reflection, lessons, linking |
| Pro | $9/month | Unlimited | Everything in Free, unlimited |
| Lifetime | $149 | Unlimited | Pro forever |
| Team | $29/month | Unlimited | Multi-agent shared memory |
Get a license key at hicortex.gamaze.com.
Uninstall
```
openclaw plugins uninstall hicortex
```
Your memory database is preserved by default. To remove it, delete ~/.openclaw/data/hicortex.db.
Development
```
# Local dev install
openclaw plugins install -l ./packages/openclaw-plugin

# Build
cd packages/openclaw-plugin
npm install
npm run build

# Test
npm test
```
Troubleshooting
Tools not visible to agent: The plugin auto-adds its tools to tools.allow on startup. If the tools still don't appear, check that the gateway was restarted after install.
LLM auto-config failed: Check the gateway log for [hicortex] WARNING. You may need to add llmBaseUrl to the plugin config manually for non-standard providers.
No lessons generated: Reflection requires an LLM. Check that your LLM provider is accessible and has sufficient quota.
First startup slow: The embedding model (~130MB) downloads on first run. Allow up to 2 minutes.