# KongBrain

A graph-backed cognitive engine for OpenClaw.
Replace the default sliding-window context with a persistent memory graph. Vector-embedded, self-scoring, and wired to learn across sessions. KongBrain extracts skills from what worked, traces causal chains through what broke, reflects on its own failures, and earns an identity through real experience. Every session compounds on the last.
Your assistant stops forgetting. Then it starts getting better.
[Quick Start](#quick-start) | [Architecture](#architecture) | [How It Works](#how-it-works) | [Tools](#tools) | [Development](#development)
## What Changes
| | Lobster Brain (default) | Ape Brain (KongBrain) |
|---|---|---|
| Memory | Sliding window. Old messages fall off a cliff. | Graph-persistent. Every turn, concept, skill, and causal chain stored with vector embeddings. |
| Recall | Whatever fits in the context window right now. | Cosine similarity + graph expansion + learned attention scoring across your entire history. |
| Adaptation | Same retrieval budget every turn, regardless of intent. | 10 intent categories. Simple question? Minimal retrieval. Complex debugging? Full graph search + elevated thinking. |
| Learning | None. Every session starts from zero. | Skills extracted from successful workflows, causal chains graduated into reusable procedures, corrections remembered permanently. |
| Self-awareness | Thermostat-level. | Periodic cognitive checks grade its own retrieval quality, detect contradictions, suppress noise, and extract your preferences. Eventually graduates a soul document. |
| Compaction | LLM-summarizes your conversation mid-flow (disruptive). | Graph retrieval IS the compaction. No interruptions, no lossy summaries. |
## Quick Start
From zero to ape brain in under 5 minutes.
### 1. Install OpenClaw (if you haven't already)
```bash
npm install -g openclaw
```

### 2. Start SurrealDB
Pick one:
```bash
# Native install
curl -sSf https://install.surrealdb.com | sh
export PATH="$HOME/.surrealdb:$PATH"
surreal start --user root --pass root --bind 0.0.0.0:8042 surrealkv:~/.kongbrain/surreal.db
```

```bash
# Docker
docker run -d --name surrealdb -p 8042:8000 \
  -v ~/.kongbrain/surreal-data:/data \
  surrealdb/surrealdb:latest start \
  --user root --pass root surrealkv:/data/surreal.db
```

### 3. Install KongBrain
```bash
openclaw plugins install kongbrain
```

### 4. Activate
Add to your OpenClaw config (`~/.openclaw/openclaw.json`):

```json
{
  "plugins": {
    "allow": ["kongbrain"],
    "slots": {
      "contextEngine": "kongbrain"
    }
  }
}
```

### 5. Talk to your ape

```bash
openclaw tui
```

That's it. KongBrain uses whatever LLM provider and model you already have configured in OpenClaw (Anthropic, OpenAI, Google, Ollama, whatever). No separate API keys needed for the brain itself.
The BGE-M3 embedding model (~420MB) downloads automatically on first startup. All database tables and indexes are created automatically on first run. No manual setup required.
### Configuration Options
All options have sensible defaults. Override via plugin config or environment variables:
| Option | Env Var | Default |
|---|---|---|
| `surreal.url` | `SURREAL_URL` | `ws://localhost:8042/rpc` |
| `surreal.user` | `SURREAL_USER` | `root` |
| `surreal.pass` | `SURREAL_PASS` | `root` |
| `surreal.ns` | `SURREAL_NS` | `kong` |
| `surreal.db` | `SURREAL_DB` | `memory` |
| `embedding.modelPath` | `KONGBRAIN_EMBEDDING_MODEL` | Auto-downloaded BGE-M3 Q4_K_M |
| `embedding.dimensions` | - | `1024` |
Full config example:
```json
{
  "plugins": {
    "allow": ["kongbrain"],
    "slots": {
      "contextEngine": "kongbrain"
    },
    "entries": {
      "kongbrain": {
        "config": {
          "surreal": {
            "url": "ws://localhost:8042/rpc",
            "user": "root",
            "pass": "root",
            "ns": "kong",
            "db": "memory"
          }
        }
      }
    }
  }
}
```

## Architecture
### The IKONG Pillars
KongBrain's cognitive architecture follows five functional pillars:
| Pillar | Role | What it does |
|---|---|---|
| Intelligence | Adaptive reasoning | Intent classification, complexity estimation, thinking depth, orchestrator preflight |
| Knowledge | Persistent memory | Memory graph, concepts, skills, reflections, identity chunks, core memory tiers |
| Operation | Execution | Tool orchestration, skill procedures, causal chain tracking, artifact management |
| Network | Graph traversal | Cross-pillar edge following, neighbor expansion, causal path walking |
| Graph | Persistence | SurrealDB storage, BGE-M3 vector search, HNSW indexes, embedding pipeline |
A 6th pillar, Persona, is unlocked at soul graduation: "You have a Soul, an identity grounded in real experience. Be unique, be genuine, be yourself."
### Structural Pillars
The graph entity model in SurrealDB:
| Pillar | Table | What it anchors |
|---|---|---|
| 1. Agent | `agent` | Who is operating (name, model) |
| 2. Project | `project` | What we're working on (status, tags) |
| 3. Task | `task` | Individual sessions as units of work |
| 4. Artifact | `artifact` | Files and outputs tracked across sessions |
| 5. Concept | `concept` | Semantic knowledge nodes extracted from sessions |
On startup, the agent bootstraps the full chain: Agent → `owns` → Project, Agent → `performed` → Task, Task → `task_part_of` → Project, Session → `session_task` → Task. Graph expansion traverses these edges during retrieval.
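For intuition, here is a minimal sketch of that bootstrap as raw SurrealQL `RELATE` statements sent through the SurrealDB JavaScript SDK; the function shape and record ids are illustrative, not KongBrain's actual internals:

```ts
import Surreal from "surrealdb.js";

// Hypothetical startup bootstrap: wire the structural chain as graph edges.
// Record ids like "agent:kong" follow SurrealDB's table:id convention.
async function bootstrapChain(db: Surreal, ids: {
  agent: string; project: string; task: string; session: string;
}): Promise<void> {
  await db.query(`RELATE ${ids.agent}->owns->${ids.project}`);         // Agent owns Project
  await db.query(`RELATE ${ids.agent}->performed->${ids.task}`);       // Agent performed Task
  await db.query(`RELATE ${ids.task}->task_part_of->${ids.project}`);  // Task part of Project
  await db.query(`RELATE ${ids.session}->session_task->${ids.task}`);  // Session maps to Task
}
```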
### The Knowledge Graph
SurrealDB with HNSW vector indexes (1024-dim cosine). Everything is embedded and queryable; a DDL sketch follows the table.
| Table | What it stores |
|---|---|
| `turn` | Every conversation message (role, text, embedding, token count, model, usage) |
| `memory` | Compacted episodic knowledge (importance 0-10, confidence, access tracking) |
| `skill` | Learned procedures with steps, preconditions, success/failure counts |
| `reflection` | Metacognitive lessons (efficiency, failure patterns, approach strategy) |
| `causal_chain` | Cause-effect patterns (trigger, outcome, chain type, success, confidence) |
| `identity_chunk` | Agent self-knowledge fragments (source, importance, embedding) |
| `monologue` | Thinking traces preserved across sessions |
| `core_memory` | Tier 0 (always loaded) + Tier 1 (session-pinned) directives |
| `soul` | Emergent identity document, earned through graduation |
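For a feel of the storage layer, this is roughly what an HNSW-indexed table and a KNN lookup look like in SurrealQL; the index name, its parameters, and the `<|K,EF|>` arguments are illustrative, not KongBrain's actual schema:

```ts
// Assumes `db` is a connected SurrealDB JS SDK client and `queryEmbedding`
// is a 1024-dim BGE-M3 vector. SurrealDB's <|K,EF|> operator runs an
// approximate nearest-neighbour search against the HNSW index.
await db.query(`
  DEFINE INDEX turn_embedding_hnsw ON turn FIELDS embedding
    HNSW DIMENSION 1024 DIST COSINE;
`);

const [rows] = await db.query(
  "SELECT id, text FROM turn WHERE embedding <|8,40|> $query",
  { query: queryEmbedding },
);
```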
### Adaptive Reasoning: per-turn intent classification and budget allocation
Every turn gets classified by intent and assigned an adaptive config:
| Intent | Thinking | Tool Limit | Token Budget | Retrieval Share |
|---|---|---|---|---|
| `simple-question` | low | 3 | 4K | 10% |
| `code-read` | medium | 5 | 6K | 15% |
| `code-write` | high | 8 | 8K | 20% |
| `code-debug` | high | 10 | 8K | 20% |
| `deep-explore` | medium | 15 | 6K | 15% |
| `reference-prior` | medium | 5 | 10K | 25% |
| `meta-session` | low | 2 | 3K | 7% (skip retrieval) |
| `multi-step` | high | 12 | 8K | 20% |
| `continuation` | low | 8 | 4K | skip retrieval |
**Fast path:** Short inputs (<20 chars, no `?`) skip classification entirely.

**Confidence gate:** Below 0.40 confidence, the classifier falls back to a conservative config.
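A minimal sketch of how zero-shot cosine classification, the fast path, and the confidence gate can fit together; every name below is invented for illustration, and the fast-path intent in particular is an assumption:

```ts
type Intent =
  | "simple-question" | "code-read" | "code-write" | "code-debug"
  | "deep-explore" | "reference-prior" | "meta-session"
  | "multi-step" | "continuation" | "unknown";

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function classify(
  input: string,
  embed: (s: string) => number[],     // BGE-M3 in KongBrain's case
  prototypes: Map<Intent, number[]>,  // one prototype embedding per category
): { intent: Intent; confidence: number } {
  // Fast path: short inputs with no "?" skip classification entirely.
  if (input.length < 20 && !input.includes("?")) {
    return { intent: "continuation", confidence: 1 }; // assumed fast-path intent
  }
  const v = embed(input);
  let best: Intent = "unknown";
  let bestSim = -1;
  for (const [intent, proto] of prototypes) {
    const sim = cosine(v, proto);
    if (sim > bestSim) { bestSim = sim; best = intent; }
  }
  // Confidence gate: below 0.40, fall back to the conservative config.
  return bestSim >= 0.4
    ? { intent: best, confidence: bestSim }
    : { intent: "unknown", confidence: bestSim };
}
```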
### Context Injection Pipeline
- Embed user input via BGE-M3 (or hit the prefetch cache at a 0.85 cosine threshold)
- Vector search across 6 tables (`turn`, `identity_chunk`, `concept`, `memory`, `artifact`, `monologue`)
- Graph expand: fetch neighbors via structural + semantic edges, compute cosine similarity
- Score all candidates with WMR (Working Memory Ranker); a scoring sketch follows this list:
  `score = W * [similarity, recency, importance, access, neighbor_bonus, utility, reflection_boost]`
- Budget trim: inject Tier 0/1 core memory first (15% of context), then ranked results up to the 21% retrieval budget
- Stage a retrieval snapshot for post-hoc quality evaluation
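A hedged sketch of the WMR dot product; the weight values below are invented placeholders, not KongBrain's shipped defaults (which ACAN later replaces):

```ts
// The seven scoring signals named in the formula above, per candidate.
interface Signals {
  similarity: number;      // cosine similarity to the query embedding
  recency: number;         // decays with age
  importance: number;      // stored importance, normalized
  access: number;          // how often this item was retrieved before
  neighborBonus: number;   // lift from graph-expansion neighbors
  utility: number;         // measured usefulness in past injections
  reflectionBoost: number; // lift from related reflections
}

// Placeholder weights -- NOT the real defaults.
const W: Record<keyof Signals, number> = {
  similarity: 0.35, recency: 0.15, importance: 0.20, access: 0.05,
  neighborBonus: 0.10, utility: 0.10, reflectionBoost: 0.05,
};

function wmrScore(s: Signals): number {
  return (Object.keys(W) as (keyof Signals)[])
    .reduce((acc, k) => acc + W[k] * s[k], 0);
}
// Candidates are sorted by wmrScore, then trimmed to the retrieval budget.
```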
### ACAN: learned cross-attention scorer
A ~130K-parameter cross-attention network that replaces the fixed WMR weights once enough data accumulates.
- Activation: 5,000+ labeled retrieval outcomes
- Training: Pure TypeScript SGD with manual backprop, 80 epochs (toy sketch below)
- Staleness: Retrains when data grows 50%+ or weights age > 7 days
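To make that training bullet concrete, here is a toy single-layer version of the loop; ACAN itself is a ~130K-parameter cross-attention network, so everything below is simplified for illustration:

```ts
// One labeled retrieval outcome: signal vector + whether the item was used.
type Example = { signals: number[]; label: 0 | 1 };

function trainSGD(data: Example[], dim: number, epochs = 80, lr = 0.05): number[] {
  const w: number[] = new Array(dim).fill(0);
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const { signals, label } of data) {
      const z = signals.reduce((s, x, i) => s + w[i] * x, 0); // forward pass
      const p = 1 / (1 + Math.exp(-z));                       // sigmoid
      const grad = p - label;                                 // dLoss/dz for logistic loss
      for (let i = 0; i < dim; i++) {
        w[i] -= lr * grad * signals[i];                       // manual backprop + SGD step
      }
    }
  }
  return w;
}
```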
### Soul & Graduation: earned identity, not assigned
The agent earns an identity document through accumulated experience. Graduation requires all 7 thresholds met AND a quality score >= 0.6:
| Signal | Threshold |
|---|---|
| Sessions completed | 15 |
| Reflections stored | 10 |
| Causal chains traced | 5 |
| Concepts extracted | 30 |
| Memory compactions | 5 |
| Monologue traces | 5 |
| Time span | 3 days |
Quality scoring from 4 real performance signals: retrieval utilization (30%), skill success rate (25%), critical reflection rate (25%), tool failure rate (20%).
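Read as a formula, that works out to something like the sketch below; inverting the tool failure rate so that fewer failures score higher is my assumption, while the weights come straight from the list above:

```ts
function qualityScore(m: {
  retrievalUtilization: number;   // 30%
  skillSuccessRate: number;       // 25%
  criticalReflectionRate: number; // 25%
  toolFailureRate: number;        // 20%, inverted below (assumption)
}): number {
  return 0.30 * m.retrievalUtilization
       + 0.25 * m.skillSuccessRate
       + 0.25 * m.criticalReflectionRate
       + 0.20 * (1 - m.toolFailureRate);
}
// Graduation: all 7 experience thresholds met AND qualityScore(...) >= 0.6.
```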
Maturity stages: nascent (0-3/7) → developing (4/7) → emerging (5/7) → maturing (6/7) → ready (7/7 + quality gate). The agent and user are notified at each stage transition.
Soul evolution: Every 10 sessions after graduation, the soul is re-evaluated against new experience and revised if the agent has meaningfully changed.
Soul document structure: Working style, self-observations, earned values (grounded in specific evidence), revision history. Seeded as Tier 0 core memory, loaded every single turn.
### Reflection System: metacognitive self-correction
Triggers at session end when metrics indicate problems:
| Condition | Threshold |
|---|---|
| Retrieval utilization | < 20% average |
| Tool failure rate | > 20% |
| Steering candidates | any detected |
| Context waste | > 0.5% of context window |
The LLM generates a 2-4 sentence reflection: root cause, error pattern, what to do differently. Stored with importance 7.0, deduped at 0.85 cosine similarity.
## How It Works
### Every Turn
```
User Input
    |
    v
Preflight ────────── Intent classification (25ms, zero-shot BGE-M3 cosine)
    |                  10 categories: simple-question, code-read, code-write,
    |                  code-debug, deep-explore, reference-prior, meta-session,
    |                  multi-step, continuation, unknown
    v
Prefetch ─────────── Predictive background vector searches (LRU cache, 5-min TTL)
    |
    v
Context Injection ── Vector search -> graph expand -> 6-signal scoring -> budget trim
    |                  Searches: turns, concepts, memories, artifacts, identity, monologues
    |                  Scores: similarity, recency, importance, access, neighbor, utility
    |                  Budget: 21% of context window reserved for retrieval
    v
Agent Loop ───────── LLM + tool execution
    |                  Planning gate: announces plan before touching tools
    |                  Smart truncation: preserves tail of large tool outputs
    v
Turn Storage ─────── Every message embedded + stored + linked via graph edges
    |                  responds_to, part_of, mentions, produced
    v
Quality Eval ─────── Measures retrieval utilization (text overlap, trigrams, unigrams)
    |                  Tracks tool success, context waste, feeds ACAN training
    v
Memory Daemon ────── Worker thread extracts 9 knowledge types via LLM:
    |                  causal chains, monologues, concepts, corrections,
    |                  preferences, artifacts, decisions, skills, resolved memories
    v
Postflight ───────── Records orchestrator metrics (non-blocking)
```

### Between Sessions
At session end, KongBrain runs a combined extraction pass: skill graduation, metacognitive reflection, causal chain consolidation, soul graduation check, and soul evolution. A handoff note is written so the next session wakes up knowing what happened.
At session start, a wake-up briefing is synthesized from the handoff, recent monologues, soul content (if graduated), and identity state, then injected as inner speech so the agent knows who it is and what it was doing.
### Memory Daemon: background knowledge extraction
A worker thread runs throughout the session, batching turns every ~12K tokens and calling the configured LLM to extract the following (a structural sketch follows the list):
- Causal chains: trigger/outcome sequences with success/confidence
- Monologue traces: thinking blocks that reveal problem-solving approach
- Concepts: semantic nodes (architecture patterns, domain terms)
- Corrections: user-provided fixes (importance: 9)
- Preferences: behavioral rules learned from feedback
- Artifacts: file paths created or modified
- Decisions: important conclusions reached
- Skills: multi-step procedures (if 5+ tool calls in session)
- Resolved memories: completed tasks and confirmed facts
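A structural sketch of that daemon, assuming Node's `worker_threads` and invented message shapes; the actual extraction prompt and graph write-back are omitted:

```ts
import { parentPort } from "node:worker_threads";

const BATCH_TOKENS = 12_000; // flush roughly every ~12K tokens

type Turn = { role: string; text: string; tokens: number };
let buffer: Turn[] = [];
let buffered = 0;

parentPort?.on("message", (turn: Turn) => {
  buffer.push(turn);
  buffered += turn.tokens;
  if (buffered < BATCH_TOKENS) return;

  const batch = buffer;
  buffer = [];
  buffered = 0;
  // Here the daemon would call the configured LLM over `batch` and persist
  // the nine knowledge types above to the graph (omitted in this sketch).
  parentPort?.postMessage({ type: "batch-extracted", turns: batch.length });
});
```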
## Tools
Three tools are registered for the LLM:
- `recall`: Search graph memory by query
- `core_memory`: Read/write persistent core directives (tiered: always-loaded vs session-pinned)
- `introspect`: Inspect database state, verify memory counts, run diagnostics, check graduation status, migrate workspace files
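To make the surface concrete, a hypothetical `recall` call could look like this; the argument and result shapes are guessed for illustration and are not KongBrain's actual tool schema:

```ts
// Hypothetical invocation shape, as the LLM's tool-call layer might see it.
const hits = await tools.recall({
  query: "how we configured the SurrealDB HNSW index", // free-text memory query
  limit: 5,                                            // assumed parameter
});
// hits: ranked memory-graph matches with text, source table, and score.
```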
## Development
```bash
git clone https://github.com/42U/kongbrain.git
cd kongbrain
pnpm install
pnpm build
pnpm test
```

Link your local build to OpenClaw:

```bash
openclaw plugins install . --link
```

Then set `plugins.slots.contextEngine` to `"kongbrain"` in `~/.openclaw/openclaw.json` and run `openclaw`.
## Contributing
- Clone the repo and install dependencies (`pnpm install`)
- Make your changes
- Build (`pnpm build`) and run tests (`pnpm test`)
- Open a PR against `master`
The lobster doesn't accept contributions. The ape does.
MIT License | Built by 42U