# 🧠 Axon

Axon is an emergent cognitive architecture for multi-agent systems based on Global Workspace Theory.
```ts
import { Axon } from "axon-ai";

const axon = new Axon({
  llm: {
    baseUrl: "http://localhost:1234/v1",
    model: "qwen/qwen3-4b-2507",
    embeddingModel: "text-embedding-nomic-embed-text-v1.5",
    temperature: 0.7,
    provider: "lmstudio",
  },
});

const result = await axon.process("Brainstorm sustainable energy solutions");
console.log(result.finalBroadcast);
```

## Features
- Self-organizing - No predefined workflows or hardcoded agent interactions
- Emergent cognition - Thoughts compete for attention in a shared workspace
- Flexible configuration - Use any LLM provider (OpenAI, local, etc.)
- Dynamic specialists - Meta-agent creates specialists tailored to each problem
- Streaming support - Real-time observation of the cognitive process
## Installation

```sh
yarn add axon-ai
# or
pnpm add axon-ai
```

## 🧩 Core Concepts
- ThoughtChunk: The fundamental data unit with content, embeddings, activation energy, and lineage
- Workspace: An active medium managing ThoughtChunks, decay, and cluster detection
- Agents: Specialist processing units with specific roles and prompts
- Meta-Agent: Analyzes the initial prompt and spawns appropriate specialist agents
- Orchestrator: Manages cognitive cycles and the broadcast mechanism
## Architecture
The system operates in cognitive cycles:
- User provides an initial prompt
- Meta-Agent defines the specialist agents needed
- Each cycle:
  - Agents receive the most active ThoughtChunks
  - Agents generate new thoughts
  - New thoughts are added to the Workspace
  - All thoughts decay in energy
  - High-energy clusters trigger a broadcast
- The process continues for a configured number of cycles
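The cycle above can be sketched as a toy loop. This is an illustration of the decay-and-broadcast dynamic, not the package's orchestrator; the constants mirror the defaults in the configuration section (`decayRate: 0.95`, `broadcastThreshold: 10.0`), and everything else is made up for the example:

```js
// Toy model of one cognitive cycle: decay existing thoughts, add the
// agents' new ones, then check whether total workspace energy crosses
// the broadcast threshold. Constants mirror the config section below.
const DECAY_RATE = 0.95;
const BROADCAST_THRESHOLD = 10.0;

const thoughts = [];
function runCycle(newContents) {
  // all existing thoughts lose energy each cycle
  for (const t of thoughts) t.energy *= DECAY_RATE;
  // new thoughts enter at full activation
  for (const content of newContents) thoughts.push({ content, energy: 1.0 });
  // broadcast fires when accumulated energy crosses the threshold
  const total = thoughts.reduce((sum, t) => sum + t.energy, 0);
  return total >= BROADCAST_THRESHOLD;
}

let broadcasts = 0;
for (let cycle = 0; cycle < 10; cycle++) {
  if (runCycle(["idea A", "idea B"])) broadcasts++;
}
```

Because decay is multiplicative, total energy converges rather than growing without bound: a steady stream of thoughts eventually balances against decay, so broadcasts fire only while the workspace is still "heating up" or when a burst of activity pushes it over the threshold.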
## Usage

### Basic Example
```ts
import { Axon } from "axon-ai";

// Create a new Axon instance with custom config
const axon = new Axon({
  llm: {
    apiKey: "your-api-key", // Not needed for local LLMs
    baseUrl: "https://api.openai.com/v1",
    model: "gpt-4o",
    embeddingModel: "text-embedding-3-large",
  },
});

// Process a prompt
async function main() {
  const result = await axon.process(
    "Design a sustainable transportation system",
    { verbose: true },
  );
  console.log(result.finalBroadcast);
  console.log(`Total thoughts: ${result.thoughts.length}`);
}

main();
```

### Local LLM Example
```ts
// Using a local LLM server (LM Studio, Ollama, etc.)
const axon = new Axon({
  llm: {
    baseUrl: "http://localhost:1234/v1",
    model: "mistralai/mixtral-8x7b-instruct",
    provider: "lmstudio",
  },
});

const result = await axon.process("Analyze quantum computing impacts");
```

## ⚙️ Configuration
Axon can be fully configured through code:
```ts
const axon = new Axon({
  llm: {
    apiKey: "sk-...",
    baseUrl: "https://api.openai.com/v1",
    model: "gpt-4o",
    embeddingModel: "text-embedding-3-large",
    temperature: 0.7,
    provider: "openai",
  },
  context: {
    maxContextTokens: 32000,
    bufferTokens: 2000,
  },
  workspace: {
    decayRate: 0.95,
    activationThreshold: 7.0,
    baseActivationEnergy: 1.0,
    maxResonanceFactor: 2.0,
  },
  orchestrator: {
    maxCycles: 10,
    broadcastThreshold: 10.0,
  },
});
```

## Advanced Features

### Custom Event Handlers
```ts
// Listen for specific events
axon.on("broadcast", (broadcast) => {
  console.log("New broadcast synthesized:", broadcast);
});

axon.on("agentThought", (agent, thought) => {
  console.log(`Agent ${agent} thinking: ${thought}`);
});
```

### Streaming Results
```ts
const result = await axon.processWithStreaming(
  "Analyze the implications of quantum computing",
);
```

### Advanced Workspace Manipulation
```ts
// Access the workspace directly
const workspace = axon.getWorkspace();

// Add custom thoughts
workspace.createChunk(
  "Important insight to consider",
  undefined,
  "custom-source",
  [],
);

// Get the most active thoughts
const activeThoughts = workspace.getMostActiveChunks(5);
```

## 🧪 Examples
The package includes several examples demonstrating different usage scenarios:
```sh
# Run the basic example
yarn example:basic

# Run the advanced example with events and streaming
yarn example:advanced
```

Check the examples directory for more detailed examples and documentation.
## 🔬 Research & Theory
Axon implements Global Workspace Theory (GWT) as proposed by Bernard Baars and further developed by Stan Franklin in the LIDA cognitive architecture. GWT suggests that consciousness emerges from a competition among specialized cognitive processes, with winners being "broadcast" globally to the entire system.
## License
MIT