🔬 Local SQLite-based AI research agent swarm with multi-perspective analysis, long-horizon recursive framework, AgentDB self-learning, anti-hallucination controls, and MCP server. Swarm-by-default with parallel execution. No cloud dependencies.


🔬 Research Swarm - Local AI Research Agent System

A fully local, SQLite-based AI research agent system with long-horizon recursive framework, AgentDB self-learning, and MCP server support.

Created by rUv | GitHub

✨ Key Features

v1.1.0 - Swarm-by-Default Architecture

  • Multi-Agent Swarm (Default) - Automatic task decomposition into 3-7 specialized research agents
  • Multi-Perspective Analysis - Explorer, Depth Analyst, Verifier, Trend Analyst, Synthesizer
  • Parallel Execution - 3-5x faster with concurrent agent processing (up to 4 concurrent agents)
  • Adaptive Swarm Sizing - Automatically scales from 3-7 agents based on task complexity
  • Priority-Based Scheduling - Research → Verification → Synthesis phases
  • Backward Compatible - Single-agent mode via --single-agent flag

Core Features

  • 100% Local - SQLite database, no cloud dependencies
  • ED2551 Enhanced Research Mode - 5-phase recursive framework with 51-layer verification cascade
  • Long-Horizon Research - Multi-hour deep analysis with temporal trend tracking
  • AgentDB Self-Learning - Complete ReasoningBank integration with pattern learning
  • HNSW Vector Search - 150x faster similarity search with multi-level graph structure
  • Memory Distillation - Automated knowledge compression from successful patterns
  • Pattern Associations - Similarity-based linking between research patterns
  • Anti-Hallucination - Strict verification protocols with confidence scoring
  • Performance Optimized - 3,848 ops/sec with WAL mode and 16 database indexes
  • MCP Server - stdio and HTTP/SSE streaming support
  • Multi-Model - Anthropic Claude, OpenRouter, Google Gemini support
  • NPX Compatible - Run without installation via npx

🚀 Quick Start

NPX (No Installation Required)

# Multi-agent swarm (v1.1.0 default - 5 agents)
npx research-swarm research researcher "Analyze quantum computing trends"

# Simple tasks (3 agents, faster)
npx research-swarm research researcher "What are REST APIs?" --depth 3

# Complex research (7 agents, comprehensive)
npx research-swarm research researcher "AI safety analysis" --depth 8

# Single-agent mode (v1.0.1 behavior)
npx research-swarm research researcher "Quick question" --single-agent

Install Globally

# Install globally
npm install -g research-swarm

# Then use without npx
research-swarm research researcher "Your research task"

v1.1.0 Update: Swarm-by-default! Default command now spawns 3-7 agents for multi-perspective analysis. Use --single-agent for v1.0.1 behavior. See CHANGELOG.md

Installation Requirements

System Requirements:

  • Node.js >= 16.0.0
  • npm >= 7.0.0
  • Python 3.x (for native module compilation)
  • C++ compiler (GCC, Clang, or MSVC)

Troubleshooting Installation:

# If better-sqlite3 compilation fails, try:
npm install --ignore-scripts

# Or install with build tools:
npm install --build-from-source

# On Ubuntu/Debian:
sudo apt-get install python3 build-essential

# On macOS:
xcode-select --install

# On Windows:
npm install --global windows-build-tools

Basic Usage

# Initialize database (first time only)
npx research-swarm init

# Multi-agent swarm research (v1.1.0 default)
npx research-swarm research researcher "Analyze quantum computing trends"
# → Spawns 5 agents: Explorer, Depth Analyst, Verifier, Trend Analyst, Synthesizer

# View results
npx research-swarm list
npx research-swarm view <job-id>

v1.1.0 Swarm Examples

# Adaptive swarm sizing (automatic based on depth)
npx research-swarm research researcher "What are webhooks?" --depth 3
# → 3 agents (simple task: Explorer, Depth, Synthesizer)

npx research-swarm research researcher "Compare architectures" --depth 5
# → 5 agents (medium task: + Verifier, Trend Analyst)

npx research-swarm research researcher "AI safety analysis" --depth 8
# → 7 agents (complex: + Domain Expert, Critic)

# Custom swarm configuration
npx research-swarm research researcher "task" --swarm-size 3 --max-concurrent 4

# Single-agent mode (v1.0.1 behavior, lower cost)
npx research-swarm research researcher "Quick question" --single-agent

# Verbose mode (see all agent outputs)
npx research-swarm research researcher "task" --verbose

Advanced Configuration

Create .env file:

# Required
ANTHROPIC_API_KEY=sk-ant-...

# Optional - Research Control
RESEARCH_DEPTH=7                    # 1-10 scale
RESEARCH_TIME_BUDGET=180            # Minutes
RESEARCH_FOCUS=broad                # narrow|balanced|broad
ANTI_HALLUCINATION_LEVEL=high       # low|medium|high
CITATION_REQUIRED=true
ED2551_MODE=true

# Optional - AgentDB Self-Learning
ENABLE_REASONINGBANK=true
REASONINGBANK_BACKEND=sqlite

# Optional - Federation
ENABLE_FEDERATION=false
FEDERATION_MODE=docker

📖 Features

Multi-Agent Swarm Architecture (v1.1.0)

Default behavior: the task is automatically decomposed into specialized research agents:

Your Task
    ↓
Swarm Decomposition
    ↓
┌─────────────────────────────────────────────────┐
│ 🔍 Explorer (20%)      → Broad survey          │
│ 🔬 Depth Analyst (30%) → Technical deep dive   │
│ ✅ Verifier (20%)      → Fact checking         │
│ 📈 Trend Analyst (15%) → Temporal analysis     │
│ 🧩 Synthesizer (15%)   → Unified report        │
└─────────────────────────────────────────────────┘
    ↓
Parallel Execution (4 concurrent)
    ↓
Learning Session (ReasoningBank)
    ↓
Final Report

Adaptive Swarm Sizing:

  • Depth 1-3 (Simple): 3 agents (explorer, depth, synthesis)
  • Depth 4-6 (Medium): 5 agents (+ verifier, trend)
  • Depth 7-10 (Complex): 7 agents (+ domain expert, critic)
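
The sizing tiers above can be sketched as a simple depth-to-roles mapping. This is an illustrative sketch only (the role names and thresholds are taken from the list above, not from the package's internal code):

```javascript
// Illustrative sketch of adaptive swarm sizing (not research-swarm's actual code).
// Maps a research depth (1-10) to the agent roles described above.
function swarmForDepth(depth) {
  const roles = ['explorer', 'depth-analyst', 'synthesizer'];        // depth 1-3
  if (depth >= 4) roles.splice(2, 0, 'verifier', 'trend-analyst');   // depth 4-6
  if (depth >= 7) roles.splice(4, 0, 'domain-expert', 'critic');     // depth 7-10
  return roles;
}

console.log(swarmForDepth(3).length); // 3 agents
console.log(swarmForDepth(5).length); // 5 agents
console.log(swarmForDepth(8).length); // 7 agents
```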

Long-Horizon Recursive Research

Multi-phase research framework supporting hours-long research tasks:

  1. Initial Exploration (15% of time) - Broad survey and topic mapping
  2. Deep Analysis (40% of time) - Detailed investigation
  3. Verification & Validation (20% of time) - Cross-reference findings
  4. Citation Verification (15% of time) - Verify all sources
  5. Synthesis & Reporting (10% of time) - Compile final report
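
Given a total time budget (e.g. RESEARCH_TIME_BUDGET=180), the phase percentages above split it like this. A minimal sketch, assuming the percentages listed; the package may allocate time differently in practice:

```javascript
// Illustrative sketch: split a total time budget (minutes) across the
// five research phases using the percentages from the list above.
const PHASES = [
  ['initial-exploration', 0.15],
  ['deep-analysis',       0.40],
  ['verification',        0.20],
  ['citation-check',      0.15],
  ['synthesis',           0.10],
];

function allocateBudget(totalMinutes) {
  return PHASES.map(([name, share]) => [name, Math.round(totalMinutes * share)]);
}

console.log(allocateBudget(180)); // with 180 minutes, deep-analysis gets 72
```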

Anti-Hallucination Protocol

When ANTI_HALLUCINATION_LEVEL=high:

  • ✅ Only cite verified sources
  • ✅ Always provide URLs
  • ✅ Flag uncertain information with confidence scores
  • ✅ Cross-reference all claims
  • ❌ Never generate speculative data
  • ❌ Never create fake citations

AgentDB Self-Learning

Complete ReasoningBank integration with local SQLite storage:

Pattern Storage:

  • Automatic reward calculation based on quality metrics
  • Success/failure tracking with confidence scores
  • Critique generation for continuous improvement
  • Latency and token usage tracking

Memory Distillation:

  • Automated knowledge compression from multiple patterns
  • Category-based grouping (AI/ML, Cloud, Technology, etc.)
  • Key insights, success factors, and failure patterns extraction
  • Best practices identification and storage

Pattern Associations:

  • Similarity-based linking between patterns (0-1 score)
  • Association types: similar, complementary, contrasting, sequential
  • Learning value calculation for knowledge transfer
  • Cross-pattern analysis for improved recommendations

Learning Episodes:

  • Performance tracking over time with verdicts (success/failure/partial/retry)
  • Judgment scores and improvement rates
  • Temporal trend analysis
  • Continuous performance optimization

Vector Embeddings:

  • HNSW multi-level graph for 150x faster search
  • Content hashing for deduplication
  • Semantic similarity matching
  • Source type filtering (pattern/episode/task/report)

Federation Capabilities

Docker-based federated agent coordination:

  • Distribute research across multiple nodes
  • QUIC protocol for fast coordination
  • Fault-tolerant with automatic failover
  • Scales to hundreds of concurrent research tasks

🎯 MCP Server

Research Swarm provides a Model Context Protocol server with 6 tools:

Available MCP Tools

  1. research_swarm_init - Initialize database
  2. research_swarm_create_job - Create research job
  3. research_swarm_start_job - Start job execution
  4. research_swarm_get_job - Get job status
  5. research_swarm_list_jobs - List all jobs
  6. research_swarm_update_progress - Update job progress

Start MCP Server

# stdio mode (default)
research-swarm mcp

# HTTP/SSE mode
research-swarm mcp http --port 3000

MCP Integration

Add to your Claude Desktop or other MCP clients:

{
  "mcpServers": {
    "research-swarm": {
      "command": "npx",
      "args": ["@agentic-flow/research-swarm", "mcp"]
    }
  }
}

📊 Database Schema

SQLite database at ./data/research-jobs.db:

CREATE TABLE research_jobs (
  id TEXT PRIMARY KEY,              -- UUID
  agent TEXT NOT NULL,              -- Agent name
  task TEXT NOT NULL,               -- Research task
  status TEXT,                      -- pending|running|completed|failed
  progress INTEGER,                 -- 0-100%
  current_message TEXT,             -- Status message
  execution_log TEXT,               -- Full logs
  report_content TEXT,              -- Generated report
  report_format TEXT,               -- markdown|json|html
  duration_seconds INTEGER,         -- Execution time
  grounding_score REAL,             -- Quality score
  created_at TEXT,                  -- Timestamps
  completed_at TEXT,
  -- ... and 15 more fields
);
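
Since the database is plain SQLite, it can be queried directly. An example query using only the columns shown above (inspect the file with `sqlite3 ./data/research-jobs.db` first, since the full schema has additional fields):

```sql
-- Most recent completed jobs with their quality scores.
SELECT id, agent, task, grounding_score, duration_seconds
FROM research_jobs
WHERE status = 'completed'
ORDER BY completed_at DESC
LIMIT 5;
```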

🔧 CLI Commands

# Research (v1.1.0: Swarm by default)
research-swarm research <agent> "<task>" [options]
  -d, --depth <1-10>              Research depth
  -t, --time <minutes>            Time budget
  -f, --focus <mode>              Focus mode (narrow|balanced|broad)
  --anti-hallucination <level>    Verification level
  --no-citations                  Disable citations
  --no-ed2551                     Disable enhanced mode

  # v1.1.0 Swarm Options
  --single-agent                  Legacy single-agent mode (v1.0.1 behavior)
  --swarm-size <number>           Number of agents (3-7, default: adaptive)
  --max-concurrent <number>       Max concurrent agents (default: 4)
  --verbose                       Show all agent outputs

# Jobs
research-swarm list [options]
  -s, --status <status>           Filter by status
  -l, --limit <number>            Limit results

research-swarm view <job-id>      View job details

# AgentDB Learning
research-swarm learn               Run learning session (memory distillation)
  --min-patterns <number>         Minimum patterns required (default: 2)

research-swarm stats               Show AgentDB learning statistics

research-swarm benchmark           Run ReasoningBank performance benchmark
  --iterations <number>           Number of iterations (default: 10)

# Parallel Swarm
research-swarm swarm "<task1>" "<task2>" ...
  -a, --agent <name>              Agent type (default: researcher)
  -c, --concurrent <number>       Max concurrent tasks (default: 3)

# HNSW Vector Search
research-swarm hnsw:init           Initialize HNSW index
  -M <number>                     Connections per layer (default: 16)
  --ef-construction <number>      Search depth (default: 200)
  --max-layers <number>           Maximum layers (default: 5)

research-swarm hnsw:build          Build HNSW graph from vectors
  --batch-size <number>           Vectors per batch (default: 100)

research-swarm hnsw:search "<query>"  Search similar vectors
  -k <number>                     Number of results (default: 5)
  --ef <number>                   Search depth (default: 50)
  --source-type <type>            Filter by source type

research-swarm hnsw:stats          Show HNSW graph statistics

# System
research-swarm init                Initialize database
research-swarm mcp [mode]          Start MCP server
research-swarm --help              Show help
research-swarm --version           Show version

🎓 Examples

Quick Research Task (v1.1.0 Swarm)

# Spawns 3 agents (explorer, depth, synthesis)
research-swarm research researcher "What are webhooks?" --depth 3 --swarm-size 3

# Single-agent mode (faster, lower cost)
research-swarm research researcher "What are webhooks?" --depth 3 --single-agent

Deep Analysis with Full Swarm (v1.1.0)

# Spawns 7 agents: explorer, depth, verifier, trend, domain-expert, critic, synthesizer
research-swarm research researcher "Comprehensive AI safety analysis" \
  --depth 8 \
  --time 240 \
  --focus broad \
  --anti-hallucination high \
  --swarm-size 7 \
  --verbose

# Result: Multi-perspective report with verification and synthesis

Using OpenRouter

# Set in .env or environment
PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-...
COMPLETION_MODEL=anthropic/claude-3.5-sonnet

research-swarm research researcher "Your task"

Using Google Gemini

PROVIDER=gemini
GOOGLE_GEMINI_API_KEY=AIza...
COMPLETION_MODEL=gemini-2.0-flash-exp

research-swarm research researcher "Your task"

Parallel Swarm Execution

# Run 3 research tasks concurrently
research-swarm swarm \
  "Cloud computing trends 2024" \
  "Machine learning vs deep learning" \
  "TypeScript benefits" \
  --concurrent 3

# Automatically triggers learning session when 2+ tasks complete

Learning Session & Statistics

# Run manual learning session
research-swarm learn --min-patterns 3

# View learning statistics
research-swarm stats

# Performance benchmark
research-swarm benchmark --iterations 20
# Initialize and build HNSW graph
research-swarm hnsw:init
research-swarm hnsw:build --batch-size 50

# Search for similar research
research-swarm hnsw:search "machine learning trends" -k 10

# View graph statistics
research-swarm hnsw:stats

📦 Package Exports

v1.0.1+ (Working Imports)

// Default import (all functions)
import swarm from 'research-swarm';
await swarm.initDatabase();
const jobId = await swarm.createResearchJob({ agent: 'researcher', task: 'Your task' });

// Named imports (21 exports available)
import { createResearchJob, initDatabase, VERSION } from 'research-swarm';
await initDatabase();
const job = await createResearchJob({ agent: 'researcher', task: 'Your task', depth: 5 });

// Subpath imports
import { getDatabase, createJob } from 'research-swarm/db';
import { storeResearchPattern } from 'research-swarm/reasoningbank';

// All exports:
// - Database: initDatabase, createJob, updateProgress, markComplete, getJobStatus, getJobs
// - ReasoningBank: storeResearchPattern, searchSimilarPatterns, getLearningStats
// - HNSW: initializeHNSWIndex, buildHNSWGraph, searchHNSW, addVectorToHNSW, getHNSWStats
// - Utilities: createResearchJob, listJobs, VERSION, PACKAGE_NAME

Note: v1.0.0 had broken imports due to missing lib/index.js. Upgrade to v1.0.1+ for working package imports.

🛡️ Security

  • ✅ No hardcoded credentials
  • ✅ API keys via environment variables
  • ✅ Input validation on all commands
  • ✅ SQL injection protection (parameterized queries)
  • ✅ Process isolation for research tasks
  • ✅ Sandboxed execution environment

📝 License

ISC License - Copyright (c) 2025 rUv

🤝 Contributing

Contributions welcome! This project maintains a local-first, no-cloud-services architecture.

  1. Fork the repository
  2. Create your feature branch
  3. Commit your changes
  4. Push to the branch
  5. Create a Pull Request

📞 Support


Created by rUv | GitHub | npm

Built with ❤️ using Claude Sonnet 4.5 and agentic-flow