🐝 hive-tools - Multi-Model Consensus Platform
Why trust one AI when you can trust them all?
Eliminate AI hallucinations through our revolutionary 4-stage consensus pipeline. hive-tools combines multiple AI models to deliver production-ready, trustworthy responses for mission-critical applications.
✨ What You Get
🆓 Free Tier (No Credit Card Required)
- 5 daily conversations - Perfect for trying our consensus technology
- Single-model AI queries - Access to OpenAI, Anthropic, Google, and more
- IDE integration - Works with VS Code, Cursor, Windsurf
- Provider management - Configure and test multiple AI providers
🚀 Premium Features (from $5/month)
- 7-day FREE trial with unlimited conversations
- 4-stage consensus pipeline - Generator → Refiner → Validator → Curator
- 323+ models from 55+ providers with intelligent selection
- Advanced analytics - Cost tracking, performance monitoring
- Team collaboration - Shared usage pools and admin controls
🚀 Quick Start
Installation Options
Option 1: NPM (Recommended)
npm install -g @hivetechs/hive-ai
Option 2: Clone & Build
git clone https://github.com/hivetechs-collective/hive.ai.git
cd hive.ai
npm install && npm run build
Option 3: GitHub Release (Manual)
# Download binary for your platform from:
# https://github.com/hivetechs-collective/hive.ai/releases
🔄 Staying Updated
Why Keep Updated?
- ✅ Latest AI models from 323+ providers
- ✅ Security patches and bug fixes
- ✅ New features and consensus improvements
- ✅ Performance optimizations
- ✅ Compatibility with latest IDEs
Check Your Current Version
# Check your current version
hive --version
# Or get detailed version info
npm list -g @hivetechs/hive-ai
Automatic Update System (NEW!)
hive-tools now includes an intelligent auto-update system:
Check for Updates:
hive update-check # Check for package updates
hive update-status # Show detailed update status
Configure Notifications:
hive update-configure # Set update preferences
Features:
- 🔔 Smart Notifications - Get notified when updates are available
- 📅 Scheduled Checks - Weekly background checks (configurable)
- 🎯 Update Types - Major, minor, and patch update filtering
- 🛡️ Non-Intrusive - Silent background operation
Manual Update Commands
Update to Latest Stable Version:
npm update -g @hivetechs/hive-ai
Get Beta Features (Optional):
npm install -g @hivetechs/hive-ai@beta
When to Update
Update immediately when:
- 🚨 Security updates are released
- 🐛 You encounter known bugs
- 🆕 New AI models are added
- ⚙️ IDE compatibility issues arise
Check for updates:
- 📅 Weekly: For active development projects
- 📅 Monthly: For production environments
- 📅 As needed: When new features are announced
Update Process
Backup your configuration (optional - but recommended):
# Your settings are in ~/.hive-ai/ and are preserved during updates
ls ~/.hive-ai/
Update the package:
npm update -g @hivetechs/hive-ai
Verify the update:
hive --version
Test basic functionality:
hive configure --help
💡 Pro Tip: Your license keys, provider configurations, and conversation history are automatically preserved during updates.
Troubleshooting Updates
If update fails:
# Clear npm cache and retry
npm cache clean --force
npm install -g @hivetechs/hive-ai@latest
If you get a permissions error on macOS/Linux:
# Use sudo (not recommended long-term)
sudo npm update -g @hivetechs/hive-ai
# Better: Fix npm permissions permanently
npm config set prefix ~/.npm-global
export PATH=~/.npm-global/bin:$PATH
Reset to clean state if needed:
# Uninstall and reinstall
npm uninstall -g @hivetechs/hive-ai
npm install -g @hivetechs/hive-ai
Release Notes & What's New
Stay informed about new features and changes:
- 📢 Release announcements: hivetechs.io/changelog
- 📧 Email updates: Subscribe at hivetechs.io/updates
- 🐙 GitHub releases: github.com/hivetechs-collective/hive.ai/releases
First Steps
🚨 IMPORTANT: Get Your License First
Before using hive-tools, you need a license (free, trial, or premium):
- Get FREE License: Visit hivetechs.io/pricing
- Start FREE Trial: 7 days unlimited at hivetechs.io/pricing
- Configure License:
hive-ai configure --license YOUR_LICENSE_KEY
Then Start Using:
# Setup wizard
hive-ai setup wizard
# Or configure your own API keys
hive-ai provider configure openai YOUR_OPENAI_KEY
No license = No functionality. All features require a valid license key from hivetechs.io.
The onboarding assistant will guide you through:
- License key validation
- IDE configuration (VS Code, Cursor, or Windsurf)
- Adding one AI provider for immediate use
- Testing your first consensus query
📋 Subscription Plans & Conversation Limits
| Plan | Daily Limit | Features |
|---|---|---|
| Free | 10 conversations | All features included |
| Basic | 50 conversations | All features included |
| Standard | 100 conversations | All features included |
| Premium | 200 conversations | All features included |
| Team | Unlimited | All features included |
Visit hivetechs.io/pricing to upgrade your plan.
After completing the onboarding process, you'll be ready to use hive-tools in your IDE by typing @hive-tools. followed by a command.
🚀 Transformative Features
🧠 4-Stage Consensus Pipeline
hive-tools's core innovation is its unique 4-stage consensus pipeline that transforms user queries into exceptionally high-quality responses:
- Generator Stage (GPT-3.5-Turbo) - Creates comprehensive initial responses with broad topic coverage
- Refiner Stage (GPT-4-Turbo) - Enhances clarity, corrects inaccuracies, and improves structure
- Validator Stage (GPT-4-Turbo) - Verifies factual accuracy and performs critical reasoning checks
- Curator Stage (GPT-3.5-Turbo) - Delivers polished, well-formatted responses with consistent tone
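In outline, each stage simply feeds its output to the next. The TypeScript sketch below illustrates that hand-off; callModel and the stage table are illustrative assumptions for this example, not the actual hive-tools implementation:

```typescript
// Illustrative sketch of a 4-stage consensus pipeline.
type Stage = { name: string; model: string; temperature: number };

const stages: Stage[] = [
  { name: "Generator", model: "gpt-3.5-turbo", temperature: 0.7 },
  { name: "Refiner",   model: "gpt-4-turbo",   temperature: 0.5 },
  { name: "Validator", model: "gpt-4-turbo",   temperature: 0.1 },
  { name: "Curator",   model: "gpt-3.5-turbo", temperature: 0.4 },
];

async function callModel(stage: Stage, input: string): Promise<string> {
  // Placeholder: a real implementation would call the provider's API here.
  return `[${stage.name}] ${input}`;
}

async function consensus(query: string): Promise<string> {
  // Each stage receives the previous stage's output as its input.
  let draft = query;
  for (const stage of stages) {
    draft = await callModel(stage, draft);
  }
  return draft;
}

consensus("What is the capital of France?").then(console.log);
```

The key design point is the strict ordering: the Validator only ever sees refined content, and the Curator only ever sees validated content.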
🔄 Thematic Knowledge Retrieval System
Unlike standard AI assistants, hive-tools features an advanced thematic knowledge retrieval system that:
- Automatically maintains conversation continuity across related topics
- Identifies thematic relationships between seemingly disparate queries
- Builds a comprehensive knowledge graph from user interactions
- Provides context-aware responses without requiring explicit conversation references
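A common way to implement this kind of thematic matching is embedding similarity. The sketch below is a generic TypeScript illustration of that idea, not hive-tools's actual retrieval code (StoredTurn and relatedContext are hypothetical names):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface StoredTurn { text: string; embedding: number[] }

// Return the k past turns most thematically related to the new query embedding,
// without the user ever referencing them explicitly.
function relatedContext(query: number[], history: StoredTurn[], k = 3): StoredTurn[] {
  return [...history]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

The retrieved turns would then be prepended to the prompt, which is how seemingly disparate queries end up sharing context.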
🧩 Technical Domain Expertise
hive-tools excels at specialized technical domains with deep understanding of:
- Software Engineering & Programming Languages
- Machine Learning & AI Systems
- Database Technologies & Data Science
- Cloud Infrastructure & DevOps
- Security & Authentication
- Web Development & Modern Frameworks
📚 Persistent Contextual Memory
Our SQLite-based persistent storage ensures:
- Long-term memory across sessions
- Automatic context retrieval for related questions
- Progressive knowledge building from user interactions
- Intelligent response adaptation based on conversation history
🚀 Getting Started
1. Environment Configuration
hive-tools requires OpenAI API access for its multi-model consensus pipeline. Configure your environment by creating a file at src/env/keys.ts:
export const OPENAI_API_KEY = "your_key_here";
⚠️ Security Note: For production environments, we recommend using environment variables or a secure secrets management solution.
2. Installation
Install all dependencies to power the advanced consensus pipeline and thematic knowledge retrieval system:
npm install
# or
yarn install
3. Build the Server
Compile the TypeScript implementation of our multi-stage consensus pipeline:
npm run build
4. Database Initialization
The SQLite database for persistent conversation memory will be automatically initialized on first startup. The system stores:
- Conversation history with timestamped entries
- Semantic topic embeddings for thematic retrieval
- Conversation metadata for context persistence
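The real schema is internal to hive-tools and may differ, but a store holding those three kinds of data could look roughly like the DDL below (table and column names are assumptions for illustration, shown as strings a SQLite driver could execute):

```typescript
// Hypothetical shape of the persistent store -- not hive-tools's actual schema.
const CONVERSATIONS_TABLE = `
  CREATE TABLE IF NOT EXISTS conversations (
    id         INTEGER PRIMARY KEY,
    created_at TEXT NOT NULL,      -- timestamped entries
    prompt     TEXT NOT NULL,
    response   TEXT NOT NULL,
    metadata   TEXT                -- JSON blob for context persistence
  )`;

const TOPIC_EMBEDDINGS_TABLE = `
  CREATE TABLE IF NOT EXISTS topic_embeddings (
    conversation_id INTEGER REFERENCES conversations(id),
    embedding       BLOB NOT NULL  -- semantic vector for thematic retrieval
  )`;
```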
5. Using the CLI Tool
hive-tools offers a powerful CLI tool for easy interaction:
# Start the clean interactive CLI (recommended)
npm run cli
# Or use the colorful CLI with syntax highlighting
npm run cli:color
# Run a direct query (quiet mode is the default, showing only the result)
npm run cli -- consensus "What is the capital of France?"
# Enable verbose mode for debugging or detailed output
npm run cli -- --debug consensus "What is the capital of France?"
You can also install the CLI globally for easier access:
# Install the CLI globally
npm link
# Now you can use the 'hive' command directly
hive consensus "What is the capital of France?"
# Or simply type your query directly - any unrecognized input is treated as a consensus query
hive "What is the capital of France?"
For detailed CLI documentation, see CLI.md.
Once the CLI starts, you'll see the hive> prompt. From there, you can run commands directly:
hive> list_providers
hive> configure_provider OpenAI sk-your-api-key-here
hive> test_providers
hive> configure_pipeline_interactive my_pipeline
🚀 Shorthand Commands
hive-tools supports intuitive shorthand commands for all tools, making interactions faster and more natural. These shortcuts work both in the CLI and when interacting with AI agents through your IDE.
Main Tools
- Consensus: consensus, cons
  - Example: consensus what is quantum computing?
- Capture: capture, cap
  - Example: capture My insights about React | React is a JavaScript library for building user interfaces | code_analysis | react,javascript
Profile Management
- List Profiles: list_profiles, profiles, lp
  - Example: profiles
- Get Profile: get_profile, profile, gp
  - Example: profile default
- Update Profile: update_profile, up
  - Example: update_profile default | {"temperature": 0.7}
Provider Configuration
- List Providers: list_providers, providers, lprov
  - Example: providers
- Configure Provider: configure_provider, config_provider, cp
  - Example: configure_provider OpenAI sk-abc123 https://api.openai.com
- Test Providers: test_providers, test, tp
  - Example: test_providers or test OpenAI:gpt-4
Pipeline Configuration
- List Pipeline Profiles: list_pipeline_profiles, pipelines, lpp
  - Example: pipelines
- Configure Pipeline: configure_pipeline, config_pipeline, cpp
  - Example: configure_pipeline default OpenAI:gpt-4:0.7 Anthropic:claude-3-opus:0.5 Gemini:gemini-pro:0.3
- Set Default Profile: set_default_profile, default_profile, sdp
  - Example: set_default_profile high_quality
Model Registry Management
- Update Model Registry: update_model_registry, update_registry, umr
  - Example: update_model_registry
- Add Custom Model: add_custom_model, add_model, acm
  - Example: add_custom_model OpenAI gpt-4-turbo-preview GENERAL
- List Models: list_models, models, lm
  - Example: list_models OpenAI
This interactive mode makes it much easier to work with hive-tools, as you don't need to prefix commands with npm run cli --. We recommend using this interactive mode for all your hive-tools configuration and usage.
CLI Options
- Standard CLI (npm run cli): Current CLI implementation with enhanced functionality
- Colorful CLI (npm run cli:color): CLI with syntax highlighting for better readability
- Legacy CLI (npm run cli:old): Original CLI implementation (for backward compatibility)
Note: The colorful CLI enhances the terminal experience with color-coded commands, parameters, and output messages, making it easier to read and navigate the CLI interface.
6. Configure MCP in Your IDE
hive-tools integrates seamlessly with modern IDEs like Windsurf, Cursor, VS Code, and Zed. The easiest way to configure is using our automatic configuration script:
npm run configure-ide
This script automatically generates all necessary configuration files for supported IDEs:
- Windsurf: Configures ~/.codeium/windsurf/mcp_config.json
- Cursor: Configures ~/.cursor/mcp.json
- VS Code: Configures .vscode/settings.json
- Zed: Configures .zed/mcp_config.json
For detailed IDE-specific instructions, see the IDE Configuration Guide.
You can also manually configure your IDE using this configuration:
{
"mcpServers": {
"hive-tools": {
"command": "node",
"args": [
"/path/to/your/hive.ai/dist/index.js"
]
}
}
}
📘 Pro Tip: Replace /path/to/your/hive.ai/ with the actual path to your hive-tools installation.
After configuration, you'll see the "hive-tools Consensus" tool available in your IDE. This single tool provides access to our entire multi-model pipeline and thematic knowledge system.
For detailed integration guides, visit our documentation portal.
💪️ Available Tools
hive-tools provides a comprehensive suite of 11 tools through the MCP server, organized into five categories.
💡 CLI Usage Tip: While the examples below use the full hive-tools.command format for IDE integration, you can also use our interactive CLI by running npm run cli once and then typing commands directly at the hive> prompt without any prefix.
1️⃣ Core Consensus Tools
consensus
Use the hive-tools multi-model consensus pipeline to generate high-quality responses.
hive-tools.consensus: What are the tradeoffs between microservices and monoliths?
Optional parameters:
- profile_id: Specify a pipeline profile (default: the default profile)
- conversation_id: Continue a specific conversation thread
capture
Capture insights, code analyses, and other valuable content into the hive-tools knowledge base.
hive-tools.capture:
title: "React Component Best Practices"
content_type: "best_practice"
content: "1. Use functional components with hooks instead of class components..."
tags: ["react", "frontend", "components"]
Parameters:
- title: Title for the captured content
- content_type: Type of content ("code_analysis", "architecture_insight", "design_pattern", "best_practice", "general")
- content: The main content to capture
- tags: (Optional) Array of tags to categorize the content
2️⃣ Provider Configuration Tools
list_providers
List all configured LLM providers with masked API keys.
hive-tools.list_providers
configure_provider
Configure an LLM provider with an API key and optional base URL.
hive-tools.configure_provider: OpenAI sk-your-api-key-here
hive-tools.configure_provider: OpenAI sk-your-api-key-here https://custom-endpoint.com
Format: PROVIDER_NAME API_KEY [BASE_URL]
test_providers
Test configured LLM providers to verify API keys and connectivity.
hive-tools.test_providers
Test a specific provider with optional model:
hive-tools.test_providers: OpenAI:gpt-4
3️⃣ Pipeline Configuration Tools
list_pipeline_profiles
List all configured pipeline profiles.
hive-tools.list_pipeline_profiles
configure_pipeline
Configure a pipeline profile with models for each stage. The order of models is extremely important!
hive-tools.configure_pipeline: profile_name GENERATOR REFINER VALIDATOR [CURATOR]
Where:
- profile_name: A simple name you choose for this configuration (e.g., "standard", "fast", "premium")
- GENERATOR: The first model that creates initial content (position 1)
- REFINER: The second model that improves the content (position 2)
- VALIDATOR: The third model that checks facts (position 3)
- CURATOR: The fourth model that formats the final output (position 4, optional)
Each stage uses this format: PROVIDER_NAME:MODEL_NAME[:TEMPERATURE]
Example:
hive-tools.configure_pipeline: standard OpenAI:gpt-3.5-turbo:0.7 OpenAI:gpt-4:0.0 Anthropic:claude-3-haiku:0.0
This creates a pipeline named "standard" where:
- OpenAI's GPT-3.5 is the Generator (first position)
- OpenAI's GPT-4 is the Refiner (second position)
- Anthropic's Claude is the Validator (third position)
- Claude is also used as the Curator since none was specified
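The PROVIDER_NAME:MODEL_NAME[:TEMPERATURE] stage format described above is simple enough to parse mechanically. Here is a small TypeScript sketch of such a parser (parseStage is a hypothetical helper for illustration, not part of hive-tools):

```typescript
interface StageSpec {
  provider: string;
  model: string;
  temperature: number; // assumed default of 0.7 when omitted
}

// Parse "PROVIDER:MODEL[:TEMPERATURE]", e.g. "OpenAI:gpt-4:0.0".
// Model names may contain dots and hyphens but never colons.
function parseStage(spec: string): StageSpec {
  const parts = spec.split(":");
  if (parts.length < 2 || parts.length > 3) {
    throw new Error(`Invalid stage spec: ${spec}`);
  }
  const temperature = parts.length === 3 ? Number(parts[2]) : 0.7;
  if (Number.isNaN(temperature) || temperature < 0) {
    throw new Error(`Invalid temperature in: ${spec}`);
  }
  return { provider: parts[0], model: parts[1], temperature };
}
```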
configure_pipeline_interactive
Configure a pipeline profile interactively with guided model selection.
hive-tools.configure_pipeline_interactive: my_interactive_pipeline
This interactive tool:
- Lists Available Providers: Shows all configured providers (Anthropic, Grok, Gemini, etc.)
- Displays Available Models: For each provider, shows available models with descriptions
- Guides Through Each Stage: Walks you through configuring each pipeline stage:
- Generator (creates initial content)
- Refiner (improves the content)
- Validator (checks for accuracy)
- Curator (adds final polish)
- Sets Default Temperatures: Suggests appropriate temperature settings for each stage
Benefits of Interactive Configuration:
- No need to memorize model names
- Ensures provider compatibility
- Suggests appropriate temperature values
- Reduces configuration errors
- Shows model descriptions to help with selection
Example Interactive Session:
Configuring pipeline profile: my_interactive_pipeline
=== Configuring Generator stage ===
Available providers:
1. Anthropic
2. Gemini
3. Grok
Using provider: Anthropic
Available models:
claude-3-haiku-20240307 - Fast and efficient model for routine tasks
claude-3-sonnet-20240229 - Balanced model for most use cases
claude-3-opus-20240229 - Most capable model for complex tasks
Selected model: claude-3-haiku-20240307 (temperature: 0.7)
[Similar process repeats for Refiner, Validator, and Curator stages]
✅ Pipeline profile "my_interactive_pipeline" configured successfully!
set_default_profile
Set a pipeline profile as the default for consensus operations.
hive-tools.set_default_profile: standard
4️⃣ Profile Management Tools
list_profiles
Lists available provider profiles in the hive-tools system.
hive-tools.list_profiles
get_profile
Gets details of a specific provider profile.
hive-tools.get_profile: default
update_profile
Updates a provider profile configuration.
hive-tools.update_profile:
profile_name: "default"
profile_data: "{ ... profile configuration ... }"
🔍 Configuration Guide for Beginners
Setting Up Your AI Pipeline: Step-by-Step
💡 Success Tip: Think of this as setting up a team of AI assistants, each with a specific job. By the end, you'll have your own customized AI team ready to work together!
Think of the consensus pipeline as an assembly line with four stations, each handled by an AI model you choose. Here's how to set up your own custom pipeline in simple steps:
Step 1: Start the CLI Tool
Begin by launching the interactive CLI:
npm run cli
Wait for the hive> prompt to appear, then proceed with the following steps directly at this prompt.
Step 2: Configure Your Providers (Required First Step)
Important: You must configure your providers with API keys before creating any pipelines. This is a required first step:
hive> configure_provider OpenAI your-openai-key-here
hive> configure_provider Anthropic your-anthropic-key-here
hive> configure_provider Gemini your-gemini-key-here
hive> configure_provider Grok your-grok-key-here
The system will automatically detect the appropriate base URLs for each provider.
Step 3: Test Your Connections
Make sure all your providers are working:
hive> test_providers
Step 4: Create a Pipeline with Interactive Configuration (Recommended)
The recommended way to create a pipeline is using our interactive configuration tool, which guides you through selecting the right providers and models:
hive> configure_pipeline_interactive my_custom_pipeline
This interactive tool will:
- Show you all available providers you've configured
- List compatible models for each provider with descriptions
- Allow you to select the best model for each pipeline stage
- Suggest appropriate temperature settings based on each stage's purpose
- Let you decide whether to include an optional Curator stage
- Ask if you want to set this as your default pipeline
Alternative: Manual Pipeline Configuration
If you prefer, you can also manually configure pipelines for different needs:
# Premium pipeline with top models
hive> configure_pipeline premium Anthropic:claude-3-opus:0.7 OpenAI:gpt-4:0.3 OpenAI:gpt-4:0.1 Anthropic:claude-3-sonnet:0.5
# Balanced pipeline with mixed providers
hive> configure_pipeline balanced OpenAI:gpt-4:0.7 Anthropic:claude-3-sonnet:0.3 Gemini:gemini-pro:0.1 OpenAI:gpt-3.5-turbo:0.5
# Budget-friendly pipeline
hive> configure_pipeline basic OpenAI:gpt-3.5-turbo:0.7 OpenAI:gpt-3.5-turbo:0.0 OpenAI:gpt-3.5-turbo:0.0
Each pipeline has four positions (the last one is optional):
- Generator: Creates the first draft (like a writer)
- Refiner: Improves the draft (like an editor)
- Validator: Checks facts and accuracy (like a fact-checker)
- Curator: Formats the final response (like a publisher)
The numbers after model names (like 0.7 or 0.0) control creativity - higher means more creative, lower means more consistent.
Step 5: Choose Your Default Pipeline
Select which pipeline to use when you don't specify one:
hive> set_default_profile premium
Step 6: Using Your AI Pipeline
Once configured, you can use your pipeline directly from the CLI:
# Ask a question using your default pipeline
hive> consensus What's the best way to learn JavaScript?
# Use a specific pipeline for this question only
hive> consensus How do quantum computers work? --profile premium
# Save insights to the knowledge base
hive> capture "JavaScript Best Practices" best_practice "Always use const and let instead of var..." javascript,frontend
IDE Integration
In your IDE, you'll use the full tool name format:
hive-tools.consensus: What's the best way to learn JavaScript?
hive-tools.consensus:
prompt: "How do quantum computers work?"
profile_id: "premium"
Interactive Pipeline Configuration Experience
The configure_pipeline_interactive command provides a truly interactive experience that walks you through each step of creating a pipeline profile. Here's what to expect:
=== Configuring Generator stage ===
Available providers:
1. Gemini
2. OpenAI
3. Anthropic
4. Grok
Select provider number: 3
Selected provider: Anthropic
Available models:
1. claude-3-opus-20240229 - Most powerful Claude model for complex tasks
2. claude-3-sonnet-20240229 - Balanced model for most tasks
3. claude-3-haiku-20240307 - Fastest and most cost-effective Claude model
Select model number: 1
Enter temperature (0.0-1.0) [default: 0.7]: 0.7
Selected model: claude-3-opus-20240229 (temperature: 0.7)
This approach ensures you select compatible models and configure your pipeline correctly without needing to memorize model names or provider compatibility.
Quick Tips for Beginners
- Always configure providers first before trying to create pipelines
- The order of models in a pipeline is critical - position determines function!
- You can mix and match different providers in the same pipeline
- For important questions, use your best models at each stage
- For quick answers, use faster/cheaper models
- When in doubt, use list_pipeline_profiles to see your configurations
- After creating a pipeline interactively, you can fine-tune it with the standard configure_pipeline command
🌡️ Understanding Temperature Settings
Temperature is a crucial parameter that controls the randomness and creativity of AI model outputs. Understanding how to set temperature appropriately can significantly impact the quality and consistency of responses from your consensus pipeline.
What is Temperature?
Temperature is a hyperparameter that affects how the AI model selects the next token in a sequence. It's typically set between 0.0 and 1.0, with some models supporting higher values:
- Low temperature (0.0-0.3): More deterministic, focused, and consistent responses
- Medium temperature (0.4-0.7): Balanced creativity and coherence
- High temperature (0.8-1.0+): More random, creative, and diverse responses
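Under the hood, temperature rescales the model's token logits before sampling: dividing by a value below 1.0 sharpens the distribution toward the most probable token, while a value above 1.0 flattens it. The following is a generic TypeScript illustration of that math, not hive-tools code:

```typescript
// Convert raw logits to sampling probabilities at a given temperature.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  // Temperature 0 degenerates to greedy argmax selection.
  if (temperature === 0) {
    const best = logits.indexOf(Math.max(...logits));
    return logits.map((_, i) => (i === best ? 1 : 0));
  }
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.1];
console.log(softmaxWithTemperature(logits, 0.2)); // sharp: mass concentrates on the top token
console.log(softmaxWithTemperature(logits, 1.0)); // baseline softmax
console.log(softmaxWithTemperature(logits, 2.0)); // flatter: more diverse sampling
```

This is why low-temperature stages (like the Validator) give near-identical answers across runs, while high-temperature stages (like the Generator) vary.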
Temperature Effects by Value
| Temperature | Characteristics | Best For | Avoid For |
|---|---|---|---|
| 0.0 | Highly deterministic, always selects the most probable token | Factual Q&A, code generation, structured data | Creative writing, brainstorming, casual conversation |
| 0.1-0.3 | Very focused, consistent, and predictable | Technical documentation, definitions, instructions | Open-ended ideation, diverse alternatives |
| 0.4-0.6 | Good balance of coherence and variety | Most general use cases, explanations | Highly formal or highly creative tasks |
| 0.7-0.8 | Creative but still coherent | Content generation, brainstorming | Critical factual responses, code generation |
| 0.9-1.0+ | Highly creative, sometimes unpredictable | Creative writing, idea generation, exploration | Technical accuracy, consistency between runs |
Recommended Temperature Settings by Pipeline Stage
Each stage of the consensus pipeline benefits from different temperature settings:
Generator Stage: 0.7-0.8
- Rationale: Higher temperature encourages broader exploration of ideas and more comprehensive initial responses
- Goal: Generate diverse content that covers multiple aspects of the query
Refiner Stage: 0.4-0.6
- Rationale: Moderate temperature balances creativity with structure improvement
- Goal: Enhance the content while maintaining coherence and adding valuable details
Validator Stage: 0.0-0.3
- Rationale: Low temperature ensures consistent fact-checking and error detection
- Goal: Critically evaluate content for accuracy with minimal randomness
Curator Stage: 0.3-0.5
- Rationale: Moderate-low temperature provides consistent formatting while allowing some flexibility
- Goal: Polish and format content with reliable, predictable results
Temperature Strategies for Different Use Cases
Technical Documentation
- Generator: 0.5-0.6 (balanced initial draft)
- Refiner: 0.3-0.4 (focused improvements)
- Validator: 0.0-0.1 (strict accuracy checking)
- Curator: 0.2-0.3 (consistent formatting)
Creative Content
- Generator: 0.8-0.9 (highly creative initial ideas)
- Refiner: 0.6-0.7 (creative but more structured improvements)
- Validator: 0.2-0.3 (fact checking while preserving style)
- Curator: 0.4-0.5 (stylistic formatting)
Balanced General-Purpose
- Generator: 0.7 (moderately creative)
- Refiner: 0.5 (balanced improvements)
- Validator: 0.2 (focused fact checking)
- Curator: 0.4 (balanced formatting)
Fine-Tuning Temperature
When setting temperature in your pipeline configuration, consider these tips:
- Start with defaults: Begin with our recommended settings and adjust based on results
- Iterative refinement: Test different temperatures and compare outputs
- Consider query complexity: More complex queries often benefit from lower temperatures
- Balance across stages: If one stage uses high temperature, consider lower temperatures in other stages
- Monitor consistency: Higher temperatures increase variability between runs
When configuring a pipeline with specific temperatures:
hive> configure_pipeline custom_pipeline OpenAI:gpt-3.5-turbo:0.7 OpenAI:gpt-4-turbo:0.5 Anthropic:claude-3-haiku:0.2 OpenAI:gpt-3.5-turbo:0.4
This creates a pipeline with carefully balanced temperature settings across all four stages.
💡 Recommended User Flow
For the best experience with hive-tools, we recommend following this workflow:
- Start the CLI: Run npm run cli to launch the interactive CLI
- Configure Providers: Set up your API keys with configure_provider (required first step)
- Test Connections: Verify your providers work with test_providers
- Create Pipeline Interactively: Use configure_pipeline_interactive to create a pipeline
- Set as Default: Make your new pipeline the default if desired
- Use the Pipeline: Start using the consensus command with your configured pipeline
This workflow ensures you have a properly configured system with the right models for each stage of the pipeline.
🧰️ Using hive-tools Consensus
hive-tools revolutionizes the way you interact with AI assistants. Here's how to leverage its unique capabilities:
Contextual Conversations
Start asking technical questions and watch as hive-tools maintains context across related topics:
- "What's the difference between REST and GraphQL?"
- Later: "How would I implement authentication in each approach?"
hive-tools automatically connects these related queries without you needing to reference the previous conversation.
Multi-Stage Processing
Every query passes through our comprehensive 4-stage pipeline:
- Generator creates comprehensive initial responses
- Refiner enhances clarity and structure
- Validator verifies factual accuracy
- Curator delivers polished, well-formatted results
Technical Domain Expertise
Test hive-tools's deep technical knowledge with complex questions spanning multiple domains:
- Complex software architecture decisions
- Machine learning implementation details
- Database optimization strategies
- System design considerations
💡 Tip: For optimal results, ask follow-up questions on related topics to leverage hive-tools's thematic knowledge retrieval system.
📊 Architecture Overview
src/
├── tools/
│ └── hiveai/
│ ├── consensus.ts # 4-stage consensus pipeline implementation
│ ├── provider-config.ts # Provider configuration tools
│ ├── pipeline-config.ts # Pipeline profile management
│ └── conversation-memory.ts # In-memory conversation tracking
├── storage/
│ ├── database.ts # SQLite database management
│ ├── contextManager.ts # Persistent context management
│ ├── knowledgeRetrieval.ts # Thematic relationship detection
│ ├── topicTagging.ts # Technical domain topic extraction
│ ├── userManager.ts # User identification and management
│ └── cloudSync.ts # Cloud synchronization for user data
├── cloudflare/
│ ├── worker.js # Cloudflare Workers API implementation
│ ├── schema.sql # D1 database schema
│ └── wrangler.toml # Cloudflare Workers configuration
├── env/
│ └── keys.ts # Environment configuration
└── index.ts # MCP server entry point
🔐 User Identification System
hive-tools includes a comprehensive user identification system that enables subscription management and usage tracking:
Features
- Local-First Architecture: User data is stored locally for privacy and offline resilience
- Multi-Device Support: Users can register multiple devices under a single account
- Adaptive Verification: Optimizes cloud API calls based on subscription tier
- Usage Tracking: Monitors conversation counts with tier-based limits
- Cloud Synchronization: Syncs usage data across devices via Cloudflare Workers
Components
Client-Side Implementation
- userManager.ts: Handles user creation, device registration, and usage tracking
- cloudSync.ts: Manages communication with Cloudflare Workers API
Cloudflare Workers Backend
- API endpoints for user verification, usage synchronization, and checkout
- D1 database for storing user data in the cloud
- Integration with Lemon Squeezy for subscription management
Deployment
Before deploying the Cloudflare Workers backend, you'll need to set up a Cloudflare account. See our Cloudflare Setup Guide for detailed instructions.
Once you have a Cloudflare account, you can deploy using our deployment script:
# Navigate to the cloudflare directory
cd src/cloudflare
# Run the deployment script
node deploy.js
The script will guide you through:
- Logging in to Cloudflare
- Creating a D1 database
- Setting up API keys
- Deploying the worker
For more details on the user identification system, see the Monetization Strategy document.
📊 Consensus Pipeline Analysis
hive-tools includes comprehensive analysis tools to help you understand and optimize the consensus pipeline. These tools analyze the SQLite database to provide insights into model performance, content transformations, and pipeline efficiency.
Available Reports and Tools
- Consensus Pipeline Report - Detailed statistical analysis of the pipeline
- Model Contribution Analysis - How different models contribute at each stage
- Interactive Pipeline Visualization - Visual exploration of pipeline data
- Visualization Guide - Guide to using the interactive visualization
- Use Case Guide - Practical applications and optimization strategies
Key Insights
Our analysis has revealed several important insights about the consensus pipeline:
Pipeline Structure: The 4-stage pipeline (Generator → Refiner → Validator → Curator) shows a pattern of content expansion followed by refinement.
Model Performance:
- Generator: gpt-3.5-turbo is most commonly used
- Refiner: gpt-4-turbo and grok-3-beta provide significant content expansion
- Validator: gpt-4-turbo, grok-1, and gemini-2.0-flash make important corrections
- Curator: gpt-3.5-turbo and claude-3-haiku provide final formatting
Efficient Combinations:
- Fastest: gemini-pro → gpt-4 → grok-1 → gemini-pro (5.96s avg)
- Most common: gpt-3.5-turbo → gpt-4-turbo → gpt-4-turbo → gpt-3.5-turbo (57.00s avg)
For more detailed analysis and optimization recommendations, see the reports directory.
Generating Your Own Analysis
You can generate updated reports with the latest data using these scripts:
# Generate comprehensive pipeline report
node consensus-pipeline-report.js
# Analyze model contributions
node model-contribution-analysis.js
# Create interactive visualization
node consensus-pipeline-visualization.js
These tools help you understand how the consensus pipeline works and how to optimize it for your specific needs.
📬 Contact Us
We'd love to hear from you! Reach out to us with any questions, feedback, or partnership opportunities:
- General Inquiries: hello@hivetechs.io
- Technical Support: support@hivetechs.io
- Information Requests: info@hivetechs.io
- Phone: (813) 400-0871
Business Address
HiveTechs Collective LLC
7901 4th St N STE 300
St. Petersburg, FL 33702
🌐 Learn More
Visit hivetechs.io to learn more about our revolutionary approach to AI consensus and context-aware conversations.
📜 License & Legal Notice
⚠️ PROPRIETARY SOFTWARE - COMMERCIAL LICENSING REQUIRED
hive-tools contains proprietary algorithms, trade secrets, and intellectual property owned exclusively by HiveTechs Collective LLC.
🚫 Unauthorized Use Strictly Prohibited
Without explicit commercial licensing, the following are PROHIBITED:
- Commercial use in any form or capacity
- Reverse engineering or extracting proprietary algorithms
- Creating derivative works or competing products
- Using multi-model consensus methodologies in other products
- Training AI models on proprietary outputs or methodologies
✅ Permitted Uses (Non-Commercial Only)
- Personal projects and individual learning
- Academic research with proper attribution
- Educational use by students and researchers
- 30-day commercial evaluation period
💼 Commercial Licensing Required
For any commercial use, enterprise deployment, or production environment:
- Licensing Contact: licensing@hivetechs.io
- Pricing Plans: https://hivetechs.io/pricing
- Enterprise Sales: enterprise@hivetechs.io
⚖️ Legal Protection
This software is protected by:
- U.S. and international copyright law
- Patent applications and trade secret protections
- Proprietary consensus algorithms and AI methodologies
- Advanced multi-model optimization techniques
🔒 Enforcement
Unauthorized commercial use will result in:
- Immediate legal action for damages and injunctive relief
- Recovery of attorney fees and litigation costs
- Potential criminal prosecution under applicable law
For complete license terms, see the LICENSE file.
📞 Contact
- Legal/Licensing: legal@hivetechs.io
- Technical Support: support@hivetechs.io
- General Inquiries: hello@hivetechs.io
- Website: https://hivetechs.io
Copyright © 2025 HiveTechs Collective LLC. All rights reserved.
hive-tools Professional Software License v2.0 (Enhanced Protection)