JSPM

@fr3k/666fr3k

2.1.1
  • License MIT

Infinite Evolution AI System: 28+ dynamic agents, continuous learning, self-improving workflows, and AI DEV SUPERPOWERS - Runs forever, creates new agents automatically

Package Exports

  • @fr3k/666fr3k
  • @fr3k/666fr3k/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@fr3k/666fr3k) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
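
For reference, a minimal "exports" field for this package might look like the sketch below. This is an illustration of the general package.json mechanism only, not the author's actual configuration; the subpaths simply mirror the auto-detected exports listed above.

{
  "exports": {
    ".": "./index.js",
    "./index.js": "./index.js"
  }
}

A JSPM override supplies the same kind of field externally when the published package itself cannot be changed.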

Readme

666FR3K v2.0

🔥 AI-Powered Auto-Improvement System 🔥

A revolutionary auto-improvement loop that monitors agent conversations, extracts insights, and automatically integrates improvements into workflows and codebases. Features continuous speech recognition & synthesis, real-time monitoring, and dynamic audio generation that never repeats.


🚀 NEW in v2.0: Auto-Improvement Loop

Revolutionary Features:

  • 🧠 Agent Conversation Monitoring: Real-time monitoring of all agent communications
  • 💡 AI-Powered Insight Extraction: Automatically identifies patterns, improvements, and optimizations
  • 🔄 Automatic Workflow Integration: Insights are integrated into CLAUDE.md and workflow files
  • 💻 Dynamic Codebase Updates: Auto-applies code improvements based on discovered patterns
  • 🔊 Evolving Audio: Generates unique audio updates that never repeat (hash-verified)
  • 📚 Self-Adapting Knowledge Base: Continuously learns and improves from conversations
  • 🌐 Real-Time Web Dashboard: Monitor system status, insights, and metrics live
  • ♾️ Continuous Evolution: Runs indefinitely, constantly improving the system

🎯 Core Features

  • 🔊 Text-to-Speech: Convert text to natural speech using gTTS (Google Text-to-Speech)
  • 🎤 Speech-to-Text: Real-time transcription using Google Speech Recognition
  • 🔄 Continuous Loop: Automated TTS→Audio→STT testing pipeline
  • 🎧 Auto-Listener: Continuous microphone monitoring with voice command support
  • 🎭 Multi-Agent System: 3 AI agents that listen to each other and discuss dev/debug/test topics
  • 🌐 Web UI: Beautiful responsive dashboard with live monitoring
  • 🧪 Advanced Testing: Complex vocabulary and technical terminology tests
  • 📊 Analytics: Performance metrics and accuracy tracking
  • ⚡ Fast: 1.4-1.6x speed playback for efficient testing

📦 Installation

Quick Start (npx)

npx 666fr3k --help

Global Installation

npm install -g 666fr3k

Local Installation

npm install 666fr3k

🔧 Prerequisites

Node.js Dependencies

All Node.js dependencies are installed automatically with the package.

Python Dependencies

Install Python dependencies:

# Automatic installation
npx 666fr3k install-deps

# Or manual installation
pip3 install gtts SpeechRecognition pyaudio

System Requirements

  • Node.js: v16.0.0 or higher
  • Python: 3.7 or higher
  • Audio: ffmpeg, ffplay (for audio playback)
  • Microphone: Required for STT features

Install System Audio Tools

Ubuntu/Debian:

sudo apt-get install ffmpeg portaudio19-dev python3-pyaudio

macOS:

brew install ffmpeg portaudio

Windows:

choco install ffmpeg

🚀 Quick Start

NEW: Auto-Improvement Loop

The flagship feature - run the complete auto-improvement system:

# Run the FULL auto-improvement loop (recommended)
npx 666fr3k auto-improve

# Run with specific mode
npx 666fr3k auto-improve --mode advanced

# Run simple mode (Python agents only)
npx 666fr3k auto-improve --mode simple --iterations 5

# Start with web dashboard
npx 666fr3k web &
npx 666fr3k auto-improve
# Then open http://localhost:3666

What it does:

  1. Monitors all agent conversations in real-time
  2. Extracts actionable insights using AI analysis
  3. Automatically integrates improvements into:
    • Workflow files (CLAUDE.md)
    • Codebase (relevant source files)
    • Knowledge base (auto-patterns directory)
    • MCP tool configurations
  4. Generates unique audio updates (never repeats)
  5. Tracks all changes in auto-improvement-state.json

Features:

  • Runs continuously (Ctrl+C to stop)
  • Saves state every iteration
  • Auto-resumes from saved state (see the state-file sketch after this list)
  • WebSocket-based live monitoring
  • Hash-verified unique audio generation
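
The saved state referenced above lives in auto-improvement-state.json. Its exact schema is not documented in this README, so the following Node.js sketch only illustrates the save/resume idea with hypothetical field names:

const fs = require('fs');

const STATE_FILE = 'auto-improvement-state.json';

// Load previous progress if a state file exists; otherwise start fresh.
// `iteration` and `insights` are placeholder field names, not the real schema.
function loadState() {
  if (fs.existsSync(STATE_FILE)) {
    return JSON.parse(fs.readFileSync(STATE_FILE, 'utf8'));
  }
  return { iteration: 0, insights: [] };
}

// Persist progress after every iteration so an interrupted run can resume.
function saveState(state) {
  fs.writeFileSync(STATE_FILE, JSON.stringify(state, null, 2));
}

const state = loadState();
state.iteration += 1;
saveState(state);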

CLI Commands

1. Auto-Listener (Continuous Speech Recognition)

Listen continuously for speech and transcribe in real-time:

# Basic listening (text output only)
npx 666fr3k listen

# With voice responses
npx 666fr3k listen --speak

# Verbose mode
npx 666fr3k listen --speak --verbose

Voice Commands:

  • "stop listening" - Exit the listener
  • "status" - Show session statistics
  • "help" - Show available commands

2. TTS→STT Loop Testing

Run continuous loop tests to verify the pipeline:

# Run 5 cycles at 1.6x speed (default)
npx 666fr3k loop

# Custom cycles and speed
npx 666fr3k loop --cycles 10 --speed 1.8

# Run 20 cycles at normal speed
npx 666fr3k loop -c 20 -s 1.0

3. Verification Tests

Run comprehensive TTS and STT tests:

# Basic tests
npx 666fr3k test

# Advanced tests with complex vocabulary
npx 666fr3k test --advanced

# Custom loop count
npx 666fr3k test --loop 10

4. Multi-Agent Conversation (NEW! 🎭)

Run 3 AI agents that listen to each other and discuss development topics:

# Run 5 rounds of conversation (default)
npx 666fr3k agents

# Custom rounds and speed
npx 666fr3k agents --rounds 10 --speed 1.6

# Short form
npx 666fr3k agents -r 3 -s 1.4

Features:

  • 3 agents with different roles (Lead Developer, QA Engineer, DevOps Specialist)
  • Agents listen to each other's speech
  • Discuss real development/debug/test topics
  • Conversational turn-taking
  • 1.4-1.6x speed for efficient conversations

5. Web UI

Launch the web interface:

# Start on default port 3666
npx 666fr3k web

# Custom port
npx 666fr3k web --port 8080

Then open: http://localhost:3666

Web UI Features

The web interface provides:

  • TTS Panel: Convert text to speech with speed control
  • STT Panel: Live microphone transcription
  • Loop Testing: Run automated TTS→STT cycles
  • Statistics Dashboard: Track usage and accuracy metrics
  • Real-time Updates: WebSocket-based live transcription
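
The live updates arrive over a WebSocket, but the endpoint path and message format are not documented here. The client sketch below is therefore hypothetical: it assumes a plain WebSocket served on the web UI port and simply logs whatever payloads arrive (using the ws package).

const WebSocket = require('ws'); // npm install ws

// Hypothetical endpoint: assumes the dashboard accepts WebSocket upgrades on its own port.
const socket = new WebSocket('ws://localhost:3666');

socket.on('open', () => console.log('Connected to the live dashboard feed'));
socket.on('message', (data) => {
  // The message shape is an assumption; log raw payloads to inspect what the server sends.
  console.log('Update:', data.toString());
});
socket.on('error', (err) => console.error('Connection failed:', err.message));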

📚 API Usage

JavaScript/Node.js

const { spawn } = require('child_process');

// Run TTS
const tts = spawn('npx', ['666fr3k', 'loop', '--cycles', '3']);
tts.stdout.on('data', (data) => {
  console.log(data.toString());
});

// Start auto-listener
const listener = spawn('npx', ['666fr3k', 'listen', '--speak']);

Python Integration

import subprocess

# Run loop test
result = subprocess.run(['npx', '666fr3k', 'loop', '--cycles', '5'])

# Start listener
listener = subprocess.Popen(['npx', '666fr3k', 'listen'])

HTTP API (Web Server)

# Start server
npx 666fr3k web --port 3666

TTS Endpoint:

curl -X POST http://localhost:3666/api/tts \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello world", "speed": 1.6}'

STT Endpoint:

curl -X POST http://localhost:3666/api/stt \
  -H "Content-Type: application/json" \
  -d '{"audioData": "<base64_audio>"}'

🧪 Testing Examples

Basic Loop Test

npx 666fr3k loop --cycles 5

Output:

🔄 CYCLE #1
📢 Input: "Hello, this is cycle number one..."
🔊 Playing audio at 1.6x speed... ✓
🎧 Transcribing... ✓
📝 Output: "hello this is cycle number one"
📊 Accuracy: 80.0%
✅ SUCCESS
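
The accuracy figure compares the transcript with the original prompt. The package's exact scoring formula is not shown in this README; the sketch below is one plausible word-overlap measure in the same spirit (the CLI reports 80.0% for the cycle above, so its scoring is evidently stricter or compares more text than is shown).

// Hypothetical scoring sketch: fraction of input words that appear in the transcript.
function wordAccuracy(input, output) {
  const normalize = (s) =>
    s.toLowerCase().replace(/[^a-z0-9\s]/g, '').split(/\s+/).filter(Boolean);
  const inputWords = normalize(input);
  const outputWords = new Set(normalize(output));
  const matched = inputWords.filter((w) => outputWords.has(w)).length;
  return (100 * matched) / inputWords.length;
}

console.log(wordAccuracy('Hello, this is cycle number one.', 'hello this is cycle number one').toFixed(1) + '%');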

Advanced Vocabulary Test

npx 666fr3k test --advanced

Tests complex terminology including:

  • Technology & AI concepts
  • Medical & scientific terms
  • Legal & constitutional language
  • Quantum physics & mathematics
  • Economic & financial theory
  • Philosophical concepts
  • Cybersecurity terminology

Continuous Listening Session

npx 666fr3k listen --speak
🎀 666FR3K AUTO-LISTENER ACTIVATED
📢 I'm now listening continuously...
💡 Say 'stop listening' to exit

[12:34:56] 🎤 Heard: "Hello, can you hear me?"
💬 You said: Hello, can you hear me?

🎧 Listening...

📊 Performance Metrics

Test Results

  • Basic Loops: 80-100% accuracy on simple phrases
  • Complex Vocabulary: 16-72% accuracy on technical terms
  • Speed: 1.6x playback maintains 70%+ accuracy
  • Latency: <2 seconds per TTS→STT cycle

Tested Domains

  ✅ General conversation (90%+ accuracy)
  ✅ Technology terminology (80%+ accuracy)
  ✅ Economic concepts (72% accuracy)
  ⚠️ Medical terms (37% accuracy)
  ⚠️ Quantum physics (17% accuracy)

🔧 Configuration

Environment Variables

# Set custom port for web server
export PORT=8080

# Python interpreter path
export PYTHON_BIN=/usr/bin/python3
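
Presumably the server and the Python bridge read these variables at startup along the lines of the sketch below; this is an assumption about typical usage rather than the package's actual code (the defaults shown match the documented port 3666 and python3).

// Sketch of how the environment variables above would typically be consumed.
const port = parseInt(process.env.PORT || '3666', 10); // web UI port
const pythonBin = process.env.PYTHON_BIN || 'python3'; // interpreter for the speech scripts

console.log(`Serving on port ${port}, using ${pythonBin} for TTS/STT`);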

Package Configuration

Edit package.json to customize:

{
  "scripts": {
    "start": "node index.js",
    "web": "node server.js",
    "test": "node test.js"
  }
}

πŸ› Troubleshooting

Microphone Not Working

# Test microphone access
python3 -c "import speech_recognition as sr; print(sr.Microphone.list_microphone_names())"

Audio Playback Issues

# Check ffplay installation
ffplay -version

# Test audio output
ffplay -nodisp test.mp3

Python Dependencies

# Reinstall dependencies
pip3 install --upgrade gtts SpeechRecognition pyaudio

# On macOS, if pyaudio fails:
brew install portaudio
pip3 install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio

Google API Rate Limits

If you encounter API timeouts:

  • Add delays between requests (see the sketch below)
  • Consider using offline STT alternatives
  • Check network connectivity
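
A simple way to space out requests from a Node.js wrapper is to pause between spawned runs. Here is a minimal sketch; the 5-second delay is an arbitrary choice, not a documented limit.

const { spawnSync } = require('child_process');

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run a few short loop tests with a pause between them to stay under rate limits.
(async () => {
  for (let i = 0; i < 3; i++) {
    spawnSync('npx', ['666fr3k', 'loop', '--cycles', '1'], { stdio: 'inherit' });
    await sleep(5000);
  }
})();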

📖 Command Reference

Command        Description                           Options
auto-improve   🚀 NEW: Auto-improvement loop         --mode advanced|simple, --iterations N, --rounds N
listen         Start continuous listening            --speak, --verbose
loop           Run TTS→STT loop test                 --cycles N, --speed X
test           Run verification tests                --advanced, --loop N
agents         3-agent conversation system           --rounds N, --speed X
web            Start web UI server with monitoring   --port N
install-deps   Install Python dependencies           -

Auto-Improvement Loop Options

# Advanced mode (Full JS monitoring) - DEFAULT
npx 666fr3k auto-improve --mode advanced

# Simple mode (Python agents only)
npx 666fr3k auto-improve --mode simple --iterations 3 --rounds 2

Advanced Mode Features:

  • Real-time agent conversation monitoring
  • MCP server integration (hey-fr3k, fr3k-think, md-mcp)
  • Automatic code and workflow updates
  • Hash-verified unique audio generation
  • Self-adapting knowledge base
  • Runs indefinitely until stopped

Simple Mode Features:

  • Python-based multi-agent conversations
  • Fixed iteration count
  • Lighter resource usage
  • Good for testing

🎨 Web UI Screenshots

Main Dashboard:

  • TTS Panel with text input and speed control
  • STT Panel with live transcription
  • Loop testing interface
  • Real-time statistics dashboard

🤝 Contributing

Contributions welcome! Please follow these guidelines:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

πŸ“ License

MIT License - see LICENSE file for details

👤 Author

fr3k

πŸ™ Acknowledgments

  • gTTS: Google Text-to-Speech library
  • SpeechRecognition: Python speech recognition library
  • Google Cloud: Speech API
  • Express.js: Web server framework

📈 Roadmap

v2.0 ✅ COMPLETE

  • Auto-improvement loop
  • Agent conversation monitoring
  • AI-powered insight extraction
  • Automatic workflow integration
  • Dynamic audio generation (never repeats)
  • Real-time web dashboard
  • MCP server integration

v2.1+ Planned

  • Offline STT support
  • Multiple TTS voice options
  • Docker containerization
  • Multi-language support
  • Custom wake words
  • Voice activity detection improvements
  • Machine learning model fine-tuning
  • Multi-modal insight extraction (code + audio + visual)
  • Distributed agent networks
  • Mobile app

🔥 Quick Examples

Example 1: Quick Test

npx 666fr3k loop --cycles 3

Example 2: Voice Assistant Mode

npx 666fr3k listen --speak

Say "Hello" and hear the system respond!

Example 3: Web Interface

npx 666fr3k web
# Open http://localhost:3666

Example 4: Advanced Testing

npx 666fr3k test --advanced

Tests complex vocabulary across 7 domains!


🔥 Architecture

Auto-Improvement Loop Flow

┌────────────────────────────────────────────────────────────┐
│                    Agent Conversations                     │
│           (Development, QA, DevOps discussions)            │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│                    Real-Time Monitoring                    │
│  • File watchers           • Process output capture        │
│  • WebSocket streams       • Log aggregation               │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│               AI-Powered Insight Extraction                │
│  • Pattern recognition     • fr3k-think MCP analysis       │
│  • Confidence scoring      • Priority assignment           │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│                   Automatic Integration                    │
│  ┌──────────────┬──────────────┬──────────────┐            │
│  │   Workflow   │   Codebase   │   Knowledge  │            │
│  │  CLAUDE.md   │  Auto-fix    │  Patterns    │            │
│  └──────────────┴──────────────┴──────────────┘            │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│                  Dynamic Audio Generation                  │
│  • Hash verification       • Unique content synthesis      │
│  • Never repeats           • Reflects latest changes       │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│                     State Persistence                      │
│  auto-improvement-state.json   • Progress tracking         │
│  • Resume capability           • Metrics history           │
└────────────────────────────────────────────────────────────┘
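
Reading the diagram top to bottom, each iteration is a monitor → extract → integrate → speak → persist pass. The skeleton below restates that flow in plain Node.js; every function body is a placeholder standing in for behaviour described above, not the package's actual implementation.

// Conceptual skeleton of the auto-improvement loop (placeholder logic only).
const fs = require('fs');

async function monitorConversations() { return []; }         // collect agent output (file watchers, streams)
async function extractInsights(conversations) { return []; } // pattern recognition, scoring, prioritisation
async function integrateInsights(insights) {}                // update CLAUDE.md, code, knowledge base
async function generateAudioUpdate(insights) {}              // synthesize a unique spoken summary

async function runForever() {
  const state = { iteration: 0 };
  while (true) {
    const conversations = await monitorConversations();
    const insights = await extractInsights(conversations);
    await integrateInsights(insights);
    await generateAudioUpdate(insights);
    state.iteration += 1;
    fs.writeFileSync('auto-improvement-state.json', JSON.stringify(state, null, 2));
    await new Promise((resolve) => setTimeout(resolve, 1000)); // avoid a busy loop
  }
}

runForever();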

🎓 Use Cases

  1. Continuous Development Improvement: Monitor development team conversations and automatically improve workflows
  2. AI Agent Research: Study multi-agent communication patterns and emergent behaviors
  3. Automated Documentation: Extract insights from conversations and auto-generate documentation
  4. Code Quality Enhancement: Automatically identify and apply code improvements
  5. Knowledge Base Growth: Build self-expanding knowledge repositories
  6. Audio-First Workflows: Generate unique audio updates for accessibility and monitoring

Made with 🔥 by fr3k

Version: 2.0.0 Last Updated: 2025-11-02 License: MIT