Package Exports
- @fr3k/666fr3k
- @fr3k/666fr3k/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@fr3k/666fr3k) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
666FR3K v2.0
AI-Powered Auto-Improvement System
A revolutionary auto-improvement loop that monitors agent conversations, extracts insights, and automatically integrates improvements into workflows and codebases. Features continuous speech recognition & synthesis, real-time monitoring, and dynamic audio generation that never repeats.
NEW in v2.0: Auto-Improvement Loop
Revolutionary Features:
- Agent Conversation Monitoring: Real-time monitoring of all agent communications
- AI-Powered Insight Extraction: Automatically identifies patterns, improvements, and optimizations
- Automatic Workflow Integration: Insights are integrated into CLAUDE.md and workflow files
- Dynamic Codebase Updates: Auto-applies code improvements based on discovered patterns
- Evolving Audio: Generates unique audio updates that never repeat (hash-verified)
- Self-Adapting Knowledge Base: Continuously learns and improves from conversations
- Real-Time Web Dashboard: Monitor system status, insights, and metrics live
- Continuous Evolution: Runs indefinitely, constantly improving the system
Core Features
- Text-to-Speech: Convert text to natural speech using gTTS (Google Text-to-Speech)
- Speech-to-Text: Real-time transcription using Google Speech Recognition
- Continuous Loop: Automated TTS→Audio→STT testing pipeline
- Auto-Listener: Continuous microphone monitoring with voice command support
- Multi-Agent System: 3 AI agents that listen to each other and discuss dev/debug/test topics
- Web UI: Responsive dashboard with live monitoring
- Advanced Testing: Complex vocabulary and technical terminology tests
- Analytics: Performance metrics and accuracy tracking
- Fast: 1.4-1.6x speed playback for efficient testing
Installation
Quick Start (npx)
npx 666fr3k --help

Global Installation

npm install -g 666fr3k

Local Installation

npm install 666fr3k

Prerequisites
Node.js Dependencies
All Node.js dependencies are installed automatically with the package.
Python Dependencies
Install Python dependencies:
# Automatic installation
npx 666fr3k install-deps
# Or manual installation
pip3 install gtts SpeechRecognition pyaudio

System Requirements
- Node.js: v16.0.0 or higher
- Python: 3.7 or higher
- Audio: ffmpeg, ffplay (for audio playback)
- Microphone: Required for STT features
Install System Audio Tools
Ubuntu/Debian:
sudo apt-get install ffmpeg portaudio19-dev python3-pyaudio

macOS:

brew install ffmpeg portaudio

Windows:

choco install ffmpeg

Quick Start
NEW: Auto-Improvement Loop
The flagship feature - run the complete auto-improvement system:
# Run the FULL auto-improvement loop (recommended)
npx 666fr3k auto-improve
# Run with specific mode
npx 666fr3k auto-improve --mode advanced
# Run simple mode (Python agents only)
npx 666fr3k auto-improve --mode simple --iterations 5
# Start with web dashboard
npx 666fr3k web &
npx 666fr3k auto-improve
# Then open http://localhost:3666

What it does:
- Monitors all agent conversations in real-time
- Extracts actionable insights using AI analysis
- Automatically integrates improvements into:
- Workflow files (CLAUDE.md)
- Codebase (relevant source files)
- Knowledge base (auto-patterns directory)
- MCP tool configurations
- Generates unique audio updates (never repeats)
- Tracks all changes in auto-improvement-state.json
Features:
- Runs continuously (Ctrl+C to stop)
- Saves state every iteration
- Auto-resumes from saved state (see the persistence sketch after this list)
- WebSocket-based live monitoring
- Hash-verified unique audio generation
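The loop persists its progress in auto-improvement-state.json so that a stopped run can pick up where it left off. A minimal Node.js sketch of that save/resume pattern follows; the actual fields the package stores are not documented here, so iteration and insights are assumed names.

const fs = require('fs');

const STATE_FILE = 'auto-improvement-state.json';

// Load previous state if present, otherwise start fresh.
// Field names (iteration, insights) are illustrative assumptions.
function loadState() {
  if (fs.existsSync(STATE_FILE)) {
    return JSON.parse(fs.readFileSync(STATE_FILE, 'utf8'));
  }
  return { iteration: 0, insights: [] };
}

// Persist state after every iteration so an interrupt never loses progress.
function saveState(state) {
  fs.writeFileSync(STATE_FILE, JSON.stringify(state, null, 2));
}

const state = loadState();
state.iteration += 1;
saveState(state);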
CLI Commands
1. Auto-Listener (Continuous Speech Recognition)
Listen continuously for speech and transcribe in real-time:
# Basic listening (text output only)
npx 666fr3k listen
# With voice responses
npx 666fr3k listen --speak
# Verbose mode
npx 666fr3k listen --speak --verbose

Voice Commands:
- "stop listening" - Exit the listener
- "status" - Show session statistics
- "help" - Show available commands
2. TTS→STT Loop Testing
Run continuous loop tests to verify the pipeline:
# Run 5 cycles at 1.6x speed (default)
npx 666fr3k loop
# Custom cycles and speed
npx 666fr3k loop --cycles 10 --speed 1.8
# Run 20 cycles at normal speed
npx 666fr3k loop -c 20 -s 1.0

3. Verification Tests
Run comprehensive TTS and STT tests:
# Basic tests
npx 666fr3k test
# Advanced tests with complex vocabulary
npx 666fr3k test --advanced
# Custom loop count
npx 666fr3k test --loop 10

4. Multi-Agent Conversation (NEW!)
Run 3 AI agents that listen to each other and discuss development topics:
# Run 5 rounds of conversation (default)
npx 666fr3k agents
# Custom rounds and speed
npx 666fr3k agents --rounds 10 --speed 1.6
# Short form
npx 666fr3k agents -r 3 -s 1.4

Features:
- 3 agents with different roles (Lead Developer, QA Engineer, DevOps Specialist)
- Agents listen to each other's speech
- Discuss real development/debug/test topics
- Conversational turn-taking (see the sketch after this list)
- 1.4-1.6x speed for efficient conversations
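The conversation follows a simple round-robin turn order across the three roles. A rough Node.js sketch of that pattern is below; it is illustrative only, since the real agents exchange turns through TTS and STT rather than by passing strings.

// Round-robin turn-taking between three agent roles (illustrative sketch).
const roles = ['Lead Developer', 'QA Engineer', 'DevOps Specialist'];
const rounds = 3;

let lastMessage = 'Kick-off: what should we improve first?';

for (let round = 1; round <= rounds; round++) {
  for (const role of roles) {
    // The real system would synthesize this reply as speech and let the
    // next agent transcribe it; here the text is passed along directly.
    const reply = `${role} responding to: "${lastMessage}"`;
    console.log(`[round ${round}] ${reply}`);
    lastMessage = reply;
  }
}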
5. Web UI
Launch the web interface:
# Start on default port 3666
npx 666fr3k web
# Custom port
npx 666fr3k web --port 8080

Then open: http://localhost:3666
Web UI Features
The web interface provides:
- TTS Panel: Convert text to speech with speed control
- STT Panel: Live microphone transcription
- Loop Testing: Run automated TTS→STT cycles
- Statistics Dashboard: Track usage and accuracy metrics
- Real-time Updates: WebSocket-based live transcription (see the client sketch after this list)
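Live transcription reaches the browser over WebSockets. A minimal Node.js client sketch is shown below; the ws://localhost:3666 URL and the raw message handling are assumptions, since the exact protocol is not documented in this README. It uses the npm ws package.

// Minimal WebSocket listener for live updates from the web server.
// URL and message shape are assumptions, not a documented API.
const WebSocket = require('ws'); // npm install ws

const socket = new WebSocket('ws://localhost:3666');

socket.on('open', () => console.log('Connected to the 666fr3k web server'));

socket.on('message', (data) => {
  // Log messages as-is; adjust parsing once the real format is known.
  console.log('Update:', data.toString());
});

socket.on('error', (err) => console.error('WebSocket error:', err.message));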
API Usage
JavaScript/Node.js
const { spawn } = require('child_process');
// Run a TTS→STT loop test (3 cycles)
const tts = spawn('npx', ['666fr3k', 'loop', '--cycles', '3']);
tts.stdout.on('data', (data) => {
console.log(data.toString());
});
// Start auto-listener
const listener = spawn('npx', ['666fr3k', 'listen', '--speak']);

Python Integration
import subprocess
# Run loop test
result = subprocess.run(['npx', '666fr3k', 'loop', '--cycles', '5'])
# Start listener
listener = subprocess.Popen(['npx', '666fr3k', 'listen'])

HTTP API (Web Server)
# Start server
npx 666fr3k web --port 3666

TTS Endpoint:
curl -X POST http://localhost:3666/api/tts \
-H "Content-Type: application/json" \
-d '{"text": "Hello world", "speed": 1.6}'

STT Endpoint:
curl -X POST http://localhost:3666/api/stt \
-H "Content-Type: application/json" \
-d '{"audioData": "<base64_audio>"}'
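The same endpoints can be called from Node.js. Below is a small sketch using the built-in fetch (Node 18+); the request body mirrors the curl example above, and the response handling stays generic because the response format is not documented here.

// POST text to the TTS endpoint and inspect whatever comes back.
async function speak(text, speed = 1.6) {
  const res = await fetch('http://localhost:3666/api/tts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text, speed }),
  });
  if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
  // The response format is not documented; log content type and size.
  const body = await res.arrayBuffer();
  console.log(res.headers.get('content-type'), body.byteLength, 'bytes');
}

speak('Hello world').catch(console.error);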
Testing Examples

Basic Loop Test
npx 666fr3k loop --cycles 5

Output:
CYCLE #1
Input: "Hello, this is cycle number one..."
Playing audio at 1.6x speed...
Transcribing...
Output: "hello this is cycle number one"
Accuracy: 80.0%
SUCCESS

Advanced Vocabulary Test
npx 666fr3k test --advanced

Tests complex terminology including:
- Technology & AI concepts
- Medical & scientific terms
- Legal & constitutional language
- Quantum physics & mathematics
- Economic & financial theory
- Philosophical concepts
- Cybersecurity terminology
Continuous Listening Session
npx 666fr3k listen --speak

666FR3K AUTO-LISTENER ACTIVATED
I'm now listening continuously...
Say 'stop listening' to exit

[12:34:56] Heard: "Hello, can you hear me?"
You said: Hello, can you hear me?
Listening...

Performance Metrics
Test Results
- Basic Loops: 80-100% accuracy on simple phrases
- Complex Vocabulary: 16-72% accuracy on technical terms
- Speed: 1.6x playback maintains 70%+ accuracy
- Latency: <2 seconds per TTS→STT cycle
Tested Domains
- General conversation: 90%+ accuracy
- Technology terminology: 80%+ accuracy
- Economic concepts: 72% accuracy
- Medical terms: 37% accuracy
- Quantum physics: 17% accuracy
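The accuracy figures above compare the spoken input with the returned transcript. A word-overlap score such as the sketch below produces numbers in this range; it is illustrative and not necessarily the exact metric the package computes.

// Rough word-overlap accuracy between the input text and the transcript.
function accuracy(input, transcript) {
  const norm = (s) => s.toLowerCase().replace(/[^a-z0-9\s]/g, '').split(/\s+/).filter(Boolean);
  const expected = norm(input);
  const heard = new Set(norm(transcript));
  const matched = expected.filter((word) => heard.has(word)).length;
  return expected.length ? (matched / expected.length) * 100 : 0;
}

// Technical terms that get split or misheard drag the score down.
console.log(accuracy('quantum entanglement and decoherence',
                     'quantum in tanglement and de coherence').toFixed(1)); // 50.0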
Configuration
Environment Variables
# Set custom port for web server
export PORT=8080
# Python interpreter path
export PYTHON_BIN=/usr/bin/python3

Package Configuration
Edit package.json to customize:
{
"scripts": {
"start": "node index.js",
"web": "node server.js",
"test": "node test.js"
}
}

Troubleshooting
Microphone Not Working
# Test microphone access
python3 -c "import speech_recognition as sr; print(sr.Microphone.list_microphone_names())"

Audio Playback Issues
# Check ffplay installation
ffplay -version
# Test audio output
ffplay -nodisp test.mp3

Python Dependencies
# Reinstall dependencies
pip3 install --upgrade gtts SpeechRecognition pyaudio
# On macOS, if pyaudio fails:
brew install portaudio
pip3 install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio

Google API Rate Limits
If you encounter API timeouts:
- Add delays between requests (see the sketch below)
- Consider using offline STT alternatives
- Check network connectivity
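For the first point, a generic way to space out repeated runs when scripting the CLI (this is not a built-in option) is to wrap each invocation with a pause:

const { execFileSync } = require('child_process');

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run several short loop tests with a gap between them to stay
// well clear of Google's rate limits.
(async () => {
  for (let i = 0; i < 3; i++) {
    execFileSync('npx', ['666fr3k', 'loop', '--cycles', '1'], { stdio: 'inherit' });
    await sleep(5000); // 5-second pause between requests
  }
})();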
Command Reference
| Command | Description | Options |
|---|---|---|
| auto-improve | NEW: Auto-improvement loop | --mode advanced\|simple, --iterations N, --rounds N |
| listen | Start continuous listening | --speak, --verbose |
| loop | Run TTS→STT loop test | --cycles N, --speed X |
| test | Run verification tests | --advanced, --loop N |
| agents | 3-agent conversation system | --rounds N, --speed X |
| web | Start web UI server with monitoring | --port N |
| install-deps | Install Python dependencies | - |
Auto-Improvement Loop Options
# Advanced mode (Full JS monitoring) - DEFAULT
npx 666fr3k auto-improve --mode advanced
# Simple mode (Python agents only)
npx 666fr3k auto-improve --mode simple --iterations 3 --rounds 2

Advanced Mode Features:
- Real-time agent conversation monitoring
- MCP server integration (hey-fr3k, fr3k-think, md-mcp)
- Automatic code and workflow updates
- Hash-verified unique audio generation (see the sketch after this list)
- Self-adapting knowledge base
- Runs indefinitely until stopped
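Hash verification is conceptually straightforward: hash each generated update and refuse to reuse text that has been produced before. A small illustrative sketch follows; the package's actual hashing scheme is not documented here.

const crypto = require('crypto');

const seenHashes = new Set();

// Returns true only if this exact update text has never been produced before.
function isUniqueUpdate(text) {
  const hash = crypto.createHash('sha256').update(text).digest('hex');
  if (seenHashes.has(hash)) return false;
  seenHashes.add(hash);
  return true;
}

console.log(isUniqueUpdate('Iteration 7: refactored the listener loop.')); // true
console.log(isUniqueUpdate('Iteration 7: refactored the listener loop.')); // false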
Simple Mode Features:
- Python-based multi-agent conversations
- Fixed iteration count
- Lighter resource usage
- Good for testing
Web UI Screenshots
Main Dashboard:
- TTS Panel with text input and speed control
- STT Panel with live transcription
- Loop testing interface
- Real-time statistics dashboard
Contributing
Contributions welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
License
MIT License - see LICENSE file for details
Author
fr3k
Acknowledgments
- gTTS: Google Text-to-Speech library
- SpeechRecognition: Python speech recognition library
- Google Cloud: Speech API
- Express.js: Web server framework
Roadmap
v2.0 (Complete)
- Auto-improvement loop
- Agent conversation monitoring
- AI-powered insight extraction
- Automatic workflow integration
- Dynamic audio generation (never repeats)
- Real-time web dashboard
- MCP server integration
v2.1+ Planned
- Offline STT support
- Multiple TTS voice options
- Docker containerization
- Multi-language support
- Custom wake words
- Voice activity detection improvements
- Machine learning model fine-tuning
- Multi-modal insight extraction (code + audio + visual)
- Distributed agent networks
- Mobile app
Quick Examples
Example 1: Quick Test
npx 666fr3k loop --cycles 3

Example 2: Voice Assistant Mode

npx 666fr3k listen --speak

Say "Hello" and hear the system respond!

Example 3: Web Interface

npx 666fr3k web
# Open http://localhost:3666

Example 4: Advanced Testing

npx 666fr3k test --advanced

Tests complex vocabulary across 7 domains!
Architecture
Auto-Improvement Loop Flow
The pipeline runs top to bottom, then repeats:

1. Agent Conversations: development, QA, and DevOps discussions
2. Real-Time Monitoring: file watchers, process output capture, WebSocket streams, log aggregation
3. AI-Powered Insight Extraction: pattern recognition, fr3k-think MCP analysis, confidence scoring, priority assignment
4. Automatic Integration: workflow (CLAUDE.md), codebase auto-fixes, knowledge-base patterns
5. Dynamic Audio Generation: hash verification, unique content synthesis, never repeats, reflects latest changes
6. State Persistence: auto-improvement-state.json, progress tracking, resume capability, metrics history

Use Cases
- Continuous Development Improvement: Monitor development team conversations and automatically improve workflows
- AI Agent Research: Study multi-agent communication patterns and emergent behaviors
- Automated Documentation: Extract insights from conversations and auto-generate documentation
- Code Quality Enhancement: Automatically identify and apply code improvements
- Knowledge Base Growth: Build self-expanding knowledge repositories
- Audio-First Workflows: Generate unique audio updates for accessibility and monitoring
Made by fr3k
Version: 2.0.0 Last Updated: 2025-11-02 License: MIT