Package Exports
- @bonginkan/maria
- @bonginkan/maria/dist/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@bonginkan/maria) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
MARIA Platform v2.2.0 "RL Evolution Dashboard Edition"
MARIA Platform v2.2.0 introduces the RL Evolution Dashboard System to its AI-powered CLI: real-time reinforcement learning monitoring, context switch performance analysis, intelligent system optimization, and an advanced terminal dashboard interface, alongside complete local LLM integration (Ollama, vLLM, LM Studio), 50+ interactive commands, and a privacy-first development environment.
MARIA CODE CLI Interface
MARIA's beautiful startup interface with automatic AI service initialization and local LLM detection
Key Features - RL Evolution Dashboard & AI Development
RL Evolution Dashboard System (v2.2.0) - NEW!
- Real-time RL Monitoring: Advanced reinforcement learning system visualization with blessed.js terminal UI
- Context Switch Analysis: Intelligent overhead calculation and performance optimization recommendations
- 7-Panel Dashboard: Performance metrics, context switches, learning progress, safety validation, alerts, and logs
- Interactive Controls: Full keyboard navigation with real-time data export and snapshot capabilities
- Safety-First RL: Built-in safety validation with rollback capabilities for all RL policy updates
Linux Command Intelligence System (v2.1.7)
- Phase 1 - Intelligence Layer: Real-time context analysis and command intent recognition (14 intent types)
- Phase 2 - Autonomous Execution: Smart command executor with 5-level risk assessment and safety validation
- Phase 3 - Advanced Operations: Infrastructure management, file system intelligence, container integration
- Phase 4 - Learning & Adaptation: Machine learning engine, workflow automation, anomaly detection
- 100% Integration Tested: All 4 phases validated on macOS with complete system compatibility
Revolutionary Dual-Layer Memory System (Enhanced v2.1.7)
- System 1 (Fast/Intuitive): Instant pattern recognition with Linux command awareness
- System 2 (Deliberate/Analytical): Deep reasoning with system administration context
- Intelligence Integration: Memory system now learns from Linux command patterns
- Autonomous Learning: Adapts to your system administration and coding patterns
- Enhanced Performance: 60% faster startup with intelligent command caching
Complete Local LLM Integration
- Automatic Detection & Setup: Auto-configures Ollama, vLLM, LM Studio
- Privacy-First Development: All processing runs locally on your machine
- Zero Cloud Dependencies: Work offline with full AI capabilities
- Multi-Model Support: Seamlessly switch between 20+ local models
- One-Command Setup: maria setup-ollama or maria setup-vllm for instant configuration
Enterprise AI Development + System Administration
- Autonomous Linux Administration: Complete system management with intelligent command execution
- Safety-First Operations: Pre-execution validation with automatic backup and rollback capabilities
- Learning System Administration: AI learns your server management patterns and optimizes workflows
- Memory-Enhanced Commands: All core commands learn from usage with Linux command intelligence
- Autonomous Coding Agent: Complete project development from requirements
- Multi-Provider Support: OpenAI, Anthropic, Google, Groq + Local LLMs
- Interactive Commands: 50+ slash commands including system administration tools
- Professional Engineering Modes: 50+ specialized AI cognitive states for development and ops
Instant Setup & Usage
npm install -g @bonginkan/maria
maria setup-ollama # Auto-install local AI
maria # Start interactive development + Linux intelligence
Linux Command Intelligence Capabilities (NEW in v2.1.7):
- ✅ Autonomous Linux Administration: Smart command execution with risk assessment
- ✅ Safety Validation: Pre-execution checks with automatic backup creation
- ✅ Learning Engine: AI learns your system administration patterns
- ✅ Workflow Automation: Complex task orchestration with cron scheduling
- ✅ Anomaly Detection: Real-time system monitoring and security analysis
- ✅ 100% Integration Tested: Complete validation across all 4 intelligence phases
Core Development Capabilities:
- ✅ Local AI Models: Complete offline development environment
- ✅ Code Generation: AI-powered development assistance with system context
- ✅ Quality Analysis: Real-time code review and optimization
- ✅ Multi-Language: Support for all major programming languages
- ✅ Enterprise Ready: Professional development workflows with system administration
Linux Command Intelligence System
Phase 1: Intelligence Layer
- Context Analysis Engine: Real-time system state monitoring with user intent recognition
- Command Knowledge Base: Comprehensive Linux command database with risk classification
- Intent Recognition: 14 command intent types (FILE_OPERATION, SERVICE_CONTROL, PERFORMANCE_OPTIMIZATION, etc.)
- Risk Assessment: 5-level classification system (SAFE → CRITICAL) for command safety
Phase 2: Autonomous Execution Framework
- Smart Command Executor: Context-aware command processing with intelligent parameter suggestion
- Safety Validator: Pre-execution validation with automatic backup creation and rollback planning
- Progress Tracking: Real-time execution monitoring with detailed progress reports
- Dry-run Mode: Safe command testing without actual system changes
Phase 3: Advanced Operations Integration
- Infrastructure Management: Service orchestration and process monitoring capabilities
- File System Intelligence: Smart file operations with metadata analysis and security checks
- Network Analysis: Comprehensive networking diagnostics and troubleshooting
- Container Integration: Docker/Podman support for containerized environment management
Phase 4: Learning & Adaptation System
- Machine Learning Engine: User behavior analysis and command sequence optimization
- Workflow Automation: Complex multi-step operations with cron scheduling and error handling
- Anomaly Detection: Real-time security and performance monitoring with intelligent alerting
- Adaptive Intelligence: Self-improving system that learns from user patterns and optimizes workflows
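As an illustrative sketch only (not verbatim MARIA output), the natural-language prompts shown later in this README map onto these phases roughly as follows; the phase annotations in the comments are explanatory assumptions:
maria
> "Analyze disk usage and clean up files"   # Phase 1: intent recognized (FILE_OPERATION) and risk assessed
> "Monitor services and restart if needed"  # Phases 2-3: safety-validated execution and service orchestration
> "Create automated backup workflow"        # Phase 4: multi-step workflow automation with cron scheduling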
Development Features
- Interactive Learning: Hands-on algorithm education with visualization
- Performance Analysis: Real-time performance metrics and optimization
- Professional Engineering: Industry-standard development practices
- Visual Progress: Beautiful CLI interface with progress tracking
- Autonomous Execution: Complete task automation from requirements
Intelligent Router - Natural Language Command System
- 5-Language Support: Native understanding in English, Japanese, Chinese, Korean, Vietnamese
- Intent Recognition: "write code" → /code automatic execution (95%+ accuracy)
- Contextual Understanding: Smart parameter extraction from natural conversation
- Learning Engine: Adapts to user patterns for personalized experience
Multi-Language Examples:
English: "write code" → /code
Japanese: "コードを書いて" → /code
Chinese: "写代码" → /code
Korean: "코드를 작성해" → /code
Vietnamese: "viết code" → /code
AI-Powered Coding Assistant
Professional development with intelligent AI assistance:
/code # Generate any code instantly with AI
/review # Professional code review & optimization
/bug # Intelligent bug detection & auto-fix
/lint # Code quality analysis & auto-correction
/test # Generate comprehensive test suites
Real Results for Engineers:
- Generates production-ready code in seconds
- Detects 40+ bug patterns with AI analysis
- Automatically fixes ESLint and TypeScript issues
- Creates test cases that actually pass
- Professional code reviews with improvement suggestions
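A minimal interactive session illustrating this workflow (the prompts are examples drawn from elsewhere in this README; generated code and review output are omitted):
maria
> /code "React component for login"            # generate the implementation
> /test "unit tests for the login component"   # create a matching test suite
> /lint                                        # analyze code quality and auto-fix issues
> /review                                      # request a professional review of the result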
Complete Local LLM Integration
Privacy-first development with local AI models:
/setup # One-command setup for Ollama, vLLM, LM Studio
/model # Switch between cloud & local models instantly
/status # Monitor local AI service health
Privacy & Performance Benefits:
- Your code never leaves your machine
- Works 100% offline with local models
- Supports 20+ local LLM models
- Auto-detects and configures local AI services
- No API keys required for local models
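A typical local-first setup, combining the CLI setup command with the in-session commands above (available models depend on what Ollama installs; no particular model is guaranteed here):
# One-time setup from your shell
maria setup-ollama      # auto-install and configure Ollama
# Then work entirely offline inside the interactive session
maria
> /status               # confirm local AI services are detected and healthy
> /model                # switch to a local model via the interactive selector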
Advanced Intelligence for Researchers
Sophisticated AI features for research & complex projects:
/mode # Access 50+ cognitive modes (Thinking, Analyzing, ...)
/memory # Intelligent context preservation across sessions
/agents # Deploy specialized AI research assistants
/paper # Transform research papers into working code
Research-Grade Features:
- 50+ internal cognitive modes for different thinking patterns
- Cross-session learning and knowledge retention
- Multi-agent orchestration for complex tasks
- Paper-to-code transformation for research implementation
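For example, a research-oriented session might combine these features as follows (the /paper query is illustrative; outputs are omitted):
maria
> /paper "ML optimization"   # turn a research paper topic into working code
> /agents                    # deploy specialized research assistants on the task
> /memory                    # inspect what the session has retained for later reuse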
Creative Tools & Documentation
Bonus features for presentations and documentation:
/image # AI image generation for presentations & documentation
/video # Create demo videos & tutorials
/avatar # Interactive ASCII avatar companion
/voice # Voice-based coding conversations
Creative Benefits:
- Generate diagrams and visuals for technical documentation
- Create demo videos for project presentations
- Interactive avatar for engaging user experiences
- Voice conversations for hands-free coding
Why Engineers & Researchers Choose MARIA
# Natural language commands for development & system administration:
"Create a React component for user authentication"
"Fix this TypeScript error in my API"
"Generate comprehensive tests for my algorithm"
"Set up Ollama with local LLM models"
"Check system performance and optimize"
"Create automated backup workflow"
"Analyze disk usage and clean up files"
"Monitor services and restart if needed"
Real developer feedback:
- "MARIA saved me 6 hours on my last research project"
- "Local LLM support means my proprietary code stays secure"
- "The cognitive modes help me think through complex algorithms"
- "Best AI coding assistant for serious development work"
Quick Start
Installation
# Install globally via npm
npm install -g @bonginkan/maria
# Verify installation
maria --version
# Output: MARIA Platform v2.2.0 "RL Evolution Dashboard Edition"
# Setup local AI models (optional)
maria setup-ollama # Install and configure Ollama
maria setup-vllm # Install and configure vLLM
# Start interactive mode with Linux Command Intelligence
maria
# Linux Command Intelligence System initializes automatically
# All system administration commands now benefit from AI intelligence
Live CLI Session Example
Terminal Output:
Initializing AI Services...
Local AI Services:
LM Studio [░░░░░░░░░░░░░░░░░░░] 0% Checking availability...
Ollama    [░░░░░░░░░░░░░░░░░░░] 0% Checking availability...
vLLM      [░░░░░░░░░░░░░░░░░░░] 0% Checking availability...
Initializing AI Services...
> _
Core Development Commands (Essential for Engineers):
/code <prompt> # AI-powered code generation with memory learning
/test <prompt> # Generate comprehensive test suites
/bug # Intelligent bug detection & auto-fix
/review # Professional code review & optimization
/paper <query> # Transform research papers into working code
Local AI Integration (Privacy-First Development):
/model # Interactive model selector (↑/↓ arrows)
/status # Check all AI service availability
maria setup-ollama # Auto-configure Ollama (CLI command)
maria setup-vllm # Auto-configure vLLM (CLI command)
Advanced Intelligence Features (For Researchers):
/mode # Access 50+ cognitive modes (Thinking, Analyzing, ...)
/memory # Dual-layer memory system status & management
/agents # Deploy specialized AI research assistants
/mcp # Model Context Protocol integration
/chain <commands> # Command chaining for complex workflows
Natural Language Support (5 Languages):
English: "write code" # → /code
Japanese: "コードを書いて" # → /code
Chinese: "写代码" # → /code
Korean: "코드를 작성해" # → /code
Vietnamese: "viết code" # → /code
Creative & Productivity Tools:
/image <prompt> # AI image generation for presentations
/video <prompt> # Create demo videos & documentation
/avatar # Interactive ASCII avatar companion
/template # Code template management
/alias # Custom command aliases
Alternative Installation Methods
# Using yarn
yarn global add @bonginkan/maria
# Using pnpm
pnpm add -g @bonginkan/maria
Usage Examples
Basic Interactive Mode
# Start MARIA interactive CLI (default command)
maria
# One-shot commands (non-interactive)
maria ask "How do I implement OAuth?"
maria code "React component for login"
maria vision image.png "Describe this diagram"
# Available slash commands in interactive mode:
> /help # Show all 40+ commands
> /code "hello world function" # AI code generation with memory
> /model # Interactive model selector
> /memory # Dual-layer memory system
> /status # System & AI service status
> /agents # Multi-agent orchestration
> /exit # Exit interactive mode
Algorithm Education Commands
# Start MARIA and use algorithm education slash commands
maria
> /sort quicksort --visualize # Interactive sorting visualization
> /learn algorithms # Complete CS curriculum
> /benchmark sorting # Performance analysis
> /algorithm complexity # Big O notation tutorials
> /code "merge sort implementation" # AI-generated algorithms
40+ Interactive Slash Commands
# All commands are slash commands within interactive mode
maria
> /help # Show all 40+ commands
> /model # Interactive AI model selection
> /code "function" # AI code generation with memory
> /test "unit tests" # Generate comprehensive tests
> /memory # Dual-layer memory system
> /agents # Multi-agent orchestration
> /paper "ML optimization" # Research paper to code
> /status # System & AI service status
> /exit # Exit MARIA
Key Features
Autonomous Coding Agent
- Complete SOW Generation: Automatic Statement of Work creation
- Visual Mode Display: Real-time progress with beautiful UI
- Active Reporting: Progress tracking and status updates
- Self-Evolution: Learning engine that improves over time
- 120+ Engineering Modes: Professional development patterns
Algorithm Education Platform
- Interactive QuickSort: Step-by-step algorithm visualization
- Performance Benchmarking: Compare algorithm efficiency
- Memory Profiling: Analyze memory usage patterns
- Educational Tools: Computer science curriculum support
- Sorting Algorithms: Complete collection with analysis
Development Tools
- AI Code Generation: Multi-language code creation
- Intelligent Assistance: Context-aware development help
- Project Analysis: Codebase understanding and insights
- Quality Assurance: Automated testing and validation
- Version Control: Git integration and workflow support
Supported Platforms
- Node.js: 18.0.0 - 22.x
- Primary OS Support: macOS, Linux (optimized)
- Secondary OS Support: Windows
- Terminals: All major terminal applications
- Shells: bash, zsh (recommended), fish, PowerShell
Documentation
Command Reference
- Interactive Mode: maria (starts directly)
- All Commands: /help within interactive mode
- Algorithm Education: /sort, /learn, /algorithm commands
- AI Development: /code, /bug, /lint, /model commands
- System Status: /status command
Examples and Tutorials
- Getting Started: Run maria and type /help
- Algorithm Learning: Use /sort quicksort --visualize for interactive tutorials
- Development Workflow: AI-assisted coding with /code commands
- Performance Analysis: Built-in benchmarking with /benchmark commands
Configuration
MARIA works out of the box with no configuration required. For advanced features:
# Start interactive mode (default)
maria
# Check system status
> /status
# Configure AI providers
> /model # Select from 22+ AI models (GPT, Claude, Gemini, Local LLMs)
# Algorithm education
> /sort quicksort --visualize # Interactive learning
Contributing
We welcome contributions to MARIA! Please check our contribution guidelines.
Development Setup
# Clone the repository
git clone https://github.com/bonginkan/maria.git
cd maria
# Install dependencies
npm install
# Build the project
npm run build
# Run locally
./bin/maria
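If you want the locally built CLI available on your PATH while developing, a common optional step is npm link; this is a standard npm workflow, not part of the official instructions above:
# Optional: expose the local build as the global `maria` command
npm link
# Remove the link when you are done
npm unlink -g @bonginkan/maria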
License
Dual-License Model: Personal Use (Free) / Enterprise (Paid)
- Personal Use: Free for individuals, students, and startups (<10 employees, <$1M ARR)
- Enterprise: Commercial license required for larger organizations
- Contact: enterprise@bonginkan.ai for enterprise licensing
See LICENSE for complete terms.
Links
- NPM Package: npmjs.com/package/@bonginkan/maria
- GitHub Repository: github.com/bonginkan/maria
- Documentation: Available via maria --help
- Support: GitHub Issues
What Makes MARIA Special
Revolutionary AI Development
- First Autonomous AI: Complete software development from requirements
- Visual Progress: Beautiful CLI with real-time feedback
- Educational Focus: Algorithm learning with interactive visualization
- Professional Quality: Industry-standard engineering practices
Cutting-Edge Technology
- Advanced AI Integration: Multiple AI model support
- Intelligent Automation: Self-learning and adaptation
- Modern CLI Experience: Beautiful, responsive interface
- Cross-Platform: Works everywhere Node.js runs
Perfect for:
- Students: Learn algorithms with interactive visualization
- Developers: Accelerate development with AI assistance
- Teams: Collaborative development with autonomous agents
- Educators: Teach computer science with hands-on tools
Experience the Algorithm Education Revolution with MARIA Platform v2.2.0
Start your journey: npm install -g @bonginkan/maria && maria