MARIA Platform v2.1.4 "ML Intelligence Edition"
MARIA Platform v2.1.4 - Machine Learning Intelligence Edition with AI Self-Improvement Cycle, 5 new ML commands (/train /eval /rlhf /self_play /auto_tune), hyperparameter optimization, synthetic data generation, plus Research Paper System, Memory Intelligence, and GraphRAG Analysis!
MARIA CODE CLI Interface
MARIA's beautiful startup interface with automatic AI service initialization and local LLM detection
Key Features - Local AI & Privacy-First Development
Revolutionary Dual-Layer Memory System (v2.0.7)
- System 1 (Fast/Intuitive): Instant pattern recognition and cache-based responses
- System 2 (Deliberate/Analytical): Deep reasoning traces and decision trees
- Context-Aware Intelligence: Every command learns from your patterns
- Personalized Experience: Adapts to your coding style and preferences
- 60% Faster Startup: Lazy loading with <50ms memory operations
Self-Questioning Internal Mode (Auto-Active)
After startup, MARIA now includes a Self-Questioning internal mode that activates automatically in the background. This mode provides the following capabilities:
- Intent inference: automatically infers the intended meaning from context, even for unclear, contradictory, or ambiguous input
- Ultrathink activation: runs multi-perspective analysis in a deep-thinking mode for complex cases
- Contradiction analysis: investigates the root causes of conflicting requirements and builds prioritization hypotheses
- Transparent reasoning: presents the inferred interpretation together with a confidence score (0.1-1.0)
- Flexible adaptation: automatically keeps answers structured so they can be revised if an inference turns out to be wrong
New in v2.1.4: Machine Learning Intelligence Edition
5 New ML Commands - AI Self-Improvement Cycle:
- /train: Complete model training and fine-tuning with progress visualization
  - Local & S3 dataset support with hyperparameter configuration
  - Real-time progress display and ASCII learning curves
  - Automatic model checkpointing and comprehensive training reports
- /eval: Model evaluation and benchmarking with multiple metrics
  - Performance analysis (accuracy, F1, BLEU, latency)
  - Confusion matrix visualization and category-wise analysis
  - Automatic improvement suggestions and detailed evaluation reports
- /rlhf: Reinforcement Learning from Human Feedback
  - Feedback data analysis and reward model training
  - Policy optimization (PPO) with improvement visualization
- /self_play: AI self-dialogue for synthetic data generation
  - Task-specific dialogue generation (bug_fixing, code_review, qa_generation)
  - High-quality synthetic datasets in JSONL format
- /auto_tune: Automated hyperparameter optimization
  - Grid, random, and Bayesian search strategies
  - Parameter importance analysis and optimal configuration discovery
Complete AI Self-Improvement Pipeline:
- Auto-Training: /train learns from datasets
- Performance Evaluation: /eval measures objective performance
- Feedback Integration: /rlhf incorporates human feedback
- Data Expansion: /self_play generates additional training data
- Parameter Optimization: /auto_tune discovers optimal settings
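For orientation, here is a minimal sketch of one pass through the cycle inside the interactive CLI. The dataset paths and model names are placeholders, and the argument shapes follow the command signatures listed later in this README (/train <dataset> <model>, /eval <model> <test-data>, and so on); your actual invocations may differ.
maria
> /train ./data/train.jsonl my-base-model       # fine-tune on a local dataset (placeholder paths/names)
> /eval my-tuned-model ./data/test.jsonl        # measure accuracy, F1, BLEU, latency
> /rlhf my-tuned-model ./data/feedback.jsonl    # incorporate human feedback (reward model + PPO)
> /self_play my-tuned-model bug_fixing          # generate additional synthetic training data
> /auto_tune my-tuned-model ./data/train.jsonl  # search for better hyperparameters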
Enhanced Research & Memory Intelligence
Research Paper Generation System:
- /paper: Complete research workflow from theme to published paper
  - 6-stage research process (Theme → Literature Review → Design → Analysis → Paper)
  - Bilingual support (Japanese/English)
  - GraphRAG & Agentic RAG specialized knowledge
  - Auto-citation and reference management
  - Organized research papers folder structure
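A minimal invocation sketch; the research theme is an illustrative placeholder:
maria
> /paper "Agentic RAG for multi-hop question answering"   # runs the staged workflow from theme to finished paper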
Enhanced Memory System:
- /memory: Advanced context preservation and learning
  - Dual-layer memory architecture
  - Cross-session learning persistence
  - Project-specific context management
  - Personalized response pattern learning
Microservice Architecture:
- SlashCommandManager: Phase-based migration system
- PaperResearchService: Dedicated research workflow service
- BaseCommandService: Extensible service framework
- Full TypeScript strict mode compliance
Complete Local LLM Integration
- Automatic Detection & Setup: Auto-configures Ollama, vLLM, LM Studio
- Privacy-First Development: All processing runs locally on your machine
- Zero Cloud Dependencies: Work offline with full AI capabilities
- Multi-Model Support: Seamlessly switch between 20+ local models
- One-Command Setup: maria setup-ollama / maria setup-vllm for instant configuration
Enterprise AI Development
- Memory-Enhanced Commands: All core commands now learn from usage
- Autonomous Coding Agent: Complete project development from requirements
- Real-time Code Analysis: Live quality feedback with historical context
- Multi-Provider Support: OpenAI, Anthropic, Google, Groq + Local LLMs
- Interactive Commands: 40+ slash commands for development workflow
- Professional Engineering Modes: 50+ specialized AI cognitive states
Instant Setup & Usage
npm install -g @bonginkan/maria
maria setup-ollama # Auto-install local AI
maria # Start interactive development
Core Capabilities:
- Local AI Models: Complete offline development environment
- Code Generation: AI-powered development assistance
- Machine Learning: 5 ML commands for AI self-improvement cycle
- Quality Analysis: Real-time code review and optimization
- Multi-Language: Support for all major programming languages
- Enterprise Ready: Professional development workflows
Key Features
- Interactive Learning: Hands-on algorithm education with visualization
- Performance Analysis: Real-time performance metrics and optimization
- Professional Engineering: Industry-standard development practices
- Visual Progress: Beautiful CLI interface with progress tracking
- Autonomous Execution: Complete task automation from requirements
Intelligent Router - Natural Language Command System
- 5-Language Support: Native understanding in English, Japanese, Chinese, Korean, Vietnamese
- Intent Recognition: "write code" → /code automatic execution (95%+ accuracy)
- Contextual Understanding: Smart parameter extraction from natural conversation
- Learning Engine: Adapts to user patterns for personalized experience
Multi-Language Examples:
English: "write code" โ /code
Japanese: "ใณใผใใๆธใใฆ" โ /code
Chinese: "ๅไปฃ็ " โ /code
Korean: "์ฝ๋๋ฅผ ์์ฑํด" โ /code
Vietnamese: "viแบฟt code" โ /code
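To make the mapping concrete, here is a sketch of a natural-language request inside the interactive session; the prompt is a made-up example, and the comment reflects the intent-recognition behavior described above rather than literal CLI output:
maria
> write a function that reverses a string
# Recognized as a coding request and routed to /code, equivalent to:
> /code "write a function that reverses a string"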
AI-Powered Coding Assistant
Professional development with intelligent AI assistance:
/code # Generate any code instantly with AI
/review # Professional code review & optimization
/bug # Intelligent bug detection & auto-fix
/lint # Code quality analysis & auto-correction
/test # Generate comprehensive test suites
Real Results for Engineers:
- Generates production-ready code in seconds
- Detects 40+ bug patterns with AI analysis
- Automatically fixes ESLint and TypeScript issues
- Creates test cases that actually pass
- Professional code reviews with improvement suggestions
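For example, a typical editing loop might look like this; the prompts are placeholders and the flow simply chains the commands listed above:
maria
> /code "Express middleware that rate-limits requests per IP"   # generate the implementation
> /test "unit tests for the rate-limiting middleware"           # generate a matching test suite
> /review                                                        # request a code review of the result
> /lint                                                          # analyze quality and auto-fix style issues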
Complete Local LLM Integration
Privacy-first development with local AI models:
/setup # One-command setup for Ollama, vLLM, LM Studio
/model # Switch between cloud & local models instantly
/status # Monitor local AI service health
Privacy & Performance Benefits:
- Your code never leaves your machine
- Works 100% offline with local models
- Supports 20+ local LLM models
- Auto-detects and configures local AI services
- No API keys required for local models
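A first-time local setup might look like the following; this assumes Ollama as the local backend and uses only the commands listed above:
maria setup-ollama   # auto-install and configure Ollama (CLI command)
maria                # start interactive mode
> /status            # verify the local AI service was detected
> /model             # switch to a local model via the interactive selector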
Advanced Intelligence for Researchers
Sophisticated AI features for research & complex projects:
/mode # Access 50+ cognitive modes (Thinking, Analyzing, ...)
/memory # Intelligent context preservation across sessions
/agents # Deploy specialized AI research assistants
/paper # Transform research papers into working code
Research-Grade Features:
- 50+ internal cognitive modes for different thinking patterns
- Cross-session learning and knowledge retention
- Multi-agent orchestration for complex tasks
- Paper-to-code transformation for research implementation
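A research-oriented session might combine these as follows; the flow is a sketch built only from the commands listed above:
maria
> /mode        # switch to a cognitive mode suited to deep analysis
> /memory      # inspect context retained from earlier sessions
> /agents      # deploy specialized research assistants for sub-tasks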
Creative Tools & Documentation
Bonus features for presentations and documentation:
/image # AI image generation for presentations & documentation
/video # Create demo videos & tutorials
/avatar # Interactive ASCII avatar companion
/voice # Voice-based coding conversations
Creative Benefits:
- Generate diagrams and visuals for technical documentation
- Create demo videos for project presentations
- Interactive avatar for engaging user experiences
- Voice conversations for hands-free coding
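For instance (both prompts are illustrative placeholders):
maria
> /image "architecture diagram for the demo app"     # generate a visual for the docs
> /video "30-second walkthrough of the setup flow"   # produce a short demo clip
> /avatar                                            # launch the interactive ASCII companion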
Why Engineers & Researchers Choose MARIA
# Natural language commands for complex tasks:
"Create a React component for user authentication"
"Fix this TypeScript error in my API"
"Generate comprehensive tests for my algorithm"
"Set up Ollama with local LLM models"
"Switch to thinking mode for complex debugging"
Real developer feedback:
- "MARIA saved me 6 hours on my last research project"
- "Local LLM support means my proprietary code stays secure"
- "The cognitive modes help me think through complex algorithms"
- "Best AI coding assistant for serious development work"
Quick Start
Installation
# Install globally via npm
npm install -g @bonginkan/maria
# Verify installation
maria --version
# Output: MARIA Platform v2.1.4 "ML Intelligence Edition"
# Setup local AI models (optional)
maria setup-ollama # Install and configure Ollama
maria setup-vllm # Install and configure vLLM
# Start interactive mode with natural language
maria
Live CLI Session Example
Terminal Output:
Initializing AI Services...
Local AI Services:
LM Studio [░░░░░░░░░░░░░░░░░░░] 0% Checking availability...
Ollama [░░░░░░░░░░░░░░░░░░░] 0% Checking availability...
vLLM [░░░░░░░░░░░░░░░░░░░] 0% Checking availability...
Initializing AI Services...
> _
Core Development Commands (Essential for Engineers):
/code <prompt> # AI-powered code generation with memory learning
/test <prompt> # Generate comprehensive test suites
/bug # Intelligent bug detection & auto-fix
/review # Professional code review & optimization
/paper <query> # Transform research papers into working code
Machine Learning Commands (AI Self-Improvement):
/train <dataset> <model> # Train/fine-tune models with progress visualization
/eval <model> <test-data> # Evaluate model performance with metrics
/rlhf <model> <feedback> # Reinforcement learning from human feedback
/self_play <model> <tasks> # Generate synthetic training data via AI dialogue
/auto_tune <model> <data> # Optimize hyperparameters automatically
Local AI Integration (Privacy-First Development):
/model # Interactive model selector (↑/↓ arrows)
/status # Check all AI service availability
maria setup-ollama # Auto-configure Ollama (CLI command)
maria setup-vllm # Auto-configure vLLM (CLI command)
Advanced Intelligence Features (For Researchers):
/mode # Access 50+ cognitive modes (Thinking, Analyzing, ...)
/memory # Dual-layer memory system status & management
/agents # Deploy specialized AI research assistants
/mcp # Model Context Protocol integration
/chain <commands> # Command chaining for complex workflows
Natural Language Support (5 Languages):
English: "write code" # → /code
Japanese: "コードを書いて" # → /code
Chinese: "写代码" # → /code
Korean: "코드를 작성해" # → /code
Vietnamese: "viết code" # → /code
Creative & Productivity Tools:
/image <prompt> # AI image generation for presentations
/video <prompt> # Create demo videos & documentation
/avatar # Interactive ASCII avatar companion
/template # Code template management
/alias # Custom command aliases
Alternative Installation Methods
# Using yarn
yarn global add @bonginkan/maria
# Using pnpm
pnpm add -g @bonginkan/maria
Usage Examples
Basic Interactive Mode
# Start MARIA interactive CLI (default command)
maria
# One-shot commands (non-interactive)
maria ask "How do I implement OAuth?"
maria code "React component for login"
maria vision image.png "Describe this diagram"
# Available slash commands in interactive mode:
> /help # Show all 40+ commands
> /code "hello world function" # AI code generation with memory
> /model # Interactive model selector
> /memory # Dual-layer memory system
> /status # System & AI service status
> /agents # Multi-agent orchestration
> /exit # Exit interactive mode
Algorithm Education Commands
# Start MARIA and use algorithm education slash commands
maria
> /sort quicksort --visualize # Interactive sorting visualization
> /learn algorithms # Complete CS curriculum
> /benchmark sorting # Performance analysis
> /algorithm complexity # Big O notation tutorials
> /code "merge sort implementation" # AI-generated algorithms
40+ Interactive Slash Commands
# All commands are slash commands within interactive mode
maria
> /help # Show all 40+ commands
> /model # Interactive AI model selection
> /code "function" # AI code generation with memory
> /test "unit tests" # Generate comprehensive tests
> /memory # Dual-layer memory system
> /agents # Multi-agent orchestration
> /paper "ML optimization" # Research paper to code
> /status # System & AI service status
> /exit # Exit MARIA
Key Features
Autonomous Coding Agent
- Complete SOW Generation: Automatic Statement of Work creation
- Visual Mode Display: Real-time progress with beautiful UI
- Active Reporting: Progress tracking and status updates
- Self-Evolution: Learning engine that improves over time
- 120+ Engineering Modes: Professional development patterns
Algorithm Education Platform
- Interactive QuickSort: Step-by-step algorithm visualization
- Performance Benchmarking: Compare algorithm efficiency
- Memory Profiling: Analyze memory usage patterns
- Educational Tools: Computer science curriculum support
- Sorting Algorithms: Complete collection with analysis
Development Tools
- AI Code Generation: Multi-language code creation
- Intelligent Assistance: Context-aware development help
- Project Analysis: Codebase understanding and insights
- Quality Assurance: Automated testing and validation
- Version Control: Git integration and workflow support
Supported Platforms
- Node.js: 18.0.0 - 22.x
- Primary OS Support: macOS, Linux (optimized)
- Secondary OS Support: Windows
- Terminals: All major terminal applications
- Shells: bash, zsh (recommended), fish, PowerShell
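Before installing, you can check that your runtime falls in the supported range; the comment simply restates the versions listed above:
node --version   # expect a version between v18.0.0 and v22.x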
Documentation
Command Reference
- Interactive Mode: maria (starts directly)
- All Commands: /help within interactive mode
- Algorithm Education: /sort, /learn, /algorithm commands
- AI Development: /code, /bug, /lint, /model commands
- Machine Learning: /train, /eval, /rlhf, /self_play, /auto_tune commands
- System Status: /status command
Examples and Tutorials
- Getting Started: Run maria and type /help
- Algorithm Learning: Use /sort quicksort --visualize for interactive tutorials
- Development Workflow: AI-assisted coding with /code commands
- Performance Analysis: Built-in benchmarking with /benchmark commands
Configuration
MARIA works out of the box with no configuration required. For advanced features:
# Start interactive mode (default)
maria
# Check system status
> /status
# Configure AI providers
> /model # Select from 22+ AI models (GPT, Claude, Gemini, Local LLMs)
# Algorithm education
> /sort quicksort --visualize # Interactive learning
Contributing
We welcome contributions to MARIA! Please check our contribution guidelines.
Development Setup
# Clone the repository
git clone https://github.com/bonginkan/maria.git
cd maria
# Install dependencies
npm install
# Build the project
npm run build
# Run locally
./bin/maria
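To try the local build as a global command, one option (assuming the package's bin entry is configured as in the published package) is to link it:
npm link         # expose the local build as the global maria command
maria --version  # confirm the linked build runs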
License
MIT License: Free and open-source for all users
This project is licensed under the MIT License - see the LICENSE file for details.
Links
- NPM Package: npmjs.com/package/@bonginkan/maria
- GitHub Repository: github.com/bonginkan/maria
- Documentation: Available via maria --help
- Support: GitHub Issues
What Makes MARIA Special
Revolutionary AI Development
- First Autonomous AI: Complete software development from requirements
- Visual Progress: Beautiful CLI with real-time feedback
- Educational Focus: Algorithm learning with interactive visualization
- Professional Quality: Industry-standard engineering practices
Cutting-Edge Technology
- Advanced AI Integration: Multiple AI model support
- Intelligent Automation: Self-learning and adaptation
- Modern CLI Experience: Beautiful, responsive interface
- Cross-Platform: Works everywhere Node.js runs
Perfect for:
- Students: Learn algorithms with interactive visualization
- Developers: Accelerate development with AI assistance
- Teams: Collaborative development with autonomous agents
- Educators: Teach computer science with hands-on tools
Experience the Machine Learning Intelligence Revolution with MARIA Platform v2.1.4
Start your journey: npm install -g @bonginkan/maria && maria