JSPM

@bonginkan/maria

2.1.5

🔬 MARIA Platform v2.1.3 Research & Memory Intelligence Edition - AI-Powered Research Paper Generation (/paper) + Enhanced Memory System (/memory) + GraphRAG Analysis + Bilingual Academic Papers + Microservice Architecture + Advanced Context Preservation

Package Exports

    This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@bonginkan/maria) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

    Readme

    🤖 MARIA Platform v2.1.4 "ML Intelligence Edition"


    🤖 MARIA Platform v2.1.4 - Machine Learning Intelligence Edition with AI Self-Improvement Cycle, 5 New ML Commands (/train /eval /rlhf /self_play /auto_tune), Hyperparameter Optimization, Synthetic Data Generation, plus Research Paper System, Memory Intelligence, and GraphRAG Analysis!

    🖥️ MARIA CODE CLI Interface

    MARIA CLI Startup

    MARIA's beautiful startup interface with automatic AI service initialization and local LLM detection

    🌟 Key Features - Local AI & Privacy-First Development

    🧠 Revolutionary Dual-Layer Memory System (v2.0.7)

    • System 1 (Fast/Intuitive): Instant pattern recognition and cache-based responses
    • System 2 (Deliberate/Analytical): Deep reasoning traces and decision trees
    • Context-Aware Intelligence: Every command learns from your patterns
    • Personalized Experience: Adapts to your coding style and preferences
    • 60% Faster Startup: Lazy loading with <50ms memory operations
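
    As a rough illustration of how a two-tier memory like this can be organized, the TypeScript sketch below separates a fast lookup cache (System 1) from an append-only log of reasoning traces (System 2). All names here are hypothetical and are not MARIA's internal API.

    // Illustrative sketch of a two-tier memory store in the spirit of the
    // System 1 / System 2 split described above. All names are hypothetical
    // and do not reflect @bonginkan/maria's internal API.
    interface ReasoningTrace {
      question: string;
      steps: string[];
      conclusion: string;
      recordedAt: number;
    }

    class DualLayerMemory {
      // System 1: fast, cache-based lookups for recurring patterns.
      private fastCache = new Map<string, string>();
      // System 2: slower, append-only log of deliberate reasoning traces.
      private traces: ReasoningTrace[] = [];

      recall(pattern: string): string | undefined {
        return this.fastCache.get(pattern); // instant "intuitive" answer if seen before
      }

      deliberate(question: string, steps: string[], conclusion: string): void {
        this.traces.push({ question, steps, conclusion, recordedAt: Date.now() });
        // Promote the outcome so the next identical question hits System 1.
        this.fastCache.set(question, conclusion);
      }
    }

    Because a repeated question is answered straight from the cache, no reasoning trace has to be replayed, which is the kind of shortcut that makes sub-50ms memory operations plausible.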

    🤔 Self-Questioning Internal Mode (Auto-Active)

    A Self-Questioning internal mode has been added; it starts automatically after MARIA launches and waits in the background. This mode provides the following capabilities:

    • Context Inference: Automatically infers intent from context, even for hard-to-understand, contradictory, or ambiguous input
    • Ultrathink Activation: Switches to a deep-thinking mode and performs multi-angle analysis for complex cases
    • Contradiction Analysis: Traces conflicting requirements to their root cause and builds prioritization hypotheses
    • Transparent Reasoning: Presents each inference together with a confidence score (0.1-1.0)
    • Flexible Adaptation: Keeps responses in a structure that can be corrected if an inference turns out to be wrong
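
    The sketch below shows one possible shape for such a confidence-scored inference, matching the 0.1-1.0 scale mentioned above; the types are illustrative assumptions, not MARIA's actual data model.

    // Hypothetical shape for a self-questioning result; the confidence range
    // mirrors the 0.1-1.0 scale described above. Not MARIA's actual types.
    interface InferredIntent {
      rawInput: string;      // possibly ambiguous user request
      assumedIntent: string; // the reading the assistant settled on
      rationale: string[];   // why that reading was chosen
      confidence: number;    // 0.1 (weak guess) .. 1.0 (explicit instruction)
    }

    function presentInference(i: InferredIntent): string {
      // Surfacing the assumption and its confidence keeps the answer correctable
      // if the guess turns out to be wrong.
      return `Assuming you meant "${i.assumedIntent}" (confidence ${i.confidence.toFixed(1)}).`;
    }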

    🆕 New in v2.1.4: Machine Learning Intelligence Edition

    🤖 5 New ML Commands - AI Self-Improvement Cycle:

    • /train: Complete model training and fine-tuning with progress visualization
      • Local & S3 dataset support with hyperparameter configuration
      • Real-time progress display and ASCII learning curves
      • Automatic model checkpointing and comprehensive training reports
    • /eval: Model evaluation and benchmarking with multiple metrics
      • Performance analysis (accuracy, F1, BLEU, latency)
      • Confusion matrix visualization and category-wise analysis
      • Automatic improvement suggestions and detailed evaluation reports
    • /rlhf: Reinforcement Learning from Human Feedback
      • Feedback data analysis and reward model training
      • Policy optimization (PPO) with improvement visualization
    • /self_play: AI self-dialogue for synthetic data generation
      • Task-specific dialogue generation (bug_fixing, code_review, qa_generation)
      • High-quality synthetic datasets in JSONL format
    • /auto_tune: Automated hyperparameter optimization
      • Grid, random, and Bayesian search strategies
      • Parameter importance analysis and optimal configuration discovery
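
    To make the /auto_tune strategies concrete, the following TypeScript sketch implements the simplest of them, random search, over a tiny hypothetical search space (learning rate and batch size). It is illustrative only and says nothing about MARIA's internals.

    // Illustrative random-search sketch for hyperparameter tuning, the simplest
    // of the strategies listed for /auto_tune. Names and the scoring function
    // are hypothetical; this is not MARIA's implementation.
    type Config = { learningRate: number; batchSize: number };

    function randomSearch(
      score: (c: Config) => number, // higher is better, e.g. validation accuracy
      trials: number
    ): { best: Config; bestScore: number } {
      const lrRange = [1e-5, 1e-2];
      const batchSizes = [8, 16, 32, 64];
      let best: Config = { learningRate: lrRange[0], batchSize: batchSizes[0] };
      let bestScore = -Infinity;

      for (let t = 0; t < trials; t++) {
        const candidate: Config = {
          // sample the learning rate log-uniformly, the batch size uniformly
          learningRate: Math.exp(
            Math.log(lrRange[0]) +
              Math.random() * (Math.log(lrRange[1]) - Math.log(lrRange[0]))
          ),
          batchSize: batchSizes[Math.floor(Math.random() * batchSizes.length)],
        };
        const s = score(candidate);
        if (s > bestScore) {
          bestScore = s;
          best = candidate;
        }
      }
      return { best, bestScore };
    }

    Grid search would enumerate the same space exhaustively, while Bayesian search replaces the random sampling with a surrogate model that proposes promising configurations.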

    🔄 Complete AI Self-Improvement Pipeline:

    1. 🎯 Auto-Training: /train learns from datasets
    2. 📊 Performance Evaluation: /eval measures objective performance
    3. 🔄 Feedback Integration: /rlhf incorporates human feedback
    4. 🎮 Data Expansion: /self_play generates additional training data
    5. ⚡ Parameter Optimization: /auto_tune discovers optimal settings
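
    Conceptually the pipeline is a loop; the sketch below wires the five steps together with placeholder callbacks. None of these names belong to the package, they only make the cycle explicit.

    // Conceptual sketch of the improvement cycle as a loop. The step functions
    // are passed in as stand-ins for the five commands above; nothing here is
    // MARIA's API.
    interface CycleSteps {
      train: () => Promise<void>;                        // /train
      evaluate: () => Promise<number>;                   // /eval -> headline metric
      applyFeedback: (metric: number) => Promise<void>;  // /rlhf
      selfPlay: () => Promise<void>;                     // /self_play
      autoTune: () => Promise<void>;                     // /auto_tune
    }

    async function improvementCycle(steps: CycleSteps, rounds: number): Promise<void> {
      for (let r = 0; r < rounds; r++) {
        await steps.train();
        const metric = await steps.evaluate();
        await steps.applyFeedback(metric);
        await steps.selfPlay();
        await steps.autoTune();
      }
    }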

    🆕 Enhanced Research & Memory Intelligence

    🔬 Research Paper Generation System:

    • /paper: Complete research workflow from theme to published paper
      • 6-stage research process (Theme → Literature Review → Design → Analysis → Paper)
      • Bilingual support (Japanese/English)
      • GraphRAG & Agentic RAG specialized knowledge
      • Auto-citation and reference management
      • Organized research papers folder structure

    🧠 Enhanced Memory System:

    • /memory: Advanced context preservation and learning
      • Dual-layer memory architecture
      • Cross-session learning persistence
      • Project-specific context management
      • Personalized response pattern learning

    🏗️ Microservice Architecture:

    • SlashCommandManager: Phase-based migration system
    • PaperResearchService: Dedicated research workflow service
    • BaseCommandService: Extensible service framework
    • Full TypeScript strict mode compliance
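
    As a purely hypothetical illustration of the pattern behind these services, a command-specific service can extend a shared base class; the real interfaces in the package may look quite different.

    // Hypothetical illustration of an extensible command-service pattern like
    // the one named above (BaseCommandService / PaperResearchService). None of
    // these types come from @bonginkan/maria; they only show the general idea.
    abstract class CommandServiceSketch {
      abstract readonly name: string;                    // e.g. "/paper"
      abstract execute(args: string[]): Promise<string>; // command entry point
    }

    class PaperResearchSketch extends CommandServiceSketch {
      readonly name = "/paper";
      async execute(args: string[]): Promise<string> {
        const theme = args.join(" ");
        // A dedicated service owns the whole research workflow for its command.
        return `Starting research workflow for theme: ${theme}`;
      }
    }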

    🏠 Complete Local LLM Integration

    • Automatic Detection & Setup: Auto-configures Ollama, vLLM, LM Studio
    • Privacy-First Development: All processing runs locally on your machine
    • Zero Cloud Dependencies: Work offline with full AI capabilities
    • Multi-Model Support: Seamlessly switch between 20+ local models
    • One-Command Setup: maria setup-ollama / maria setup-vllm for instant configuration

    🤖 Enterprise AI Development

    • Memory-Enhanced Commands: All core commands now learn from usage
    • Autonomous Coding Agent: Complete project development from requirements
    • Real-time Code Analysis: Live quality feedback with historical context
    • Multi-Provider Support: OpenAI, Anthropic, Google, Groq + Local LLMs
    • Interactive Commands: 40+ slash commands for development workflow
    • Professional Engineering Modes: 50+ specialized AI cognitive states

    🚀 Instant Setup & Usage

    npm install -g @bonginkan/maria
    maria setup-ollama          # Auto-install local AI
    maria                        # Start interactive development

    Core Capabilities:

    • ✅ Local AI Models: Complete offline development environment
    • ✅ Code Generation: AI-powered development assistance
    • ✅ Machine Learning: 5 ML commands for AI self-improvement cycle
    • ✅ Quality Analysis: Real-time code review and optimization
    • ✅ Multi-Language: Support for all major programming languages
    • ✅ Enterprise Ready: Professional development workflows

    🎯 Key Features

    • Interactive Learning: Hands-on algorithm education with visualization
    • Performance Analysis: Real-time performance metrics and optimization
    • Professional Engineering: Industry-standard development practices
    • Visual Progress: Beautiful CLI interface with progress tracking
    • Autonomous Execution: Complete task automation from requirements

    🤖 Intelligent Router - Natural Language Command System

    • 🌐 5-Language Support: Native understanding in English, Japanese, Chinese, Korean, Vietnamese
    • Intent Recognition: "write code" → /code automatic execution (95%+ accuracy)
    • Contextual Understanding: Smart parameter extraction from natural conversation
    • Learning Engine: Adapts to user patterns for personalized experience

    Multi-Language Examples:

    English:    "write code"          → /code
    Japanese:   "コードを書いて"        → /code
    Chinese:    "写代码"              → /code
    Korean:     "코드를 작성해"        → /code
    Vietnamese: "viết code"           → /code
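
    One simple way such phrase-to-command routing can work is a keyword table consulted before falling back to free-form chat. The TypeScript sketch below is an assumption-laden illustration, not MARIA's router; the keywords and commands are examples only.

    // Minimal sketch of natural-language -> slash-command routing with a
    // keyword table. Purely illustrative; MARIA's actual router is not shown.
    const intentTable: Array<{ keywords: string[]; command: string }> = [
      { keywords: ["write code", "コード", "写代码", "코드", "viết code"], command: "/code" },
      { keywords: ["review", "レビュー"], command: "/review" },
      { keywords: ["test", "テスト"], command: "/test" },
    ];

    function route(input: string): string | undefined {
      const text = input.toLowerCase();
      for (const entry of intentTable) {
        if (entry.keywords.some((k) => text.includes(k.toLowerCase()))) {
          return entry.command;
        }
      }
      return undefined; // no intent matched: fall back to free-form chat
    }

    // route("コードを書いて")       -> "/code"
    // route("viết code giúp tôi")  -> "/code"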

    💻 AI-Powered Coding Assistant

    Professional development with intelligent AI assistance:

    /code     # Generate any code instantly with AI
    /review   # Professional code review & optimization
    /bug      # Intelligent bug detection & auto-fix
    /lint     # Code quality analysis & auto-correction
    /test     # Generate comprehensive test suites

    Real Results for Engineers:

    • Generates production-ready code in seconds
    • Detects 40+ bug patterns with AI analysis
    • Automatically fixes ESLint and TypeScript issues
    • Creates test cases that actually pass
    • Professional code reviews with improvement suggestions

    🏠 Complete Local LLM Integration

    Privacy-first development with local AI models:

    /setup    # One-command setup for Ollama, vLLM, LM Studio
    /model    # Switch between cloud & local models instantly
    /status   # Monitor local AI service health

    Privacy & Performance Benefits:

    • Your code never leaves your machine
    • Works 100% offline with local models
    • Supports 20+ local LLM models
    • Auto-detects and configures local AI services
    • No API keys required for local models
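
    For a sense of how local-service auto-detection can work, the sketch below probes Ollama's default local endpoint (http://localhost:11434/api/tags) and lists any installed models. This is an illustrative approach, not necessarily how MARIA implements /status or /setup.

    // One way auto-detection of a local Ollama service can work: probe Ollama's
    // default local endpoint and list installed models. Illustrative sketch only.
    async function detectOllama(baseUrl = "http://localhost:11434"): Promise<string[]> {
      try {
        const res = await fetch(`${baseUrl}/api/tags`); // Ollama's model-list endpoint
        if (!res.ok) return [];
        const data = (await res.json()) as { models?: Array<{ name: string }> };
        return (data.models ?? []).map((m) => m.name); // e.g. ["llama3:8b", ...]
      } catch {
        return []; // service not running; nothing leaves the machine either way
      }
    }

    // detectOllama().then((models) =>
    //   console.log(models.length ? `Ollama models: ${models.join(", ")}` : "Ollama not detected")
    // );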

    🧠 Advanced Intelligence for Researchers

    Sophisticated AI features for research & complex projects:

    /mode     # Access 50+ cognitive modes (✽ Thinking, ✽ Analyzing...)
    /memory   # Intelligent context preservation across sessions
    /agents   # Deploy specialized AI research assistants
    /paper    # Transform research papers into working code

    Research-Grade Features:

    • 50+ internal cognitive modes for different thinking patterns
    • Cross-session learning and knowledge retention
    • Multi-agent orchestration for complex tasks
    • Paper-to-code transformation for research implementation

    🎨 Creative Tools & Documentation

    Bonus features for presentations and documentation:

    /image    # AI image generation for presentations & documentation
    /video    # Create demo videos & tutorials
    /avatar   # Interactive ASCII avatar companion
    /voice    # Voice-based coding conversations

    Creative Benefits:

    • Generate diagrams and visuals for technical documentation
    • Create demo videos for project presentations
    • Interactive avatar for engaging user experiences
    • Voice conversations for hands-free coding

    💡 Why Engineers & Researchers Choose MARIA

    # Natural language commands for complex tasks:
    "Create a React component for user authentication"
    "Fix this TypeScript error in my API"  
    "Generate comprehensive tests for my algorithm"
    "Set up Ollama with local LLM models"
    "Switch to thinking mode for complex debugging"

    Real developer feedback:

    • "MARIA saved me 6 hours on my last research project"
    • "Local LLM support means my proprietary code stays secure"
    • "The cognitive modes help me think through complex algorithms"
    • "Best AI coding assistant for serious development work"

    Quick Start

    Installation

    # Install globally via npm
    npm install -g @bonginkan/maria
    
    # Verify installation
    maria --version
    # Output: MARIA Platform v2.1.4 "ML Intelligence Edition"
    
    # Setup local AI models (optional)
    maria setup-ollama    # Install and configure Ollama
    maria setup-vllm      # Install and configure vLLM
    
    # Start interactive mode with natural language
    maria

    🖥️ Live CLI Session Example

    MARIA CLI Startup

    Terminal Output:

    🚀 Initializing AI Services...

    Local AI Services:
    LM Studio    [○○○○○○○○○○○○○○○○○○○○] 0% ⚙️ Checking availability...
    Ollama       [○○○○○○○○○○○○○○○○○○○○] 0% ⚙️ Checking availability...
    vLLM         [○○○○○○○○○○○○○○○○○○○○] 0% ⚙️ Checking availability...

    🚀 Initializing AI Services...

    > _

    💻 Core Development Commands (Essential for Engineers):

    /code <prompt>    # AI-powered code generation with memory learning
    /test <prompt>    # Generate comprehensive test suites
    /bug              # Intelligent bug detection & auto-fix
    /review           # Professional code review & optimization
    /paper <query>    # Transform research papers into working code

    🤖 Machine Learning Commands (AI Self-Improvement):

    /train <dataset> <model>     # Train/fine-tune models with progress visualization
    /eval <model> <test-data>    # Evaluate model performance with metrics
    /rlhf <model> <feedback>     # Reinforcement learning from human feedback
    /self_play <model> <tasks>   # Generate synthetic training data via AI dialogue
    /auto_tune <model> <data>    # Optimize hyperparameters automatically

    🏠 Local AI Integration (Privacy-First Development):

    /model            # Interactive model selector (↑/↓ arrows)
    /status           # Check all AI service availability
    maria setup-ollama    # Auto-configure Ollama (CLI command)
    maria setup-vllm      # Auto-configure vLLM (CLI command)

    🧠 Advanced Intelligence Features (For Researchers):

    /mode             # Access 50+ cognitive modes (✽ Thinking, ✽ Analyzing...)
    /memory           # Dual-layer memory system status & management
    /agents           # Deploy specialized AI research assistants
    /mcp              # Model Context Protocol integration
    /chain <commands> # Command chaining for complex workflows

    ๐ŸŒ Natural Language Support (5 Languages):

    English:  "write code"       # โ†’ /code
    Japanese: "ใ‚ณใƒผใƒ‰ใ‚’ๆ›ธใ„ใฆ"     # โ†’ /code
    Chinese:  "ๅ†™ไปฃ็ "           # โ†’ /code
    Korean:   "์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•ด"     # โ†’ /code
    Vietnamese: "viแบฟt code"      # โ†’ /code

    🎨 Creative & Productivity Tools:

    /image <prompt>   # AI image generation for presentations
    /video <prompt>   # Create demo videos & documentation
    /avatar           # Interactive ASCII avatar companion
    /template         # Code template management
    /alias            # Custom command aliases

    Alternative Installation Methods

    # Using yarn
    yarn global add @bonginkan/maria
    
    # Using pnpm
    pnpm add -g @bonginkan/maria

    🎯 Usage Examples

    Basic Interactive Mode

    # Start MARIA interactive CLI (default command)
    maria
    
    # One-shot commands (non-interactive)
    maria ask "How do I implement OAuth?"
    maria code "React component for login"
    maria vision image.png "Describe this diagram"
    
    # Available slash commands in interactive mode:
    > /help                          # Show all 40+ commands
    > /code "hello world function"   # AI code generation with memory
    > /model                         # Interactive model selector
    > /memory                        # Dual-layer memory system
    > /status                        # System & AI service status
    > /agents                        # Multi-agent orchestration
    > /exit                          # Exit interactive mode

    Algorithm Education Commands

    # Start MARIA and use algorithm education slash commands
    maria
    > /sort quicksort --visualize     # Interactive sorting visualization
    > /learn algorithms               # Complete CS curriculum
    > /benchmark sorting              # Performance analysis
    > /algorithm complexity           # Big O notation tutorials
    > /code "merge sort implementation" # AI-generated algorithms

    40+ Interactive Slash Commands

    # All commands are slash commands within interactive mode
    maria
    > /help                          # Show all 40+ commands
    > /model                         # Interactive AI model selection
    > /code "function"               # AI code generation with memory
    > /test "unit tests"             # Generate comprehensive tests
    > /memory                        # Dual-layer memory system
    > /agents                        # Multi-agent orchestration
    > /paper "ML optimization"       # Research paper to code
    > /status                        # System & AI service status
    > /exit                          # Exit MARIA

    🎨 Key Features

    🤖 Autonomous Coding Agent

    • Complete SOW Generation: Automatic Statement of Work creation
    • Visual Mode Display: Real-time progress with beautiful UI
    • Active Reporting: Progress tracking and status updates
    • Self-Evolution: Learning engine that improves over time
    • 120+ Engineering Modes: Professional development patterns

    📊 Algorithm Education Platform

    • Interactive QuickSort: Step-by-step algorithm visualization
    • Performance Benchmarking: Compare algorithm efficiency
    • Memory Profiling: Analyze memory usage patterns
    • Educational Tools: Computer science curriculum support
    • Sorting Algorithms: Complete collection with analysis

    🔧 Development Tools

    • AI Code Generation: Multi-language code creation
    • Intelligent Assistance: Context-aware development help
    • Project Analysis: Codebase understanding and insights
    • Quality Assurance: Automated testing and validation
    • Version Control: Git integration and workflow support

    ๐ŸŒ Supported Platforms

    • Node.js: 18.0.0 - 22.x
    • Primary OS Support: macOS, Linux (optimized)
    • Secondary OS Support: Windows
    • Terminals: All major terminal applications
    • Shells: bash, zsh (recommended), fish, PowerShell

    📚 Documentation

    Command Reference

    • Interactive Mode: maria (starts directly)
    • All Commands: /help within interactive mode
    • Algorithm Education: /sort, /learn, /algorithm commands
    • AI Development: /code, /bug, /lint, /model commands
    • Machine Learning: /train, /eval, /rlhf, /self_play, /auto_tune commands
    • System Status: /status command

    Examples and Tutorials

    • Getting Started: Run maria and type /help
    • Algorithm Learning: Use /sort quicksort --visualize for interactive tutorials
    • Development Workflow: AI-assisted coding with /code commands
    • Performance Analysis: Built-in benchmarking with /benchmark commands

    🔧 Configuration

    MARIA works out of the box with no configuration required. For advanced features:

    # Start interactive mode (default)
    maria
    
    # Check system status
    > /status
    
    # Configure AI providers
    > /model  # Select from 22+ AI models (GPT, Claude, Gemini, Local LLMs)
    
    # Algorithm education
    > /sort quicksort --visualize  # Interactive learning

    ๐Ÿค Contributing

    We welcome contributions to MARIA! Please check our contribution guidelines.

    Development Setup

    # Clone the repository
    git clone https://github.com/bonginkan/maria.git
    cd maria
    
    # Install dependencies
    npm install
    
    # Build the project
    npm run build
    
    # Run locally
    ./bin/maria

    📄 License

    MIT License: Free and open-source for all users

    This project is licensed under the MIT License - see the LICENSE file for details.

    🎯 What Makes MARIA Special

    Revolutionary AI Development

    • First Autonomous AI: Complete software development from requirements
    • Visual Progress: Beautiful CLI with real-time feedback
    • Educational Focus: Algorithm learning with interactive visualization
    • Professional Quality: Industry-standard engineering practices

    Cutting-Edge Technology

    • Advanced AI Integration: Multiple AI model support
    • Intelligent Automation: Self-learning and adaptation
    • Modern CLI Experience: Beautiful, responsive interface
    • Cross-Platform: Works everywhere Node.js runs

    Perfect for:

    • Students: Learn algorithms with interactive visualization
    • Developers: Accelerate development with AI assistance
    • Teams: Collaborative development with autonomous agents
    • Educators: Teach computer science with hands-on tools

    Experience the Machine Learning Intelligence Revolution with MARIA Platform v2.1.4

    🚀 Start your journey: npm install -g @bonginkan/maria && maria
