# MiniMax AI - Claude-Style Coding Assistant


A powerful Claude-style AI coding assistant powered by MiniMax AI models. Get intelligent code analysis, generation, and interactive programming assistance right from your terminal.

## 🚀 Quick Start

```bash
# Install globally via npm
npm install -g @minimax-ai/minimax

# Set your Hugging Face token
export HF_TOKEN=your_token_here

# Start using MiniMax AI
minimax --help
minimax analyze review myfile.py
minimax generate function "calculate fibonacci"
minimax chat start
```

## ✨ Features

- 🔍 **Code Analysis**: Review, explain, debug, and optimize your code
- 🛠️ **Code Generation**: Generate functions, classes, tests, and entire projects
- ✏️ **Smart Editing**: Modify and refactor code with AI assistance
- 💬 **Interactive Mode**: Chat-based coding assistance with context awareness
- 📁 **Project Intelligence**: Understand and work with entire codebases
- 🌍 **Cross-Platform**: Works on Windows, macOS, and Linux
- **Easy Installation**: One-command global installation via npm

## 📋 Prerequisites

- Node.js 14+ (for npm installation)
- Python 3.8+ (automatically managed)
- Hugging Face account with API access

## 📦 Installation

```bash
# Install globally via npm (recommended)
npm install -g @minimax-ai/minimax

# Alternative: Install locally in a project
npm install @minimax-ai/minimax
```

The installation automatically does the following (a conceptual sketch follows the list):

- ✅ Detects and validates Python 3.8+
- ✅ Creates an isolated Python environment
- ✅ Installs required Python dependencies
- ✅ Sets up the global `minimax` command
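
Conceptually, that bootstrap amounts to something like the Python sketch below. This is hypothetical: the actual installer script is not included in this README, and the `~/.minimax/venv` location and `huggingface_hub` dependency are illustrative assumptions.

```python
# Hypothetical bootstrap sketch -- not the shipped installer.
import shutil
import subprocess
import sys
from pathlib import Path

def bootstrap(env_dir: Path = Path.home() / ".minimax" / "venv") -> None:
    # Step 1: detect a Python interpreter and validate it is 3.8+
    python = shutil.which("python3") or shutil.which("python")
    if python is None:
        sys.exit("Python 3.8+ is required but was not found on PATH")
    check = subprocess.run(
        [python, "-c", "import sys; sys.exit(0 if sys.version_info >= (3, 8) else 1)"]
    )
    if check.returncode != 0:
        sys.exit("Python 3.8+ is required")

    # Step 2: create an isolated virtual environment
    subprocess.run([python, "-m", "venv", str(env_dir)], check=True)

    # Step 3: install the Python dependencies into it
    pip = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "pip"
    subprocess.run([str(pip), "install", "huggingface_hub"], check=True)

if __name__ == "__main__":
    bootstrap()
```

Keeping the CLI's Python dependencies in an isolated environment avoids clashes with system packages.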

### Alternative: Install from PyPI

The Python client will also be installable from PyPI (when published):

```bash
pip install minimax-client
```

## 🔑 Setup

### 1. Get Your Hugging Face Token

1. Visit https://huggingface.co/settings/tokens
2. Create a new token with read access
3. Set it as an environment variable:

```bash
# Linux/macOS
export HF_TOKEN='your_token_here'

# Windows (Command Prompt)
set HF_TOKEN=your_token_here

# Windows (PowerShell)
$env:HF_TOKEN='your_token_here'
```

### 2. Verify Installation

```bash
minimax --version
minimax --help
```

## Usage

### Command Line Interface

After installation, use the `minimax-client` command:

```bash
# Basic usage with defaults
minimax-client

# Custom message
minimax-client --message "Explain quantum computing in simple terms"

# Different model
minimax-client --model "MiniMaxAI/MiniMax-M1-40k" --message "Hello, world!"

# Non-streaming mode
minimax-client --no-streaming --message "What is AI?"

# Custom provider
minimax-client --provider "hf-inference-endpoints" --message "Tell me a joke"

# Show version
minimax-client --version
```

### Legacy Script

The original script is still available for backwards compatibility:

```bash
python minimax_client_legacy.py
```

## Configuration

### CLI Arguments

| Argument | Short | Default | Description |
|----------|-------|---------|-------------|
| `--model` | `-m` | `MiniMaxAI/MiniMax-M1-80k` | Model name to use |
| `--message` | `-msg` | `What is the capital of France?` | User message to send |
| `--provider` | `-p` | `auto` | Inference provider to use |
| `--no-streaming` | | `False` | Disable streaming (use regular completion) |
| `--verbose` | `-v` | `False` | Enable verbose logging |
| `--help` | `-h` | | Show help message |

### Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `HF_TOKEN` | Yes | Hugging Face API token |
| `MINIMAX_MODEL` | No | Default model name (overridden by CLI) |
| `MINIMAX_PROVIDER` | No | Default provider (overridden by CLI) |

### Configuration Precedence

1. CLI Arguments (highest priority)
2. Environment Variables
3. Default Values (lowest priority)
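
As a hedged sketch (helper and variable names are illustrative, not the package's actual internals), this precedence can be expressed like so:

```python
import os
from typing import Optional

DEFAULTS = {"model": "MiniMaxAI/MiniMax-M1-80k", "provider": "auto"}
ENV_VARS = {"model": "MINIMAX_MODEL", "provider": "MINIMAX_PROVIDER"}

def resolve_setting(name: str, cli_value: Optional[str] = None) -> str:
    """Effective value: CLI argument > environment variable > built-in default."""
    if cli_value is not None:           # 1. CLI arguments (highest priority)
        return cli_value
    env_value = os.environ.get(ENV_VARS[name])
    if env_value:                       # 2. environment variables
        return env_value
    return DEFAULTS[name]               # 3. default values (lowest priority)

# With MINIMAX_MODEL unset and no --model flag, the default wins:
print(resolve_setting("model"))  # MiniMaxAI/MiniMax-M1-80k
```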

### Examples

```bash
# Use environment variable for model
export MINIMAX_MODEL="MiniMaxAI/MiniMax-M1-40k"
minimax-client --message "Hello!"

# Override environment with CLI
export MINIMAX_MODEL="MiniMaxAI/MiniMax-M1-40k"
minimax-client --model "MiniMaxAI/MiniMax-M1-80k" --message "Hello!"

# Complex configuration
minimax-client \
  --model "MiniMaxAI/MiniMax-M1-80k" \
  --message "Explain the theory of relativity" \
  --provider "hf-inference-endpoints" \
  --verbose
```

## Expected Output

The CLI tool will:

1. ✓ Validate the environment and configuration
2. ✓ Initialize the InferenceClient with the specified provider
3. 🚀 Send the request to the configured model
4. 🔄 Display the streaming response in real time (if enabled)
5. ✅ Show completion status with proper logging
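
Steps 2–4 match the standard `huggingface_hub` streaming pattern; a minimal sketch (assuming a recent `huggingface_hub`, not the package's exact code) looks like:

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="auto", token=os.environ["HF_TOKEN"])

# Send the request and print streamed tokens as they arrive
stream = client.chat_completion(
    model="MiniMaxAI/MiniMax-M1-80k",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
print()
```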

Example output:

```
INFO - Starting MiniMax client with model: MiniMaxAI/MiniMax-M1-80k
INFO - Environment validation successful
INFO - InferenceClient initialized successfully
INFO - Sending message: What is the capital of France?
INFO - Streaming response:

The capital of France is Paris. It is the largest city in France and serves as the country's political, economic, and cultural center...

INFO - Chat completion successful
INFO - Client operation completed successfully
```

## Package Structure

```
src/minimax_client/
├── __init__.py      # Package initialization and version info
├── main.py          # CLI entry point and orchestration
├── config.py        # Configuration management and CLI parsing
├── environment.py   # Environment variable validation
├── client.py        # InferenceClient initialization and management
└── chat.py          # Chat completion logic and streaming
```
### Key Components

1. **Configuration Management** (`config.py`): Handles CLI arguments, environment variables, and default values
2. **Environment Validation** (`environment.py`): Validates `HF_TOKEN` and optional `.env` file loading
3. **Client Management** (`client.py`): InferenceClient initialization with specific error handling
4. **Chat Processing** (`chat.py`): Streaming and non-streaming chat completions with enhanced error recovery
5. **Main Orchestrator** (`main.py`): Coordinates all components and provides the CLI interface
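
As an illustration of item 2, the environment-validation step could look roughly like the following sketch (assuming `python-dotenv` for the optional `.env` support; this is not the module's verbatim source):

```python
import os
from dotenv import load_dotenv  # assumed helper for the optional .env file

def validate_environment() -> str:
    """Return the HF token, loading a local .env file first when present."""
    load_dotenv()  # no-op if there is no .env file
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise EnvironmentError(
            "HF_TOKEN is not set; create one at https://huggingface.co/settings/tokens"
        )
    return token
```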
    
## Error Handling

### Exit Codes

| Code | Category | Description |
|------|----------|-------------|
| 0 | Success | Operation completed successfully |
| 1 | General Error | Unspecified error occurred |
| 2 | Configuration Error | Invalid configuration or missing environment variables |
| 3 | Network Error | Network connectivity or timeout issues |
| 4 | Authentication Error | Invalid or missing API token |
| 5 | Model Error | Model-related issues (not found, gated, unavailable) |

### Specific Exception Handling

| Exception Type | Exit Code | Common Causes | Solutions |
|----------------|-----------|---------------|-----------|
| `HfHubHTTPError` | 3, 4, 5 | HTTP errors (401, 404, 429, 500) | Check token, model name, rate limits |
| `RepositoryNotFoundError` | 5 | Model doesn't exist | Verify model name spelling |
| `GatedRepoError` | 5 | Model requires access approval | Request access on Hugging Face |
| `InferenceTimeoutError` | 5 | Model temporarily unavailable | Retry later or use a different model |
| `BadRequestError` | 5 | Invalid parameters | Check message format and model requirements |
| `ValueError` | 4 | Authentication issues | Verify HF_TOKEN validity |
| `requests.exceptions.*` | 3 | Network problems | Check internet connection |
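
A hedged sketch of wiring these exceptions to the documented exit codes (the exception classes come from `huggingface_hub`; the dispatcher itself is illustrative, not the package's actual handler):

```python
import sys

import requests
from huggingface_hub import InferenceTimeoutError
from huggingface_hub.utils import (
    BadRequestError,
    GatedRepoError,
    HfHubHTTPError,
    RepositoryNotFoundError,
)

def run_with_exit_codes(fn) -> None:
    """Translate known failures into the exit codes documented above."""
    try:
        fn()
        sys.exit(0)
    # Most specific model-related failures first
    except (GatedRepoError, RepositoryNotFoundError, BadRequestError, InferenceTimeoutError):
        sys.exit(5)  # model errors
    except ValueError:
        sys.exit(4)  # authentication errors
    except HfHubHTTPError as err:
        status = err.response.status_code if err.response is not None else None
        sys.exit(4 if status == 401 else 3)  # auth vs. generic HTTP/network
    except requests.exceptions.RequestException:
        sys.exit(3)  # network errors
```

Ordering matters here: `GatedRepoError`, `RepositoryNotFoundError`, and `BadRequestError` subclass `HfHubHTTPError`, which in turn subclasses `requests.HTTPError`, so the narrower clauses must come first.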
    
## Development

### Setting Up Development Environment

```bash
# Clone the repository
git clone <repository-url>
cd MiniMax

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks (optional)
pre-commit install
```
### Running Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src/minimax_client

# Run specific test file
pytest tests/test_config.py

# Run with verbose output
pytest -v
```

### Project Structure

```
MiniMax/
├── src/minimax_client/       # Main package source code
├── tests/                    # Unit tests
├── pyproject.toml            # Modern Python packaging configuration
├── requirements.txt          # Runtime dependencies
├── README.md                 # This documentation
├── minimax_client_legacy.py  # Original script (backwards compatibility)
└── .env.example              # Example environment file
```

## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes with proper tests
4. Run the test suite: `pytest`
5. Submit a pull request

### Code Style

- Follow PEP 8 guidelines
- Use type hints for all functions
- Add docstrings for public APIs
- Write unit tests for new functionality
- Use proper logging instead of print statements
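
For the last point, a minimal `logging` setup that yields lines in the same `INFO - ...` shape as the example output above:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s - %(message)s")
logger = logging.getLogger("minimax_client")

logger.info("Starting MiniMax client with model: %s", "MiniMaxAI/MiniMax-M1-80k")
# INFO - Starting MiniMax client with model: MiniMaxAI/MiniMax-M1-80k
```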

## API Documentation

### Using as a Python Package

```python
from minimax_client.config import Configuration
from minimax_client.client import initialize_client
from minimax_client.chat import create_chat_completion

# Create configuration
config = Configuration(
    model_name="MiniMaxAI/MiniMax-M1-80k",
    user_message="Hello, world!",
    provider="auto",
    streaming=True
)

# Initialize client
client = initialize_client(config)

# Create chat completion
create_chat_completion(client, config)
```

### Configuration Class

```python
from typing import Optional

class Configuration:
    """Configuration management for MiniMax client."""

    def __init__(self, model_name: Optional[str] = None, user_message: Optional[str] = None,
                 provider: Optional[str] = None, streaming: Optional[bool] = None):
        """Initialize configuration with optional overrides."""

    @classmethod
    def from_args(cls, args: Optional[list] = None) -> 'Configuration':
        """Create configuration from command line arguments."""

    def validate(self) -> None:
        """Validate configuration values."""
```

## Troubleshooting

### Environment Issues

| Problem | Solution |
|---------|----------|
| `HF_TOKEN` not found | Set the environment variable or create a `.env` file |
| Permission denied | Check file permissions and the virtual environment |
| Module not found | Install the package with `pip install -e .` |

### Authentication Issues

| Problem | Solution |
|---------|----------|
| 401 Unauthorized | Verify `HF_TOKEN` is valid and not expired |
| 403 Forbidden | Request access to gated models |
| Invalid token format | Check the token doesn't have extra spaces/characters |

### Model Issues

| Problem | Solution |
|---------|----------|
| Model not found (404) | Verify model name spelling and availability |
| Model temporarily unavailable (503) | Try again later or use a different model |
| Rate limit exceeded (429) | Wait before retrying or upgrade your API plan |
| Model loading timeout | Use a smaller model or retry later |

### Network Issues

| Problem | Solution |
|---------|----------|
| Connection timeout | Check internet connection and firewall |
| DNS resolution failed | Verify DNS settings |
| SSL certificate error | Update certificates or check system time |

### Configuration Issues

| Problem | Solution |
|---------|----------|
| Invalid provider | Use `auto`, `hf-inference-endpoints`, or another valid provider |
| Message too long | Reduce message length or use a model with a larger context |
| Invalid streaming parameter | Use boolean values (`True`/`False`) |

## Getting Help

- GitHub Issues: Report bugs and request features
- Documentation: Check this README and inline code documentation
- Hugging Face Forums: Community support for model-specific issues
- API Documentation: Hugging Face Inference API docs

## License

This client library is provided under the MIT License. See the LICENSE file for details.