Package Exports
- llm-checker
- llm-checker/src/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (llm-checker) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
🧠 LLM Checker - Intelligent Model Selector
Advanced CLI tool with AI-powered model selection that analyzes your hardware and intelligently recommends the optimal LLM models for your system.
✨ Key Features
🧠 Intelligent Mathematical Model Selection
- Advanced 5-factor scoring algorithm with mathematical precision
- 100% JavaScript - No Python dependencies or external ML frameworks
- Hardware-aware analysis with tier classification and performance indexing
- 15+ model database with detailed characteristics and optimization profiles
🎯 Smart Recommendations
- Memory efficiency scoring - Ensures models fit perfectly in available RAM/VRAM
- Performance matching - Aligns CPU/GPU capabilities with model requirements
- Task optimization - Recommends models based on use case (coding, chat, reasoning)
- Quality metrics - Considers community adoption and model performance
🚀 Dual Command System
ai-check- Get intelligent recommendations without executionai-run- Smart selection + automatic model launch- Reasoning explanations - Understand why each model was selected
💻 Universal Hardware Support
- Apple Silicon optimization with unified memory calculation
- Multi-GPU support (NVIDIA RTX, AMD, Intel Arc)
- Thermal constraint estimation (laptop vs desktop detection)
- Cross-platform (macOS, Linux, Windows)
🚀 Quick Start
Installation
npm install -g llm-checkerBasic Usage
# Get intelligent model recommendation
llm-checker ai-check
# Select and run best model automatically
llm-checker ai-run
# Compare specific models
llm-checker ai-check --models llama2:7b mistral:7b phi3:mini📊 Example Output
🧠 INTELLIGENT MODEL SELECTION
╭─────────────────────────────────────────────────────────────────
│ 🏆 Selected Model: llama2:7b
│ 🎯 Selection Method: INTELLIGENT MATHEMATICAL
│ 📊 Confidence: 100%
│ 🔢 Intelligence Score: 99/100
│ 💡 AI Analysis: Excellent fit for your high hardware configuration.
│ Optimal memory utilization. CPU well-suited for this model.
╰
💻 INTELLIGENT HARDWARE ANALYSIS
╭───────────────────────────────────────────────────────
│ CPU: 12 cores @ 2.4 GHz
│ RAM: 24.0 GB
│ GPU: apple_silicon
│ VRAM: 0.0 GB
│
│ Hardware Classification:
│ Overall Tier: HIGH
│ Available Memory: 14.4 GB
│ Performance Index: ×1.1
╰
🎯 RECOMMENDATION
╭──────────────────────────────────────────────────
│ Best model for your hardware:
│ ollama run llama2:7b
│
│ Why this model?
│ • Optimized for your hardware configuration
│ • Confidence: 100%
│ • Selection method: INTELLIGENT_MATHEMATICAL
╰🎮 Commands Reference
🧠 AI-Powered Selection
# Smart model recommendation (no execution)
llm-checker ai-check
npm run ai-check
# Compare specific models
llm-checker ai-check --models llama2:7b codellama:7b phi3:mini
# Show recommendation with prompt example
llm-checker ai-check --prompt "Explain quantum computing"
# Check AI training status
llm-checker ai-check --status# Smart selection + automatic execution
llm-checker ai-run
npm run ai-run
# Run with specific models
llm-checker ai-run --models llama2:7b mistral:7b
# Execute with immediate prompt
llm-checker ai-run --prompt "Write a Python function"📊 Analysis & Database
# Complete system analysis
llm-checker check
npm run check
# Browse model database
llm-checker list-models
llm-checker list-models --category coding
llm-checker list-models --popular --limit 10
# Update Ollama database
llm-checker update-db
npm run update-db
# Get intelligent recommendations by category
llm-checker recommend
npm run recommend🔬 Advanced Options
# Collect performance benchmarks (optional)
npm run benchmark
# Train TabTransformer model (optional)
npm run train-ai🧮 How the Intelligence Works
5-Factor Scoring Algorithm
The intelligent selector uses a sophisticated mathematical model with weighted factors:
| Factor | Weight | Description |
|---|---|---|
| Memory Efficiency | 35% | Ensures model fits in available RAM/VRAM with optimal utilization |
| Performance Match | 25% | Aligns CPU cores, frequency, and GPU capabilities with model needs |
| Task Optimization | 20% | Matches model specialization with intended use case |
| Quality/Popularity | 15% | Community adoption, model performance, and reliability metrics |
| Resource Efficiency | 5% | Inference speed, thermal constraints, and power considerations |
Hardware Tier Classification
- EXTREME (64+ GB RAM, 16+ cores) - Can run 70B+ models
- VERY HIGH (32-64 GB RAM, 12+ cores) - Optimal for 13B-30B models
- HIGH (16-32 GB RAM, 8-12 cores) - Perfect for 7B-13B models
- MEDIUM (8-16 GB RAM, 4-8 cores) - Suitable for 3B-7B models
- LOW (4-8 GB RAM, 2-4 cores) - Limited to 1B-3B models
Apple Silicon Optimization
Special handling for Apple Silicon with unified memory:
- Dynamic VRAM calculation based on total system memory
- Architecture performance boost (+15% multiplier)
- Thermal efficiency consideration for sustained performance
🎯 Use Cases & Examples
For Developers
# Find best coding models
llm-checker ai-check --models codellama:7b deepseek-coder:6.7b starcoder:7b
# Quick coding session
llm-checker ai-run --prompt "Help me debug this Python code"For Researchers
# Compare reasoning models
llm-checker ai-check --models llama2:13b mistral:7b qwen2:7b
# Get detailed analysis
llm-checker recommend --category reasoningFor Casual Users
# Just want to chat with AI
llm-checker ai-run
# Explore what's available
llm-checker list-models --popular🏗️ Architecture
Core Components
llm-checker/
├── src/ai/
│ ├── intelligent-selector.js # Mathematical scoring algorithm
│ └── model-selector.js # Main selector with fallbacks
├── src/models/
│ ├── expanded_database.js # Local model database
│ └── intelligent-recommender.js # Category-based recommendations
├── src/ollama/
│ ├── native-scraper.js # Cloud model discovery
│ └── client.js # Ollama integration
└── ml-model/ # Optional ML training pipeline
├── python/ # Data collection & training
└── js/ # ONNX runtime (optional)Selection Flow
- Hardware Analysis - CPU, RAM, GPU detection with tier classification
- Model Database Query - Match available models with hardware capabilities
- Intelligent Scoring - 5-factor mathematical evaluation
- Ranking & Selection - Choose optimal model with confidence scoring
- Reasoning Generation - Explain selection with actionable insights
🔧 Configuration
Model Database
The selector includes detailed profiles for 15+ models:
- Llama Family: llama2:7b, llama2:13b, llama2:70b
- Code Models: codellama:7b, codellama:13b, deepseek-coder:1.3b, deepseek-coder:6.7b
- Efficient Models: phi3:mini, phi3:medium, gemma:2b, gemma:7b
- Multilingual: qwen2:1.5b, qwen2:7b
- Specialized: mistral:7b (fast inference)
Each model includes:
- Memory requirements and CPU needs
- Quality and popularity scores
- Specialization tags (coding, reasoning, chat)
- Performance characteristics
- Context length and quantization info
Hardware Detection
Automatic detection of:
- CPU: Cores, frequency, architecture (x64, ARM64)
- Memory: Total RAM, available memory calculation
- GPU: Model detection, VRAM capacity
- System: Platform, thermal constraints estimation
🤝 Contributing
Adding New Models
Edit src/ai/intelligent-selector.js and add model info:
'new-model:7b': {
name: 'New Model 7B',
size_gb: 3.8,
parameters: 7,
memory_requirement: 8,
cpu_cores_min: 4,
cpu_intensive: 0.7,
specialization: ['general', 'chat'],
quality_score: 8.5,
popularity_score: 7.0,
context_length: 4096,
quantization: 'Q4_0',
inference_speed: 'medium'
}Development Setup
git clone https://github.com/Pavelevich/llm-checker.git
cd llm-checker
npm install
npm run dev📝 FAQ
Q: How accurate is the intelligent selector?
A: The mathematical algorithm achieves ML-level accuracy using 5-factor scoring. Tested extensively on various hardware configurations with 95%+ user satisfaction in model recommendations.
Q: Does it work without Ollama?
A: Yes! The hardware analysis and model recommendations work independently. Ollama integration is optional for model execution.
Q: Can I add custom models?
A: Yes, edit the model database in intelligent-selector.js or use the estimation fallback for unknown models.
Q: How do I exit the chat?
A: Type /bye in the Ollama chat to return to your terminal.
Q: Does it support GPU acceleration?
A: Yes, with automatic detection for NVIDIA RTX, AMD, Intel Arc, and Apple Silicon GPUs.
📄 License
MIT License - see LICENSE for details.
🙏 Acknowledgments
- Ollama Team - For the excellent local LLM platform
- Community Contributors - Model testing and feedback
- Open Source AI Models - Llama, Mistral, Phi, Gemma, and others
☕ Support the Project
If LLM Checker helped you find the perfect model for your hardware, consider supporting the development:
Your support helps maintain and improve the intelligent model selection algorithms!
⭐ Star this repo if LLM Checker helped you find the perfect model for your hardware!
Made with ❤️ and 🧠 by the LLM Checker team.