claude-master-toolkit
Operation layer for Claude Code: local dashboard, cost telemetry, context-guardian hooks, persistent memory (Pandorica), model router, MCP server, and a tree-sitter symbol index — all offline.
Found 36 results for "model-router"
Operation layer for Claude Code: local dashboard, cost telemetry, context-guardian hooks, persistent memory (Pandorica), model router, MCP server, and a tree-sitter symbol index — all offline.
Smart model routing + context compression proxy for OpenClaw
Universal AI model proxy — route any coding tool through OpenRouter, Ollama, LMStudio, llama.cpp, or any LLM provider
Conversation-aware model routing for OpenCode using Cast AI's Kimchi models
Prompt compiler + central learning brain for multi-model AI apps. Swap models without rewriting prompts.
Ollama agent router and local LLM gateway for OpenAI-compatible model routing, with GPU-aware queues, async jobs, and CLI configuration.
Local AI gateway for OpenCode — use any model via OpenAI, Anthropic, or Gemini API format
Self-hosted remote-assistant CLI for routing AI agents, model gateways, and AI-to-AI rooms across machines.
AI model intelligence router — adapt every request for the target model
Intelligent per-turn model router extension for the pi coding agent (Enhanced Fork)
Intelligent per-turn model router extension for the pi coding agent
Terminal UI and local bridge for Ollama Cloud and Claude Code with model switching and API key rotation
TypeScript-native Claude model router — auto-routes to Haiku/Sonnet/Opus by complexity, logs cost savings per call
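A routing policy like the one described in the entry above — score a prompt's complexity, send it to the cheapest tier that can handle it, and log the savings per call — can be sketched as follows. The thresholds, tier names, and per-1K-token prices are illustrative assumptions, not that project's actual values.

```python
# Hypothetical complexity-based tier router with per-call savings estimate.
# Thresholds, model names, and per-token prices are illustrative assumptions.

TIERS = [
    # (max complexity score, model, assumed $ per 1K input tokens)
    (3, "haiku", 0.00025),
    (7, "sonnet", 0.003),
    (10, "opus", 0.015),
]

def complexity(prompt: str) -> int:
    """Crude 0-10 score: longer prompts and code fences read as harder."""
    score = min(len(prompt) // 200, 7)
    if "```" in prompt:
        score += 3
    return min(score, 10)

def pick_model(prompt: str) -> tuple[str, float]:
    """Return (model, estimated $ saved vs. always using the top tier)."""
    score = complexity(prompt)
    top_price = TIERS[-1][2]
    for cap, model, price in TIERS:
        if score <= cap:
            tokens = len(prompt) / 4          # rough token estimate
            saved = (top_price - price) * tokens / 1000
            return model, round(saved, 6)
    return TIERS[-1][1], 0.0                  # safety fallback: top tier
```

Any real implementation would score complexity with a classifier or a cheap LLM call rather than string heuristics; the tier table and savings arithmetic are the part this sketch is meant to show.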
Pi extension that routes model groups to concrete models. Balances intelligence, cost, and availability.
Smart LLM router plugin for OpenClaw — classify requests and route to the best model using your own API keys. 14-dimension scoring, <1ms classification, per-prompt/session model switching.
OpenClaw plugin: smart routing between local Ollama models and AIPing cloud (Kimi-K2.5). ~90% requests stay local.
Pi extension that auto-assigns an executor model per task and exposes an advisor tool for high-leverage guidance.
Two-stage LLM model router: hard rules + embedding similarity. Zero runtime deps, sub-5ms classification.
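The two-stage design named in the entry above (hard rules first, embedding similarity as a fallback) might look like this minimal, dependency-free sketch. The rules, route names, and the bag-of-words stand-in for real embeddings are all assumptions for illustration.

```python
# Hypothetical two-stage router: hard keyword rules first, then cosine
# similarity over toy bag-of-words "embeddings" as a fallback.
# All model names and rules here are illustrative assumptions.
import math
from collections import Counter

HARD_RULES = {           # stage 1: exact keyword triggers
    "traceback": "debug-model",
    "translate": "translate-model",
}

ROUTE_EXAMPLES = {       # stage 2: one example prompt per route
    "code-model": "write a function to parse json in python",
    "chat-model": "tell me a story about a friendly dragon",
}

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)        # Counter returns 0 for misses
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(prompt: str) -> str:
    low = prompt.lower()
    for keyword, model in HARD_RULES.items():  # stage 1: hard rules
        if keyword in low:
            return model
    query = _vec(prompt)                       # stage 2: nearest example
    return max(ROUTE_EXAMPLES,
               key=lambda m: _cosine(query, _vec(ROUTE_EXAMPLES[m])))
```

A production version would use real sentence embeddings and many examples per route; the point here is the control flow — deterministic rules short-circuit the (still cheap) similarity stage.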
A single-command local AI proxy gateway with OpenAI-compatible endpoints for OpenAI, Anthropic, and Gemini
Self-hosted OpenAI-compatible LLM router that auto-imports providers from your OpenClaw config
Self-hosted remote-assistant CLI for routing AI agents, model gateways, and AI-to-AI rooms across machines.
Multi-provider LLM proxy SDK with smart routing, streaming, and cost optimization
Intelligent AI model router — route every LLM call to the cheapest model that can handle it
AI Model Integration Layer for AIMF
Multi-provider AI model router with 6 strategies, sovereign profile, and composite scoring across 9+ providers
TypeScript SDK for the Model Router — typed routing tiers, streaming with reasoning, tool execution loops, and Vercel AI SDK integration
Official Node.js SDK for Crazyrouter — One API key for 300+ AI models (GPT-5, Claude Opus 4, Gemini 3, DeepSeek V3 and more). Drop-in replacement for OpenAI SDK.
Intelligently route AI requests to optimal models based on task type, cost, speed, and quality
Quality gate for AI responses. Score, route, and auto-escalate across models transparently.
Unified LLM routing with automatic fallback, cost-aware model selection, and semantic caching.
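The fallback-plus-caching pattern in the entry above can be sketched roughly as below. A normalized-key dict stands in for a real semantic cache (which would match by embedding similarity), and the provider interface is an assumption.

```python
# Hypothetical cost-ordered fallback router with a simple cache standing in
# for semantic caching. Provider interface and error type are assumptions.

class ProviderError(Exception):
    pass

def make_router(providers):
    """providers: list of (name, call) pairs, ordered cheapest-first."""
    cache = {}

    def ask(prompt: str) -> str:
        key = " ".join(prompt.lower().split())   # normalized cache key
        if key in cache:
            return cache[key]
        for name, call in providers:             # cheapest first, fall back
            try:
                answer = call(prompt)
            except ProviderError:
                continue                         # try the next provider
            cache[key] = answer
            return answer
        raise ProviderError("all providers failed")

    return ask
```

Usage: `ask = make_router([("cheap", cheap_fn), ("pricey", pricey_fn)])` — if the cheap provider raises, the call silently escalates, and repeated (normalized) prompts are served from cache without touching any provider.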
The unified LLM runtime — local inference, API proxy, and monitoring in one blazing-fast tool. A powerful alternative to Ollama + LiteLLM, built in Rust.
Research-backed Multi-LLM Router with parallel execution, learned routing (RouteLLM), prefix caching (RadixAttention), speculative decoding (Medusa/EAGLE), token compression (ISON), local LLM support (Ollama/vLLM/LM Studio), batch processing. Python bindings.
A3M Router - Adaptive Memory Multi-Model Router with learned routing (RouteLLM), prefix caching (RadixAttention), speculative decoding (Medusa), TokenJuice-style compression. 14 LLM providers, 10 integrations, Python bindings. 20x more adaptable for ML/AI
Pi Coding Agent extension that exposes a CLIProxyAPIPlus (VibeProxy) instance as Anthropic and OpenAI model providers.
OpenClaw provider plugin for the openclaw-turbocharger sidecar
Shogo Agent — agent-runtime primitives (loop, router, hooks, orchestration) for backends built on pi-ai/pi-agent-core.