# n8n-nodes-openrouter-selector
n8n community node for intelligent OpenRouter model selection based on task, budget, and benchmarks.
## Features
**Task-Based Selection**: Optimized model recommendations for:
- Translation (Chinese ↔ English optimized)
- Coding & Development
- Data Analysis & Reasoning
- Vision & Image Analysis
- Conversational Chat
- Text Embedding
- Summarization
- Mathematical Reasoning
**Budget Awareness**: Three budget tiers:
- Cheap: Lowest cost; quality is secondary
- Balanced: Good price-performance ratio
- Premium: Best quality; cost is not a concern
**Benchmark-Based Scoring**: Uses external benchmark data:
- Artificial Analysis (Intelligence, Coding, Math indices)
- LMSYS Chatbot Arena (Elo ratings)
- LiveBench (Coding, Math, Reasoning scores)
**Dynamic Model Override**: Select a specific model with a real-time scoring preview

**Flexible Filtering**:
- Minimum context length
- JSON mode requirement
- Vision/multimodal requirement
- Cost limits
- Provider whitelist/blacklist
## Installation

### In n8n
- Go to **Settings → Community Nodes**
- Click **Install a community node**
- Enter `n8n-nodes-openrouter-selector`
- Click **Install**
### Manual Installation
```bash
# In your n8n custom nodes directory
cd ~/.n8n/custom
npm install n8n-nodes-openrouter-selector
```

### Development Installation
```bash
git clone https://github.com/ecolights/n8n-nodes-openrouter-selector.git
cd n8n-nodes-openrouter-selector
pnpm install
pnpm build

# Link to n8n
cd ~/.n8n/custom
npm link /path/to/n8n-nodes-openrouter-selector
```

## Prerequisites
This node requires:
- **Supabase database** with the benchmark schema (see docs/BENCHMARK_SYSTEM.md):
  - `models_catalog`: OpenRouter model data (synced via a separate workflow)
  - `model_name_mappings`: Master mapping table (manually maintained)
  - `model_benchmarks`: Benchmark scores (auto-synced weekly)
  - `task_profiles`: Task-specific scoring weights
  - `unmatched_models`: Review queue for new models
- **n8n workflow**: `TN_benchmark_sync_artificial_analysis` for the weekly benchmark sync
- **Credentials**: Supabase URL and API key (service role for write access)
### Quick Schema Setup
The full schema with triggers, functions, and RLS policies is documented in docs/BENCHMARK_SYSTEM.md.
**Core Tables Overview:**
```sql
-- Model name mappings (Source of Truth - manually maintained)
CREATE TABLE model_name_mappings (
openrouter_id TEXT UNIQUE NOT NULL, -- e.g. "anthropic/claude-sonnet-4"
canonical_name TEXT NOT NULL, -- Display name
aa_name TEXT, -- Artificial Analysis name
aa_slug TEXT, -- AA URL slug
provider TEXT, -- anthropic, openai, google, etc.
verified BOOLEAN DEFAULT false
);
-- Benchmark scores (auto-filled by sync workflow)
CREATE TABLE model_benchmarks (
openrouter_id TEXT REFERENCES model_name_mappings(openrouter_id),
-- Artificial Analysis
aa_intelligence_index DECIMAL(5,2),
aa_coding_index DECIMAL(5,2),
aa_math_index DECIMAL(5,2),
-- LMSYS Arena
lmsys_elo INTEGER,
-- LiveBench
livebench_overall DECIMAL(5,2),
livebench_coding DECIMAL(5,2),
-- Computed composites (via trigger)
composite_general DECIMAL(5,2),
composite_code DECIMAL(5,2),
composite_math DECIMAL(5,2)
);
-- Task-specific scoring weights
CREATE TABLE task_profiles (
task_name TEXT UNIQUE NOT NULL, -- general, code, translation, etc.
weight_aa_intelligence DECIMAL(3,2),
weight_aa_coding DECIMAL(3,2),
weight_lmsys_elo DECIMAL(3,2),
weight_livebench DECIMAL(3,2),
boost_anthropic DECIMAL(3,2),
boost_openai DECIMAL(3,2),
boost_deepseek DECIMAL(3,2)
);
```
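Because `model_name_mappings` is maintained by hand, a new model must be registered there before the sync workflow can attach benchmark scores to it. Below is a minimal sketch of adding a mapping with the `@supabase/supabase-js` client (an assumption; any SQL client works, and the row values here are illustrative):

```typescript
import { createClient } from '@supabase/supabase-js';

// Service-role key is required for write access (see Prerequisites).
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

async function addMapping(): Promise<void> {
  // Upsert keyed on openrouter_id so re-runs are idempotent.
  const { error } = await supabase.from('model_name_mappings').upsert(
    {
      openrouter_id: 'anthropic/claude-sonnet-4', // example ID from the schema comment above
      canonical_name: 'Claude Sonnet 4',
      aa_name: 'Claude Sonnet 4',  // assumed Artificial Analysis display name
      aa_slug: 'claude-sonnet-4',  // assumed AA URL slug
      provider: 'anthropic',
      verified: true,
    },
    { onConflict: 'openrouter_id' },
  );
  if (error) throw error;
}

addMapping();
```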
## Usage

### Basic Usage
- Add the OpenRouter Model Selector node to your workflow
- Configure credentials (Supabase URL + API Key)
- Select a Task Category (e.g., "coding")
- Select a Budget (e.g., "balanced")
- Execute to get the recommended model
### Output Format
**Full Output** (default):

```json
{
"recommended": {
"modelId": "anthropic/claude-sonnet-4",
"provider": "anthropic",
"displayName": "Claude Sonnet 4",
"contextLength": 200000,
"supportsJson": true,
"modality": "text+image->text",
"pricing": {
"promptPer1kUsd": 0.003,
"completionPer1kUsd": 0.015,
"combinedPer1kUsd": 0.009
},
"score": 87.5,
"scoreBreakdown": {
"benchmarkFit": 38,
"taskFit": 28,
"budgetFit": 18,
"capabilityFit": 9,
"providerBonus": 1.5
},
"reasoning": "Excellent benchmark performance for Coding & Development, ideal balanced pricing, anthropic provider bonus (+15%)."
},
"alternatives": [...],
"queryMetadata": {
"task": "coding",
"budget": "balanced",
"totalModelsEvaluated": 313,
"modelsPassingFilters": 187,
"executionTimeMs": 245
}
}
```
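When consuming this output in a Code node, the shape can be described with a TypeScript interface. The following is inferred from the example above; it is not a type exported by the package:

```typescript
// Inferred from the example output above; illustrative only,
// not part of the package's public API.
interface SelectorOutput {
  recommended: {
    modelId: string;          // e.g. "anthropic/claude-sonnet-4"
    provider: string;
    displayName: string;
    contextLength: number;
    supportsJson: boolean;
    modality: string;         // e.g. "text+image->text"
    pricing: {
      promptPer1kUsd: number;
      completionPer1kUsd: number;
      combinedPer1kUsd: number;
    };
    score: number;
    scoreBreakdown: {
      benchmarkFit: number;
      taskFit: number;
      budgetFit: number;
      capabilityFit: number;
      providerBonus: number;
    };
    reasoning: string;
  };
  alternatives: SelectorOutput['recommended'][];
  queryMetadata: {
    task: string;
    budget: string;
    totalModelsEvaluated: number;
    modelsPassingFilters: number;
    executionTimeMs: number;
  };
}
```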
### Using the Selected Model

Connect the output to an HTTP Request node or OpenRouter integration:
```text
[OpenRouter Model Selector] → [HTTP Request to OpenRouter API]
  URL:  https://openrouter.ai/api/v1/chat/completions
  Body: { "model": "={{$json.recommended.modelId}}", ... }
```
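Outside of the HTTP Request node, for example in an n8n Code node, the same call can be made directly. A minimal sketch, assuming an `OPENROUTER_API_KEY` environment variable; the endpoint and body follow OpenRouter's standard chat completions API:

```typescript
// Minimal sketch: send a chat completion request using the selected model.
// "recommended" comes from the selector node's output shown above.
async function callOpenRouter(recommended: { modelId: string }) {
  const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: recommended.modelId, // e.g. "anthropic/claude-sonnet-4"
      messages: [{ role: 'user', content: 'Hello!' }],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter request failed: ${res.status}`);
  return res.json();
}
```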
## Scoring Algorithm

The scoring formula is deterministic and based on external benchmarks (a worked sketch follows the tables below):

```text
score = ((benchmark_fit × 0.4) + (task_fit × 0.3) + (budget_fit × 0.2) + (capability_fit × 0.1)) × provider_boost
```

### Components
| Component | Weight | Description |
|---|---|---|
| Benchmark Fit | 40% | Score from external benchmarks (AA, LMSYS, LiveBench) |
| Task Fit | 30% | How well the model matches task requirements |
| Budget Fit | 20% | Cost alignment with budget preference |
| Capability Fit | 10% | Context length, JSON support, verification status |
| Provider Boost | ×1.0-1.2 | Task-specific provider bonuses |
### Provider Boosts by Task
| Task | Provider Boosts |
|---|---|
| Translation | DeepSeek +20%, Qwen +15%, Anthropic +10% |
| Coding | Anthropic +15%, OpenAI +10%, DeepSeek +8% |
| Analysis | Anthropic +15%, OpenAI +10%, Google +5% |
| Vision | OpenAI +15%, Google +12%, Anthropic +8% |
| Math | DeepSeek +15%, Qwen +12%, OpenAI +10% |
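Put together, the scoring step might look like the sketch below. The weights come from the formula above and the boost values from the coding row of the table; the function and input shape are illustrative, not the node's actual internals:

```typescript
// Illustrative restatement of the scoring formula; not the node's actual code.
interface FitScores {
  benchmarkFit: number;  // 0-100, from AA / LMSYS / LiveBench composites
  taskFit: number;       // 0-100, match against task requirements
  budgetFit: number;     // 0-100, cost alignment with the budget tier
  capabilityFit: number; // 0-100, context length, JSON support, verification
}

// Task-specific provider boosts (coding row from the table above).
const codingBoosts: Record<string, number> = {
  anthropic: 1.15,
  openai: 1.1,
  deepseek: 1.08,
};

function score(fit: FitScores, provider: string, boosts: Record<string, number>): number {
  const weighted =
    fit.benchmarkFit * 0.4 +
    fit.taskFit * 0.3 +
    fit.budgetFit * 0.2 +
    fit.capabilityFit * 0.1;
  // Providers without a task-specific bonus get a neutral ×1.0 boost.
  return weighted * (boosts[provider] ?? 1.0);
}

// Example: a strong coding model from Anthropic.
const s = score(
  { benchmarkFit: 80, taskFit: 75, budgetFit: 70, capabilityFit: 60 },
  'anthropic',
  codingBoosts,
);
console.log(s); // (32 + 22.5 + 14 + 6) × 1.15 ≈ 85.7
```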
## Configuration

### Node Parameters
| Parameter | Type | Description |
|---|---|---|
| Task Category | Dropdown | Type of task (coding, translation, etc.) |
| Budget | Dropdown | Cost preference (cheap, balanced, premium) |
| Model Override | Dynamic Dropdown | Override with specific model |
| Filters | Collection | Advanced filtering options |
| Options | Collection | Output configuration |
### Filter Options
| Filter | Type | Default | Description |
|---|---|---|---|
| Min Context Length | Number | 8000 | Minimum context window, in tokens |
| Require JSON Mode | Boolean | false | Only models with JSON mode support |
| Require Vision | Boolean | false | Only multimodal (vision-capable) models |
| Max Cost per 1K | Number | 0 | Cost limit per 1K tokens, in USD (0 = no limit) |
| Provider Whitelist | Multi-select | [] | Include only these providers |
| Provider Blacklist | Multi-select | [] | Exclude these providers |
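Conceptually, the filters act as a simple predicate applied to the model catalog before scoring. A hypothetical sketch (field names mirror the output format above; the node's actual implementation may differ):

```typescript
// Hypothetical filter predicate mirroring the options above; not the node's actual code.
interface Model {
  provider: string;
  contextLength: number;
  supportsJson: boolean;
  modality: string; // e.g. "text->text" or "text+image->text"
  pricing: { combinedPer1kUsd: number };
}

interface Filters {
  minContextLength: number;    // default 8000
  requireJsonMode: boolean;    // default false
  requireVision: boolean;      // default false
  maxCostPer1k: number;        // default 0 = no limit
  providerWhitelist: string[]; // default [] = allow all
  providerBlacklist: string[]; // default [] = exclude none
}

function passesFilters(m: Model, f: Filters): boolean {
  if (m.contextLength < f.minContextLength) return false;
  if (f.requireJsonMode && !m.supportsJson) return false;
  if (f.requireVision && !m.modality.includes('image')) return false;
  if (f.maxCostPer1k > 0 && m.pricing.combinedPer1kUsd > f.maxCostPer1k) return false;
  if (f.providerWhitelist.length > 0 && !f.providerWhitelist.includes(m.provider)) return false;
  if (f.providerBlacklist.includes(m.provider)) return false;
  return true;
}
```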
## Benchmark Sync Workflow
The node requires benchmark data, which is synced weekly via an n8n workflow.

- **Workflow**: `TN_benchmark_sync_artificial_analysis`
- **Trigger**: Weekly (Sunday 03:00 UTC) + manual + webhook

**Data Flow:**
```text
[1. Fetch Artificial Analysis] ───┐
                                  │
[2. Fetch OpenRouter Catalog] ────┼──► [4. Merge All Data]
                                  │             │
[3. Fetch Existing Mappings] ─────┘             ▼
                                   [5. Process & Match Models]
                                                │
                                 ┌──────────────┴──────────────┐
                                 ▼                             ▼
                      [6. Upsert Benchmarks]         [7. Store Unmatched]
                                 │                             │
                                 └──────────────┬──────────────┘
                                                ▼
                                       [8. Merge Results]
                                                │
                                                ▼
                                   [9. Telegram Notification]
```

**Detailed Documentation**: See docs/BENCHMARK_SYSTEM.md
### Manual Sync Trigger
```bash
# Via n8n webhook
curl -X POST https://n8n.dev.ecolights.de/webhook/benchmark-sync
```

## Development
```bash
# Install dependencies
pnpm install

# Build
pnpm build

# Watch mode
pnpm dev

# Lint
pnpm lint

# Format
pnpm format
```

## License
MIT
## Author
EcoLights (dev@ecolights.de)