# frouter

Free model router CLI — discover, ping, and configure free AI models for OpenCode / OpenClaw.

## Install

```sh
npx frouter-cli
# or
npm i -g frouter-cli
# or
bunx frouter-cli
# or
bun install -g frouter-cli
```

## Run

```sh
frouter
```

On first run, a setup wizard prompts for API keys (ESC to skip any provider).
## First-run onboarding test (clean state)

Use an isolated temporary HOME to test onboarding from zero without deleting your real install/config:

```sh
npm run test:onboarding
npm run test:fresh-start
```

`test:fresh-start` launches interactive onboarding with:

- no `~/.frouter.json` in the temp home
- provider env keys unset (`NVIDIA_API_KEY`, `OPENROUTER_API_KEY`)
- your real `~/.frouter.json` untouched

Optional:

```sh
npm run test:fresh-start -- --keep-home
```

This keeps the temp HOME path after exit for inspection.
## Providers

| Provider | Free key |
|---|---|
| NVIDIA NIM | build.nvidia.com — prefix `nvapi-` |
| OpenRouter | openrouter.ai/keys — prefix `sk-or-` |
API key priority: environment variable → `~/.frouter.json` → keyless ping (latency still shown).
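That resolution order can be sketched as a tiny helper (hypothetical function, not frouter's actual code — only the priority described above is taken from the README):

```typescript
// Environment variable wins, then the saved config key, then keyless.
// A null result means the model is still pinged, just without auth.
function resolveKey(
  envKey: string | undefined,
  configKey: string | undefined,
): string | null {
  return envKey ?? configKey ?? null;
}

console.log(resolveKey("nvapi-env", "nvapi-cfg")); // → "nvapi-env"
console.log(resolveKey(undefined, "nvapi-cfg"));   // → "nvapi-cfg"
console.log(resolveKey(undefined, undefined));     // → null
```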
```sh
NVIDIA_API_KEY=nvapi-xxx frouter
OPENROUTER_API_KEY=sk-or-xxx frouter
```

## TUI
The interactive TUI pings all models in parallel every 2 seconds and shows live latency, uptime, and verdict.
### Columns

| Column | Description |
|---|---|
| # | Rank |
| Tier | Capability tier derived from SWE-bench score (S+ → C) |
| Provider | NIM or OpenRouter |
| Model | Display name |
| Ctx | Context window size |
| AA | Arena Elo / intelligence score |
| Avg | Rolling average latency (HTTP 200 only) |
| Lat | Latest ping latency |
| Up% | Uptime percentage this session |
| Verdict | Condition summary (🚀 Perfect / ✅ Normal / 🔥 Overloaded / …) |
Default ranking: availability first, then higher tier first (S+ → S → A+ …), then lower latency.
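The default ranking can be sketched as a comparator (assumed shapes — `Row` and `TIER_ORDER` are illustrations, not frouter's internal types):

```typescript
// Sketch of the default ranking: availability first, then tier, then latency.
type Row = { up: boolean; tier: string; avgMs: number };

const TIER_ORDER = ["S+", "S", "A+", "A", "A-", "B+", "B", "C"];

function compareRows(a: Row, b: Row): number {
  if (a.up !== b.up) return a.up ? -1 : 1; // available models first
  const byTier = TIER_ORDER.indexOf(a.tier) - TIER_ORDER.indexOf(b.tier);
  if (byTier !== 0) return byTier;         // higher tier first (S+ → C)
  return a.avgMs - b.avgMs;                // then lower average latency
}

const rows: Row[] = [
  { up: true, tier: "S", avgMs: 900 },
  { up: false, tier: "S+", avgMs: 100 },
  { up: true, tier: "S+", avgMs: 1200 },
];
rows.sort(compareRows);
console.log(rows.map((r) => r.tier).join(",")); // → "S+,S,S+"
```

Note that a down S+ model sorts below an up S model: availability always dominates tier.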
## Keyboard shortcuts

### Navigation

| Key | Action |
|---|---|
| ↑ / k | Move up |
| ↓ / j | Move down |
| PgUp / PgDn | Page up / down |
| g | Jump to top |
| G | Jump to bottom |

### Actions

| Key | Action |
|---|---|
| Enter | Select model → target picker (OpenCode / OpenClaw) |
| / | Search / filter models (Enter in search = apply to both targets) |
| A | Quick API key add/change (opens key editor in Settings) |
| T | Cycle tier filter: All → S+ → S → A+ → … |
| P | Settings screen (edit keys, toggle providers, test) |
| W / X | Faster / slower ping interval |
| ? | Help overlay |
| q / Ctrl+C | Quit |
### Sort (press to sort, press again to reverse)

| Key | Column |
|---|---|
| 0 | Priority (default) |
| 1 | Tier |
| 2 | Provider |
| 3 | Model name |
| 4 | Avg latency |
| 5 | Latest ping |
| 6 | Uptime % |
| 7 | Context window |
| 8 | Verdict |
| 9 | AA Intelligence |
## Target picker

After pressing Enter on a model:

| Key | Action |
|---|---|
| ↑ / ↓ | Navigate (OpenCode CLI / OpenClaw) |
| Enter / G | Write config + launch tool |
| S | Write config only (no launch) |
| ESC | Cancel |
If OpenCode fallback remaps the provider (for example NIM Stepfun → OpenRouter) and the effective provider key is missing, frouter asks: `Launch opencode anyway? (Y/n, default: n)`.
Configs written:

- OpenCode CLI → `~/.config/opencode/opencode.json`
- OpenClaw → `~/.openclaw/openclaw.json`
Existing configs are backed up before writing.
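The backup-then-write step can be sketched like this (the helper name and `.bak` suffix are assumptions for illustration, not frouter's actual backup scheme):

```typescript
import {
  copyFileSync,
  existsSync,
  mkdtempSync,
  readFileSync,
  writeFileSync,
} from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Back up any existing config before overwriting it.
function writeConfigWithBackup(path: string, contents: string): void {
  if (existsSync(path)) {
    copyFileSync(path, `${path}.bak`); // preserve the previous config
  }
  writeFileSync(path, contents);
}

// Demo against a throwaway directory:
const dir = mkdtempSync(join(tmpdir(), "frouter-demo-"));
const cfg = join(dir, "opencode.json");
writeConfigWithBackup(cfg, '{"model":"old"}'); // first write: nothing to back up
writeConfigWithBackup(cfg, '{"model":"new"}'); // second write: old copy → .bak

const backedUp = readFileSync(`${cfg}.bak`, "utf8");
console.log(backedUp); // → {"model":"old"}
```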
## Settings screen (P)

Tip: press A from the main list to jump directly into API key editing.

| Key | Action |
|---|---|
| ↑ / ↓ | Navigate providers |
| Enter | Edit API key inline |
| Space | Toggle provider enabled / disabled |
| T | Fire a live test ping |
| D | Delete key for this provider |
| ESC | Back to main list |
## Flags

| Flag | Behavior |
|---|---|
| (none) | Interactive TUI |
| --best | Non-interactive: ping 4 rounds, print best model ID to stdout |
| --help / -h | Show help |
### --best scripted usage

```sh
# Print best model ID after ~10 s analysis
frouter --best

# Capture in a variable
MODEL=$(frouter --best)
echo "Best model: $MODEL"
```

Requires at least one API key to be configured. Selection uses a tri-key sort: status=up → lowest avg latency → highest uptime.
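The tri-key selection can be sketched as a comparator (assumed shapes for illustration; the keys and their order are from the README, the code is not frouter's):

```typescript
type Candidate = { id: string; up: boolean; avgMs: number; uptimePct: number };

// status=up first, then lowest average latency, then highest uptime.
function pickBest(candidates: Candidate[]): Candidate | undefined {
  return [...candidates].sort(
    (a, b) =>
      Number(b.up) - Number(a.up) || // up models before down models
      a.avgMs - b.avgMs ||           // then faster average latency
      b.uptimePct - a.uptimePct,     // then higher session uptime
  )[0];
}

const best = pickBest([
  { id: "m1", up: true, avgMs: 800, uptimePct: 99 },
  { id: "m2", up: true, avgMs: 800, uptimePct: 100 },
  { id: "m3", up: false, avgMs: 50, uptimePct: 100 },
]);
console.log(best?.id); // → "m2" (uptime breaks the latency tie)
```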
## Config

Stored at `~/.frouter.json` (permissions 0600).
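Creating a file with owner-only permissions from Node can be sketched as follows (illustration of the 0600 mode only; frouter's actual write code may differ, and the mode applies on file creation, subject to the process umask):

```typescript
import { mkdtempSync, statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Write a demo config readable/writable by the owner only (0600).
const path = join(mkdtempSync(join(tmpdir(), "frouter-cfg-")), ".frouter.json");
writeFileSync(path, JSON.stringify({ apiKeys: {} }, null, 2), { mode: 0o600 });

const mode = statSync(path).mode & 0o777;
console.log(mode.toString(8)); // POSIX: "600"
```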
```json
{
  "apiKeys": {
    "nvidia": "nvapi-xxx",
    "openrouter": "sk-or-xxx"
  },
  "providers": {
    "nvidia": { "enabled": true },
    "openrouter": { "enabled": true }
  }
}
```

## Tier scale (SWE-bench Verified)
| Tier | Score | Description |
|---|---|---|
| S+ | ≥ 70% | Elite frontier |
| S | 60–70% | Excellent |
| A+ | 50–60% | Great |
| A | 40–50% | Good |
| A- | 35–40% | Decent |
| B+ | 30–35% | Average |
| B | 20–30% | Below average |
| C | < 20% | Lightweight / edge |
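The tier scale can be sketched as a threshold function (hypothetical helper; behavior exactly at the cut-offs, e.g. 60%, is an assumption since the table's ranges touch there):

```typescript
// Assumed mapping from SWE-bench Verified score (percent) to capability tier.
function tierFromScore(pct: number): string {
  if (pct >= 70) return "S+";
  if (pct >= 60) return "S";
  if (pct >= 50) return "A+";
  if (pct >= 40) return "A";
  if (pct >= 35) return "A-";
  if (pct >= 30) return "B+";
  if (pct >= 20) return "B";
  return "C";
}

console.log(tierFromScore(72.5)); // → "S+"
console.log(tierFromScore(44));   // → "A"
console.log(tierFromScore(12));   // → "C"
```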
Verdict legend
| Verdict | Trigger |
|---|---|
| 🔥 Overloaded | Last HTTP code = 429 |
| ⚠️ Unstable | Was up, now failing |
| 👻 Not Active | Never responded |
| ⏳ Pending | Waiting for first success |
| 🚀 Perfect | Avg < 400 ms |
| ✅ Normal | Avg < 1000 ms |
| 🐢 Slow | Avg < 3000 ms |
| 🐌 Very Slow | Avg < 5000 ms |
| 💀 Unusable | Avg ≥ 5000 ms |
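The latency-based verdicts band the rolling average in ascending order; a minimal sketch of just that banding (the availability verdicts above depend on session state not modeled here, and the function name is an illustration):

```typescript
// Band the rolling average latency (HTTP 200 only) into a verdict.
// Thresholds are checked ascending, so each band is exclusive of the last.
function latencyVerdict(avgMs: number): string {
  if (avgMs < 400) return "🚀 Perfect";
  if (avgMs < 1000) return "✅ Normal";
  if (avgMs < 3000) return "🐢 Slow";
  if (avgMs < 5000) return "🐌 Very Slow";
  return "💀 Unusable";
}

console.log(latencyVerdict(250));  // → "🚀 Perfect"
console.log(latencyVerdict(2500)); // → "🐢 Slow"
console.log(latencyVerdict(6000)); // → "💀 Unusable"
```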
## Test

```sh
npm run lint
npm test
npm run typecheck

# optional perf workflow
npm run perf:baseline
npm run test:perf
```

## Model catalog auto-sync (GitHub Actions)
frouter includes a scheduled workflow to keep model metadata current:

- Workflow: `.github/workflows/model-catalog-sync.yml`
- Triggers:
  - Daily: `17 3 * * *` (UTC)
  - Weekly AA refresh: `47 4 * * 1` (UTC)
  - Manual: `workflow_dispatch`
- Updates:
  - `model-rankings.json`
  - `model-support.json` (OpenCode support map)
- If changes exist, it opens/updates a PR on `chore/model-catalog-sync`.
- If unresolved new-model tiers remain, the PR gets the `needs-tier-review` label.

Repository secrets used by this workflow:

- `NVIDIA_API_KEY`
- `OPENROUTER_API_KEY`
- `ARTIFICIAL_ANALYSIS_API_KEY`

Local sync commands:

```sh
npm run models:sync
npm run models:sync:apply
```

## Development notes
- TypeScript source of truth: `src/` (app + tests)
- ESLint config is TypeScript: `eslint.config.ts`
- Runtime JS output is generated only in `dist/` via `npm run build`
- Tests run from compiled `dist/tests/` output after build