JSPM


SkillFM local sidecar — a tiny localhost HTTP proxy that lets any AI agent activate and run SkillFM skills in the current conversation without restarting its MCP runtime. Writes ~/.skillfm/local.json for service discovery.

Package Exports

  • @skillfm/local
  • @skillfm/local/mcp-stdio

Readme

@skillfm/local

Tiny localhost sidecar that lets any AI agent activate and use SkillFM in the current conversation, without restarting its MCP runtime.

Why this exists

The MCP spec has a hole: most MCP clients (Claude Code, Claude Desktop, Cursor, Cherry Studio) read their server list once at startup and freeze it. Adding a new MCP server means restarting the runtime and starting a fresh conversation — a dealbreaker for IM-hosted agents (OpenClaw on DingTalk/WeChat) where there is only one persistent chat window.

This package sidesteps the problem: instead of asking the agent to install a new MCP server, it spawns a minimal HTTP server on 127.0.0.1 and writes ~/.skillfm/local.json so the agent can discover it. The agent uses its existing fetch / bash / HTTP capability — no MCP machinery, no restart, no new session.

Prerequisites

  • Node.js ≥ 20. Check with node -v. Quick install:
    # macOS
    brew install node
    # Ubuntu / Debian
    curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash - && sudo apt-get install -y nodejs
    # Windows (PowerShell, Admin)
    winget install OpenJS.NodeJS
  • Hermes-agent users only: also bootstrap the Python MCP SDK inside hermes's venv (hermes v0.10.0 ships without it, so registering an MCP server silently fails with a misleading NameError: StdioServerParameters):
    /root/hermes-agent/venv/bin/python -m ensurepip
    /root/hermes-agent/venv/bin/python -m pip install "mcp>=1.24.0"
  • Or run the one-shot installer which does both of the above plus global install:
    curl -fsSL https://raw.githubusercontent.com/ericm1018/SkillFM/main/install.sh | bash

Install and run

npx -y @skillfm/local@latest start

The package exposes four bin aliases:

  • local — natural npx form (npx invokes the file path directly, never hits bash's local keyword).
  • skillfm-local — same HTTP sidecar entry; use this if npm i -g @skillfm/local, since bash would shadow a global local.
  • skillfm-guard — hook enforcement shim used by harness integrations.
  • skillfm-mcp-stdio — stdio MCP server entry (new in 2.1.0), for stdio-only agent runtimes like Claude Desktop. See stdio MCP entry below.

Explicit equivalent: npx -y --package @skillfm/local@latest skillfm-local start.

Output is a single line of JSON the agent can parse:

{"ok":true,"url":"http://127.0.0.1:19821","pid":12345,"settings_file":"/Users/you/.skillfm/local.json","activated":false,"hint_for_agent":"skillfm-local is running at http://127.0.0.1:19821. Not yet activated — POST http://127.0.0.1:19821/activate/request {\"email\":\"<user email>\"} to start the flow."}

The sidecar binds to loopback only. Kill it with skillfm-local stop or by sending SIGTERM to the pid.
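An agent that spawned the sidecar can pick what it needs out of that one-line JSON with a few lines of, say, Python (a sketch; the field names are taken from the example output above):

```python
import json

# One-line JSON printed by `npx -y @skillfm/local@latest start`;
# the values below are copied from the example output above.
line = ('{"ok":true,"url":"http://127.0.0.1:19821","pid":12345,'
        '"settings_file":"/Users/you/.skillfm/local.json","activated":false,'
        '"hint_for_agent":"skillfm-local is running at http://127.0.0.1:19821."}')

info = json.loads(line)
base_url = info["url"]   # where all later /activate and /brain/run calls go
print(base_url, info["activated"])
```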

Service discovery

Agents discover the sidecar by reading ~/.skillfm/local.json:

{
  "version": 1,
  "url": "http://127.0.0.1:19821",
  "pid": 12345,
  "started_at": "2026-04-15T10:30:00.000Z",
  "api_base_url": "https://api.skillfm.ai/api/v1",
  "package": "@skillfm/local",
  "package_version": "0.1.0"
}

A stale pid (process gone) is cleaned up automatically on the next start.
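An agent can apply the same staleness rule itself when reading the file. A minimal Python sketch (real agents just use whatever file-reading tool they have):

```python
import json
import os
from pathlib import Path

def discover(path=Path.home() / ".skillfm" / "local.json"):
    """Return the sidecar base URL, or None if the file is missing or stale."""
    try:
        info = json.loads(Path(path).read_text())
    except FileNotFoundError:
        return None
    try:
        os.kill(info["pid"], 0)     # signal 0: existence check, sends nothing
    except ProcessLookupError:
        return None                 # stale entry; the next `start` rewrites it
    return info["url"]
```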

Endpoints

All endpoints return JSON and include a hint_for_agent field on errors so the model knows exactly what to do next.

Method  Path               Description
GET     /health            Liveness check
GET     /status            Activation state, pid, started_at, hint_for_agent
POST    /activate/request  Start the email+code bootstrap. Body: {email, locale?}
POST    /activate/verify   Verify the 6-digit code. Body: {bootstrap_id, code}
GET     /skills            List skills available to the activated user
POST    /brain/run         Execute a skill. Body: {skill, input}

On successful /activate/verify, the brain_key is persisted to ~/.skillfm/config.json (read by both skillfm-local start and the skillfm-mcp-stdio bin in this same package). Next start picks it up automatically and reports activated: true.

Agent-native flow in plain words

  1. Agent spawns the sidecar: npx -y @skillfm/local@latest start &
  2. Agent reads ~/.skillfm/local.json to get the URL.
  3. Agent asks the user for their email and POSTs /activate/request.
  4. The backend emails a 6-digit code. Agent asks the user for the code.
  5. Agent POSTs /activate/verify with the code. Backend returns a brain_key, sidecar stores it.
  6. Agent runs SkillFM skills via POST /brain/run with {skill, input}.

All of this happens in a single conversation turn sequence. No MCP client reload, no runtime restart, no new session.
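Steps 3 to 6 can be sketched as a minimal client, assuming Python and the endpoint shapes from the table above. One assumption to verify against a live run: that /activate/request returns a bootstrap_id field (implied by the /activate/verify body).

```python
import json
from urllib.request import Request, urlopen

def post(base: str, path: str, body: dict) -> dict:
    """POST a JSON body to the sidecar and decode the JSON reply."""
    req = Request(base + path, data=json.dumps(body).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())

def activate(base: str, email: str, code_prompt=input) -> dict:
    """Steps 3-5: request a code, ask the user for it, verify it."""
    boot = post(base, "/activate/request", {"email": email})
    code = code_prompt("6-digit code from your email: ")
    # bootstrap_id is assumed to come back from /activate/request
    return post(base, "/activate/verify",
                {"bootstrap_id": boot["bootstrap_id"], "code": code})

def run_skill(base: str, skill: str, skill_input: dict) -> dict:
    """Step 6: execute a skill through the sidecar."""
    return post(base, "/brain/run", {"skill": skill, "input": skill_input})
```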

Security model

  • Loopback only: the HTTP server rejects any connection whose remote address is not 127.0.0.1 / ::1 / ::ffff:127.0.0.1. It is not reachable from the network.
  • No credentials in flight: the sidecar proxies to api.skillfm.ai over HTTPS. brain_key is stored in ~/.skillfm/config.json with 0600 permissions.
  • Zero runtime dependencies: only Node built-ins (http, fs, os, crypto). The entire source is ~400 lines in one file. Audit it yourself: sdk/skillfm-local/src/index.ts.
  • Kill switch: skillfm-local stop, or just kill <pid>. The settings file is removed on exit.
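The first two rules can be sketched in a few lines of Python (the real sidecar implements them in TypeScript; the address list and the 0600 mode come straight from the bullets above):

```python
import json
import os
from pathlib import Path

# Remote addresses the sidecar accepts, per the loopback-only rule above.
LOOPBACK = {"127.0.0.1", "::1", "::ffff:127.0.0.1"}

def allow(remote_addr: str) -> bool:
    """Loopback only: reject anything not arriving from the local host."""
    return remote_addr in LOOPBACK

def store_brain_key(key: str, path: Path) -> None:
    """Persist the brain_key with owner-only (0600) permissions."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"brain_key": key}))
    os.chmod(path, 0o600)
```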

3-beat completion ritual (new in 2.3.0)

The brain_run tool now orchestrates a three-beat finish, timed to how a user's brain registers "this is done, I can share it":

  1. T=0 — ✅ 完成 (总耗时 46s) ("done, total 46 s") with a terminal bell. Triggered by the SSE done event.
  2. T+500ms — character-art quality card appears. The half-second gap turns it from "more output" into "a reward".
  3. T+~1500ms — the agent, reading envelope.final.share_hint.nudge_zh, invites the user to share in its own voice. The brain_run tool description spells this out for Claude Code / Cursor / Cherry Studio.

Each beat is independently skippable: SKILLFM_DISABLE_PROGRESS=1 turns off stderr output, share_hint.available=false (grade C) silently skips beat 3.
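The skip rules combine like this (a Python sketch; the beat names are made up for illustration, and only beats 1 and 2 go to stderr):

```python
def plan_beats(env: dict, share_available: bool) -> list:
    """Which of the three beats actually fire, per the rules above."""
    beats = []
    if env.get("SKILLFM_DISABLE_PROGRESS") != "1":
        beats += ["done_line", "quality_card"]  # T=0 and T+500ms, stderr
    if share_available:        # share_hint.available=false (grade C) skips beat 3
        beats.append("share_nudge")             # T+~1500ms, agent's own voice
    return beats
```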

Live progress & quality card (new in 2.2.0)

Starting with 2.2.0 the stdio MCP server (skillfm-mcp-stdio) automatically subscribes to the skill run's SSE progress stream and writes two things to stderr:

  1. A single-line live progress bar that updates roughly every 4 seconds (⏳ [N/7] 配图 (预计 12 秒)…, i.e. "step N of 7: adding images, est. 12 s"), so the user sees the pipeline is alive between continuation rounds instead of staring at silence.
  2. A character-art quality card at done time — total score, dimension bars, anti-AI badge (S/A/B/C), and evidence ("最赞的一处" / "the best single passage" and "还能再涨分的地方" / "where points are still on the table").

Output goes to stderr on purpose: it stays out of the JSON-RPC channel so agents that pipe stdout into their protocol layer are not affected, while Claude Code / Claude Desktop / Cursor surface stderr to the user.

Disable with SKILLFM_DISABLE_PROGRESS=1 if your terminal does not handle ANSI escapes cleanly (CI log files, dumb terminals). The underlying data is always also present in brain_run's JSON response (quality_report), so no functionality is lost.

stdio MCP entry (new in 2.1.0)

Starting with 2.1.0, @skillfm/local also ships a stdio MCP server entry — skillfm-mcp-stdio — so stdio-only agent runtimes (Claude Desktop, VS Code Claude extension stdio mode, etc.) no longer need a separate npm package.

Claude Desktop config example:

{
  "mcpServers": {
    "skillfm": {
      "command": "npx",
      "args": ["-y", "--package", "@skillfm/local@latest", "skillfm-mcp-stdio"]
    }
  }
}

This exposes the same 7 tools (list_skills, brain_run, continuation_abort, charter_get, charter_ack, subscribe, my_status) + 2 resources (skill-manifest, skill-catalog) that used to live in the now-retired @skillfm/mcp-server. brain_key is read from ~/.skillfm/config.json — the same file written by skillfm-local start, so a single activation serves both transports.

Which entry to use

Agent runtime                                                   Entry
Claude Code / Cursor / Cline / OpenClaw / Aider (HTTP-capable)  skillfm-local start (HTTP sidecar, 127.0.0.1:19821)
Claude Desktop (stdio-only)                                     skillfm-mcp-stdio

Both share ~/.skillfm/config.json — activate once, either path works.

@skillfm/mcp-server retired

@skillfm/mcp-server@3.5.0 (2026-04-20) was the final standalone release of the stdio MCP server package. Starting with @skillfm/local@2.1.0 its code lives here as the skillfm-mcp-stdio bin (same tools, same resources, same ~/.skillfm/config.json activation store). The standalone package will receive an npm deprecate tag pointing at this one.

Migration for Claude Desktop users:

// OLD
{ "command": "npx", "args": ["-y", "@skillfm/mcp-server@latest"] }
// NEW
{ "command": "npx", "args": ["-y", "--package", "@skillfm/local@latest", "skillfm-mcp-stdio"] }

No data migration needed — the existing ~/.skillfm/config.json works as-is.

License

MIT