# @skillfm/local
Tiny localhost sidecar that lets any AI agent activate and use SkillFM in the current conversation, without restarting its MCP runtime.
## Why this exists
The MCP spec has a hole: most MCP clients (Claude Code, Claude Desktop, Cursor, Cherry Studio) read their server list once at startup and freeze it. Adding a new MCP server means restarting the runtime and starting a fresh conversation — a dealbreaker for IM-hosted agents (OpenClaw on DingTalk/WeChat) where there is only one persistent chat window.
This package sidesteps the problem: instead of asking the agent to install a new MCP server, it spawns a minimal HTTP server on `127.0.0.1` and writes `~/.skillfm/local.json` so the agent can discover it. The agent uses its existing `fetch` / `bash` / HTTP capability — no MCP machinery, no restart, no new session.
## Install and run
```sh
npx -y @skillfm/local@latest start
```

The package exposes three bin aliases:

- `local` — the natural npx form; it works because npx invokes the file path directly and never hits bash's `local` keyword.
- `skillfm-local` — same entry; use this if you `npm i -g @skillfm/local`, since bash would shadow a global `local` command with its reserved keyword.
- `skillfm-guard` — hook enforcement shim used by harness integrations.

Explicit equivalent: `npx -y --package @skillfm/local@latest skillfm-local start`.
Output is a single line of JSON the agent can parse:

```json
{"ok":true,"url":"http://127.0.0.1:19821","pid":12345,"settings_file":"/Users/you/.skillfm/local.json","activated":false,"hint_for_agent":"skillfm-local is running at http://127.0.0.1:19821. Not yet activated — POST http://127.0.0.1:19821/activate/request {\"email\":\"<user email>\"} to start the flow."}
```

The sidecar binds to loopback only. Kill it with `skillfm-local stop` or by sending SIGTERM to the pid.
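A minimal sketch of consuming that report in TypeScript; the field names follow the example output above:

```typescript
// Shape of the start command's one-line JSON report (from the example above).
interface StartReport {
  ok: boolean;
  url: string;
  pid: number;
  settings_file: string;
  activated: boolean;
  hint_for_agent: string;
}

// The sidecar prints exactly one JSON line on stdout; trim stray whitespace
// so a trailing newline doesn't break JSON.parse.
function parseStartOutput(stdout: string): StartReport {
  return JSON.parse(stdout.trim()) as StartReport;
}
```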
## Service discovery
Agents discover the sidecar by reading `~/.skillfm/local.json`:
```json
{
  "version": 1,
  "url": "http://127.0.0.1:19821",
  "pid": 12345,
  "started_at": "2026-04-15T10:30:00.000Z",
  "api_base_url": "https://api.skillfm.ai/api/v1",
  "package": "@skillfm/local",
  "package_version": "0.1.0"
}
```

A stale pid (process gone) is cleaned up automatically on the next start.
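A sketch of how an agent could read the settings file and run the same liveness check, using only Node built-ins; `LocalSettings` mirrors the fields shown above:

```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Fields from ~/.skillfm/local.json, as documented above.
interface LocalSettings {
  version: number;
  url: string;
  pid: number;
  started_at: string;
}

// Read the settings file written by `skillfm-local start`.
function readSettings(): LocalSettings {
  const path = join(homedir(), ".skillfm", "local.json");
  return JSON.parse(readFileSync(path, "utf8")) as LocalSettings;
}

// Signal 0 sends nothing but throws if the process no longer exists —
// the standard way to detect a stale pid before trusting the file.
function isAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}
```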
## Endpoints
All endpoints return JSON and include a `hint_for_agent` field on errors so the model knows exactly what to do next.
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Liveness check |
| GET | `/status` | Activation state, `pid`, `started_at`, `hint_for_agent` |
| POST | `/activate/request` | Start the email+code bootstrap. Body: `{email, locale?}` |
| POST | `/activate/verify` | Verify the 6-digit code. Body: `{bootstrap_id, code}` |
| GET | `/skills` | List skills available to the activated user |
| POST | `/brain/run` | Execute a skill. Body: `{skill, input}` |
On successful `/activate/verify`, the `brain_key` is persisted to `~/.skillfm/config.json` (shared with `@skillfm/mcp-server`). The next start picks it up automatically and reports `activated: true`.
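A hedged sketch of probing the read-only endpoints with Node 18+'s global `fetch`; the base URL is the one from the examples, and the error handling leans only on the documented `hint_for_agent` field:

```typescript
const base = "http://127.0.0.1:19821"; // from ~/.skillfm/local.json

// True when the sidecar is up and answering /health.
async function isHealthy(): Promise<boolean> {
  try {
    const res = await fetch(`${base}/health`);
    return res.ok;
  } catch {
    return false; // connection refused => sidecar not running
  }
}

// Fetch /status; surface hint_for_agent on errors so the model knows what to do.
async function getStatus(): Promise<unknown> {
  const res = await fetch(`${base}/status`);
  const body = (await res.json()) as { hint_for_agent?: string };
  if (!res.ok) {
    throw new Error(body.hint_for_agent ?? `status failed: ${res.status}`);
  }
  return body;
}
```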
## Agent-native flow in plain words
- Agent spawns the sidecar: `npx -y @skillfm/local@latest start &`
- Agent reads `~/.skillfm/local.json` to get the URL.
- Agent asks the user for their email and POSTs `/activate/request`.
- The backend emails a 6-digit code. Agent asks the user for the code.
- Agent POSTs `/activate/verify` with the code. Backend returns a `brain_key`; the sidecar stores it.
- Agent runs SkillFM skills via `POST /brain/run` with `{skill, input}`.
All of this happens in a single conversation turn sequence. No MCP client reload, no runtime restart, no new session.
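The steps above can be sketched end to end under a few assumptions: Node 18+ `fetch`, the documented request bodies, that `/activate/request` returns the `bootstrap_id` that `/activate/verify` expects, and a hypothetical `askUser` standing in for however the agent prompts its user:

```typescript
const base = "http://127.0.0.1:19821"; // read from ~/.skillfm/local.json

// Hypothetical prompt helper — a real agent substitutes its own user I/O.
async function askUser(question: string): Promise<string> {
  throw new Error(`wire up agent I/O for: ${question}`);
}

// Tiny POST helper for the sidecar's JSON endpoints.
async function post(path: string, body: unknown): Promise<any> {
  const res = await fetch(base + path, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}

// Email+code bootstrap, then a skill run ("example-skill" is illustrative).
async function activateAndRun(): Promise<unknown> {
  const email = await askUser("Which email should receive the code?");
  const req = await post("/activate/request", { email });
  const code = await askUser("Enter the 6-digit code from the email:");
  await post("/activate/verify", { bootstrap_id: req.bootstrap_id, code });
  return post("/brain/run", { skill: "example-skill", input: {} });
}
```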
## Security model
- Loopback only: the HTTP server rejects any connection whose remote address is not `127.0.0.1` / `::1` / `::ffff:127.0.0.1`. It is not reachable from the network.
- No credentials in flight: the sidecar proxies to `api.skillfm.ai` over HTTPS. The `brain_key` is stored in `~/.skillfm/config.json` with `0600` permissions.
- Zero runtime dependencies: only Node built-ins (`http`, `fs`, `os`, `crypto`). The entire source is ~400 lines in one file. Audit it yourself: `sdk/skillfm-local/src/index.ts`.
- Kill switch: `skillfm-local stop`, or just `kill <pid>`. The settings file is removed on exit.
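The loopback check described above can be sketched like this — a sketch of the technique, not the sidecar's actual source:

```typescript
import { createServer } from "node:http";

// Remote addresses the sidecar accepts; anything else is rejected.
const LOOPBACK = new Set(["127.0.0.1", "::1", "::ffff:127.0.0.1"]);

function isLoopback(remoteAddress: string | undefined): boolean {
  return remoteAddress !== undefined && LOOPBACK.has(remoteAddress);
}

// Guard every request before routing; non-loopback peers get a 403.
const server = createServer((req, res) => {
  if (!isLoopback(req.socket.remoteAddress)) {
    res.writeHead(403, { "content-type": "application/json" });
    res.end(JSON.stringify({ ok: false, error: "loopback only" }));
    return;
  }
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});
```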
## Relationship to @skillfm/mcp-server
This package is the fast-path activation companion to `@skillfm/mcp-server`. They are complementary, not competing:
- `@skillfm/local` (this package) — immediate use in the current conversation via localhost HTTP. Runs in seconds, no config file edits, no runtime restart.
- `@skillfm/mcp-server` — proper MCP integration with schema-validated tools. Installed via `claude mcp add` or equivalent; activates after the next runtime cold start for a permanent upgrade.
Both share `~/.skillfm/config.json`, so once you activate via `@skillfm/local`, the next time you restart your MCP runtime the MCP path picks up the same `brain_key` and the 7 MCP tools just work.
## License
MIT