# ifq

**Intelligent Fast Query — Agent OS for your terminal**
Think it. Ask it. Done.
cli.ifq.ai · npm · GitHub
A tiny AI companion that lives in your terminal — now with built-in tools, MCP integration, and a programmable hooks engine.
No browser. No context switching. No bloat.
Just you, your keyboard, and an answer — instantly.
Persistent chat mode, tool execution, MCP servers, event hooks, skills system, and multi-model deep analysis — all zero-dependency.
## Why ifq?

You're deep in the terminal. You have a question. Don't leave. Don't switch windows. Don't break flow.

Just ask.

```sh
ifq "what's the difference between rebase and merge"
```

That's it. AI answers, right where you are.
## What it does

**Ask anything** — like having a brilliant friend on speed dial.

```sh
ifq ask "explain kubernetes in one sentence"
```

**Stay in chat** — enter the app once, then keep talking without repeating `ifq`.
```sh
ifq
ifq > explain why my curl command returns 403
ifq > now rewrite it with headers
ifq > /switch work
ifq > /model gpt-4o
ifq > /exit
```

**Decode any command** — never Google a cryptic shell command again.
```sh
ifq explain "find . -name '*.log' -mtime +7 -delete"
```

**Generate shell commands** — describe what you want, get the command.

```sh
ifq shell "find all png files larger than 1MB modified in the last week"
```

**Review code** — instant code review from a diff.

```sh
git diff | ifq review
git diff --cached | ifq cr
```

**Execute tools** — run built-in tools directly from chat.
```sh
ifq tools          # list all registered tools
ifq tools --schema # show tool schemas (JSON)
ifq mcp            # show MCP server status
```

**Translate instantly** — Chinese to English. English to Chinese. Auto-detected.

```sh
ifq t "这段代码有什么问题"
```

**Write commit messages** — because you'd rather ship code than describe it.
```sh
git add .
ifq commit
```

**Pipe anything** — ifq plays well with the tools you already love.

```sh
cat error.log | ifq ask "what went wrong"
curl -s api.example.com | ifq ask "summarize this response"
git diff | ifq review
git diff --cached | ifq cr
```

## Get started
Two steps. Thirty seconds.
Docs, examples, and release notes live at cli.ifq.ai.
```sh
npm install -g @peixl/ifq
ifq config --key sk-your-api-key
```

Done. Start asking.
## Secure Agent OS prompt store
ifq now ships the full Agent OS prompt-engineering templates inside the npm package, but stores the deployed runtime copies encrypted at rest in your user directory.
What this means:
```sh
npm install -g @peixl/ifq
ifq
```

On first run, ifq will:
- verify the packaged template manifest
- deploy the Agent OS templates into the secure store
- encrypt them under `~/.ifq/.secure/files/`
- read and decrypt them only when needed at runtime
- preserve your local changes during future updates unless you force overwrite
Useful commands:
```sh
ifq evolve init         # initialize or backfill the Agent OS prompt store
ifq evolve init --force # force refresh packaged templates
ifq evolve doctor       # check encryption store, manifest integrity, plaintext leftovers
```

You can also provide your own master key:

```sh
export IFQ_PROMPT_MASTER_KEY=<64-char-hex-or-base64-32-byte-key>
```

If no environment key is provided, ifq generates a local key file at `~/.ifq/.keys/prompt-master.key`.
## Built-in Tools 🔧
ifq ships with 6 built-in tools that the AI can invoke during conversations, or you can call directly:
| Tool | Description | Approval |
|---|---|---|
| `shell` | Execute shell commands | ✅ Required |
| `read_file` | Read file contents | — |
| `write_file` | Write/create files | ✅ Required |
| `list_dir` | List directory (recursive supported) | — |
| `search_files` | Regex search across files | — |
| `web_fetch` | Fetch URL content | — |
In chat mode:
```
/tools                                       List all registered tools (builtin + custom + MCP)
/tool read_file {"path": "./package.json"}   Execute a tool by name
/exec ls -la                                 Execute shell with approval workflow
```

From command line:
```sh
ifq tools          # List registered tools
ifq tools --schema # Show tool JSON schemas
```

Register custom tools programmatically, or let MCP servers provide them automatically.
## MCP Integration 🔌
Connect to Model Context Protocol servers to extend ifq with external tools and resources.
Configure servers in ~/.ifq/mcp.json:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

In chat mode:
```
/mcp        Show MCP server status
/mcp start  Start all configured servers
/mcp stop   Stop all servers
/mcp tools  List tools from connected MCP servers
```

From command line:

```sh
ifq mcp  # MCP server status
```

MCP tools are automatically merged with built-in tools and exposed to the AI with the `mcp__server__tool` naming convention.
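The merged names are purely mechanical — server and tool joined with double underscores:

```js
// Compose the exposed name for an MCP-provided tool (illustrative).
const mcpToolName = (server, tool) => `mcp__${server}__${tool}`;

console.log(mcpToolName('filesystem', 'read_file')); // mcp__filesystem__read_file
```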
## Hooks Engine ⚡
Programmable event hooks for customizing AI behavior, tool execution, and session management.
6 event types:
| Event | When it fires | Can modify? |
|---|---|---|
| `pre_chat` | Before AI call | ✅ Messages, config |
| `post_chat` | After AI response | — |
| `pre_tool` | Before tool execution | ✅ Args, or block |
| `post_tool` | After tool execution | — |
| `on_error` | On any error | — |
| `on_session` | Session events | — |
Drop hook files in `~/.ifq/hooks/`:

```js
// ~/.ifq/hooks/log-tools.js
module.exports = {
  event: 'post_tool',
  priority: 10,
  handler: ({ tool, args, result, elapsed }) => {
    console.log(`[hook] ${tool} completed in ${elapsed}ms`);
  }
};
```

In chat mode:

```
/hooks  List active hooks with stats
```

## Skills System 📚
Multi-level skill discovery with override priority: cwd → repo → global.
Install skills as `SKILL.md` files under `~/.ifq/skills/{name}/SKILL.md`:

```sh
ls ~/.ifq/skills/
# code-review/  brainstorming/  security-guard/  web-search/ ...
```

Skills are automatically discovered and injected into AI context when relevant.
In chat mode:

```
/skills         List discovered skills
/skill <query>  Search skills
```

From command line:

```sh
ifq skills  # List all skills
```

## Design · IFQ Design Skills 🎨
First-class integration with ifq-design-skills — 12 professional design modes (launch film / keynote / dashboard / white paper / infographic / business card / brand system / ...) + 24 hand-drawn SVG icons + ifq.ai brand signature, all shipped as one skill package.
```sh
ifq design init                      # Sync latest skill from GitHub + install deps
ifq design modes                     # List 12 professional modes
ifq design templates                 # List built-in HTML templates
ifq design new M-08 my-keynote.html  # Fork a mode's default template
ifq design smoke                     # Skill's own integrity + deps check
ifq design "make a launch keynote with a 25-second opening"  # Chat with the synced skill loaded
```

Output: self-contained HTML (React+Babel inline) + optional mp4 / gif / pdf / pptx. Every deliverable carries a quiet "Designed with ifq.ai" signature; user-brand mode downgrades it to a corner colophon.
## Works with everything
OpenAI, Anthropic (Claude), OpenRouter, DeepSeek, Ollama, any OpenAI-compatible API. Provider is auto-detected from URL, or can be set explicitly.
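Auto-detection from the URL can be as simple as hostname matching. A sketch of the idea — the exact heuristic ifq uses is not documented here:

```js
// Guess the provider from the API base URL (illustrative heuristic).
function detectProvider(url) {
  if (url.includes('anthropic.com')) return 'anthropic';
  if (url.includes('openrouter.ai')) return 'openrouter';
  return 'openai'; // everything else is treated as OpenAI-compatible
}
```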
**OpenAI (default)**

```sh
ifq config --key sk-your-openai-key
ifq config --model gpt-4o-mini
```

**Anthropic (Claude)**

```sh
ifq config --key sk-ant-your-key
ifq config --url https://api.anthropic.com/v1
ifq config --model claude-sonnet-4-20250514
```

**OpenRouter**

```sh
ifq config --key sk-or-your-key
ifq config --url https://openrouter.ai/api/v1
ifq config --model anthropic/claude-sonnet-4-20250514
```

**DeepSeek**

```sh
ifq config --url https://api.deepseek.com/v1
ifq config --model deepseek-chat
```

Or use environment variables:
```sh
export IFQ_API_KEY=sk-...
export IFQ_API_URL=https://api.openai.com/v1
export IFQ_MODEL=gpt-4o-mini
export IFQ_PROVIDER=openai  # optional: openai, anthropic, openrouter
```

## Deep Analysis — the killer feature 🧠
Ask one question. Get answers from every model — then a synthesized consensus.
Deep Analysis queries multiple models in parallel (OpenClaw model chain or your configured model), compares their answers, and produces a single expert synthesis. Think of it as a panel of AI experts working for you.
```sh
ifq deep "is Rust or Go better for microservices in 2025?"
```

What happens behind the scenes:
- Your question is sent to up to 4 models simultaneously
- Each model responds independently
- A synthesis pass compares all answers and produces a consensus
- You get individual perspectives + the final expert synthesis
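The four steps above amount to a parallel fan-out followed by a synthesis call. A sketch of that flow — not ifq's actual code, and `askModel` is a stand-in for a single model request:

```js
// Fan one question out to several models, then synthesize a consensus.
async function deepAnalysis(question, models, askModel) {
  // Steps 1-2: each model answers independently, in parallel; one failing
  // model doesn't sink the whole panel.
  const answers = await Promise.all(
    models.map(m => askModel(m, question).catch(e => `(${m} failed: ${e.message})`))
  );
  // Step 3: a synthesis pass compares all answers.
  const panel = answers.map((a, i) => `--- ${models[i]} ---\n${a}`).join('\n');
  const consensus = await askModel(
    models[0],
    `Compare these answers and produce a consensus:\n${panel}`
  );
  // Step 4: individual perspectives + the final synthesis.
  return { answers, consensus };
}
```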
Inside chat mode:

```
/deep explain the trade-offs of event sourcing vs CRUD
```

This is something no single-model tool can do. It's like having a committee of domain experts — instantly.
## Web context injection 🌐
Capture a live web page and feed it directly into your next chat message.
```
/web https://docs.example.com/api-reference
explain how the auth flow works
```

The page snapshot becomes invisible context for your very next question — no copy-pasting, no switching windows.
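The "invisible context" behavior can be modeled as a one-shot buffer: the snapshot is prepended to the next message only, then discarded. A sketch of the mechanism as described, not ifq's implementation:

```js
// One-shot web-context buffer: a captured page rides along with the
// very next message, then is cleared.
function makeWebContext() {
  let pending = null;
  return {
    capture(url, snapshot) { pending = { url, snapshot }; },
    wrap(userMessage) {
      if (!pending) return userMessage;
      const { url, snapshot } = pending;
      pending = null; // applies to the next question only
      return `Page context (${url}):\n${snapshot}\n\n${userMessage}`;
    },
  };
}
```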
## OpenClaw integration
When OpenClaw is installed, ifq automatically detects it and unlocks a full suite of capabilities — including model import, proxy mode, and deep multi-model analysis.
### Model management
One command scans all installed AI CLI tools and imports their models into ifq:
```sh
ifq i                    # Scan & import from all tools
ifq import gemini-flash  # Import a specific model by alias
```

Supported tools:
| Tool | What it extracts |
|---|---|
| OpenClaw | Full model chain, aliases, context windows |
| Claude Code | Active model, API endpoint + credentials |
| Codex (OpenAI) | Model catalog from cache, active model |
| Gemini CLI | Detected via OAuth presence |
| OpenCode | Recently used models + providers |
Models with extractable credentials (e.g. Claude Code's API token) are stored as profiles — switching to them auto-applies the correct endpoint and key.
Browse, switch, and probe models interactively:
```sh
ifq m               # List models with status + interactive selection
ifq m 3             # Switch to model #3
ifq m gemini-flash  # Switch by name/alias
ifq m --probe       # List models + live latency test
```

Inside chat: `/m`, `/m 3`, `/i`, `/m probe` — the same shortcuts work.
Invisible router: When you select an OpenClaw model (e.g. openai-codex/gpt-5.4), ifq automatically routes queries through the OpenClaw agent. When you select a profiled model (e.g. MiniMax-M2.7 from Claude Code), ifq auto-applies the stored credentials — no manual config needed.
### Proxy mode
Don't have an API key? Use OpenClaw as your AI backend:
```sh
ifq config --proxy on  # Route all queries through the OpenClaw agent
```

When proxy mode is active, ifq falls back to OpenClaw's agent for all chat queries — zero API keys needed.
### Command line
```sh
ifq m                                       # List models + interactive switch
ifq m --probe                               # List models + latency test
ifq i                                       # Scan & import all AI tool models
ifq claw                                    # Status & capabilities
ifq claw agent "summarize my last session"  # Talk to the OpenClaw agent
ifq claw models [--probe]                   # Model chain + auth + aliases + optional probe
ifq claw import [model]                     # Import models (all or by name)
ifq claw skills                             # List available skills
ifq claw skill "web scraping"               # Search ClawHub skills
ifq claw memory "project plan"              # Search semantic memory
ifq claw browser https://example.com        # Navigate browser
ifq claw snapshot                           # Browser page snapshot
ifq claw send <target> <msg>                # Send via channels
ifq claw docs "mcp setup"                   # Search OpenClaw docs
ifq claw cron                               # List scheduled jobs
ifq deep "question"                         # Multi-model deep analysis
```

### Inside chat mode
```
/m [N|name|probe]     List models / switch by number or name / latency probe
/i [model]            Scan & import AI models (or one by name)
/d <question>         Deep analysis shortcut
/claw                 OpenClaw status
/agent <msg>          Run an agent turn
/deep <question>      Multi-model deep analysis
/web <url>            Capture web page for next chat context
/tools                List registered tools (builtin + custom + MCP)
/tool <n> [args]      Execute a tool by name
/exec <cmd>           Execute shell with approval
/mcp                  MCP server status
/mcp start            Start all MCP servers
/mcp stop             Stop all MCP servers
/mcp tools            List MCP-provided tools
/hooks                List active hooks
/skills               List skills
/skill <query>        Search ClawHub
/memory <query>       Search memory
/browser <url>        Navigate browser
/snapshot             Browser page snapshot
/send <target> <msg>  Send message
/docs <query>         Search docs
/cron                 List cron jobs
/status               Full OpenClaw status
```

When OpenClaw is connected, the AI system prompt is automatically enriched with available capabilities for smarter context-aware responses.
## Design principles
- Zero dependencies. Nothing to break. Nothing to audit.
- Multi-provider. OpenAI, Anthropic, OpenRouter — auto-detected from URL.
- Streams by default. Answers appear as they're written.
- Built-in tools. Shell, file I/O, search, web fetch — with approval workflow.
- MCP native. Connect any MCP server for instant tool expansion.
- Programmable hooks. Customize AI behavior with event-driven hooks.
- Multi-level skills. cwd → repo → global discovery with override priority.
- 30s connection timeout. No more hanging on bad networks.
- Pipes welcome. Compose with grep, cat, curl — whatever.
- Your key, your model. No middleman. No data collection.
- One config file. `~/.ifqrc`, permission 600. That's it.
- Persistent chat memory. The latest 10 messages stay verbatim; older turns are compressed into key memory points.
- Performance-first memory. Older turns are compacted in batches, not on every single round.
- Graceful error recovery. API failures don't crash the chat — you stay in the REPL.
- Config cached. Config file is only re-read when it changes on disk.
- Response timing. See how long each response takes.
## Quick reference

| Command | What it does |
|---|---|
| `ifq` | Enter persistent chat app |
| `ifq --session <name>` | Enter a named chat session |
| `ifq chat [question]` | Enter chat app, optionally with a first message |
| `ifq chat --session <name> [question]` | Enter a named chat session |
| `ifq sessions` | List local chat sessions |
| `ifq delete <name>` | Delete a session |
| `ifq "question"` | Ask anything (shorthand) |
| `ifq ask <question>` | Ask with explicit subcommand |
| `ifq deep <question>` | Multi-model deep analysis |
| `ifq explain <cmd>` | Explain a shell command |
| `ifq shell <desc>` | Generate a shell command |
| `ifq translate <text>` | Translate (zh↔en) |
| `ifq t <text>` | Quick translate |
| `ifq commit` | Generate commit message |
| `ifq review` | Code review from diff |
| `ifq tools` | List registered tools |
| `ifq tools --schema` | Show tool JSON schemas |
| `ifq mcp` | MCP server status |
| `ifq m` | List / switch models (interactive) |
| `ifq m --probe` | Model list + live latency test |
| `ifq i` | Scan & import all AI tool models |
| `ifq claw import [model]` | Import OpenClaw model into ifq |
| `ifq config --show` | View current config |
| `ifq config --proxy on` | Enable OpenClaw proxy mode |
| `ifq help` | Show help |
## Chat memory
Interactive chat stores session data locally under ~/.ifq/sessions/<session>.json.
- The most recent 10 messages are kept exactly as-is.
- Older messages are queued and compacted into a rolling summary of goals, facts, decisions, preferences, and open questions.
- The first overflow compacts immediately to seed memory; after that, compaction happens in batches for better performance.
- Memory compaction failures are silently retried on the next turn — your messages are never lost.
- Use `--session <name>` to separate work, ops, research, or personal contexts.
- Use `/clear` inside chat to reset the current session memory.
- Use `/session` inside chat to show the current session and model.
- Use `/sessions` or `ifq sessions` to list local sessions and recent activity.
- Use `/switch <name>` to switch sessions without leaving chat.
- Use `/model <name>` to change model on the fly.
- Use `/retry` to re-run the last message.
- Use `/delete <name>` or `ifq delete <name>` to remove a session.
- Use `/summary` inside chat to inspect the current memory summary.
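The verbatim-window-plus-summary policy above boils down to a split like this (a sketch of the policy, not ifq's code):

```js
// Keep the newest `keep` messages verbatim; everything older is queued
// for batch compaction into the rolling summary.
function splitForCompaction(messages, keep = 10) {
  return {
    recent: messages.slice(-keep),     // stays in the prompt as-is
    overflow: messages.slice(0, -keep) // summarized in batches later
  };
}
```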
## The philosophy
Software should feel light. It should solve real problems in the fewest keystrokes. It should respect your time, your privacy, and your flow.
ifq is built for people who think fast, work fast, and want AI that keeps up.
Beautiful tools make beautiful work.
Made with care. Zero dependencies. Open source.
Tools · MCP · Hooks · Skills · Deep Analysis
cli.ifq.ai