Package Exports
- titan-agent
- titan-agent/dist/cli/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (titan-agent) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
TITAN — The Intelligent Task Automation Network
An autonomous AI agent framework that actually does things. Sub-agent orchestration, goal-driven autopilot, deliberative reasoning, sandbox code execution, 9 channels, 21 providers, 95 tools, 3,323 tests. Pure JavaScript. No native compilation. No, seriously.
Quick Start • What It Does • Architecture • Autonomy • Mission Control • Channels • Providers • Voice • Mesh • Sandbox • CLI
WARNING — EXPERIMENTAL SOFTWARE TITAN is experimental, actively developed software. It can execute shell commands, modify files, access the network, and take autonomous actions on your system. Use at your own risk. Think of it less as "software you install" and more as "a very motivated intern with root access." The author and contributors provide this software "as is" without warranty of any kind. By installing or running TITAN, you accept full responsibility for any consequences, including but not limited to data loss, system instability, unintended actions, API charges, or security issues. Always review TITAN's configuration, run it in supervised mode first, and never grant it access to systems or credentials you cannot afford to lose. See LICENSE for the full legal terms.
Quick Start
Requirements: Node.js >= 20, an API key, and a healthy sense of adventure.
npm install -g titan-agent
titan onboard # Interactive setup — pick a provider, paste your API key, give TITAN a soul
titan gateway # Launch Mission Control at http://localhost:48420

That's it. Three commands from zero to a running autonomous agent with a dashboard.
From Source
git clone https://github.com/Djtony707/TITAN.git && cd TITAN
npm install
npm run dev:gateway # Start in dev mode

First Contact
titan agent -m "Hello" # Terminal chat
titan agent -m "What's on my calendar?" # Uses tools automatically

Or open http://localhost:48420 and use the WebChat panel. Same agent, nicer UI.
What TITAN Does
TITAN is not a chatbot wrapper. It's a framework for building AI agents that take real actions. Here's what that looks like in practice:
"Research competitors and draft a report" TITAN enters deliberation mode. It decomposes the task into subtasks, spawns a browser sub-agent to research each competitor in parallel, synthesizes findings, writes a structured report, and saves it to disk. You approve the plan before execution starts.
"Monitor Upwork for Node.js contracts and send me the best ones" TITAN creates a recurring goal. Every cycle, it searches freelance platforms, scores matches against your profile, drafts proposals, and queues them for your review. The autopilot handles scheduling.
"Set up a content pipeline for my blog"
TITAN researches SEO keywords, generates outlines, writes drafts, and schedules publishing. Each step uses specialized tools — web_search for research, content_outline for structure, content_publish for output.
"What did I spend on APIs this month?" TITAN queries the income tracker, pulls cost data from provider logs, runs the numbers in a sandbox Python script, and returns a summary with a chart.
"Deploy this to my mini PC"
TITAN SSHs into the target machine via the mesh network, pulls the latest code, builds the Docker container, and reports back. All through the shell tool with mesh routing.
No custom code required for any of the above. TITAN ships with 36 built-in skills exposing 95 tools. When it needs a capability it doesn't have, it can generate a new skill on the fly.
Architecture at a Glance
CLI Interface
onboard | gateway | agent | mesh | doctor | config | autopilot
|
Gateway Server
HTTP + WebSocket Control Plane
Express REST API | Dashboard | WS Broadcast
|
+-----------------+-----------------+
| | |
Multi-Agent Channel Security
Router (1-5) Adapters (9) Sandbox + Pairing
| Discord Shield + Vault
Agent Core Telegram Audit Log
Session Mgmt Slack
Reflection WhatsApp Browsing
Sub-Agents Teams Browser Pool
Orchestrator Google Chat Stagehand
Goals Matrix
Initiative Signal Mesh
| WebChat mDNS + Tailscale
+----+----+--------+ Peer Discovery
| | | WS Transport
Skills LLM Providers Voice
36 files 21 providers Chatterbox TTS
95 tools (4 native + Whisper STT
| 17 compat)
Memory + Learning
Graph + Relationship
Briefings

Full architecture details: ARCHITECTURE.md
Autonomy System
TITAN v2026.6.7 introduced a complete autonomy overhaul. This isn't just tool calling — it's self-directed goal pursuit with reflection, delegation, and initiative.
Reflection
Every N tool-call rounds, TITAN pauses and asks itself: "Am I making progress? Should I continue, change approach, or stop?" Uses the fast model alias for cheap, quick self-checks. Prevents runaway loops and wasted tokens.
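The "reflect every N rounds" cadence can be sketched as a small wrapper. This is an illustrative sketch, not TITAN's actual API: `makeReflector` and the `assess` callback (standing in for the fast-model self-check) are invented names.

```javascript
// Hypothetical sketch: fire a self-check every N completed tool-call rounds.
// `assess` stands in for the cheap fast-model call described above.
function makeReflector(everyN, assess) {
  let round = 0;
  return function onRoundComplete(state) {
    round += 1;
    if (round % everyN !== 0) return { action: "continue" };
    // Reflection round: continue, change approach, or stop?
    return assess(state);
  };
}

// Example policy: stop once no progress was made in the window.
const reflect = makeReflector(3, (state) =>
  state.progress > 0 ? { action: "continue" } : { action: "stop" }
);

reflect({ progress: 1 }); // round 1: no reflection, continues
reflect({ progress: 1 }); // round 2: no reflection, continues
reflect({ progress: 0 }); // round 3: reflection fires and returns stop
```

The key property is that off-cycle rounds cost nothing; only every Nth round spends tokens on the self-check.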
Sub-Agents
The spawn_agent tool delegates tasks to isolated sub-agents with constrained toolsets. Four templates:
| Template | Tools | Use Case |
|---|---|---|
| explorer | Web search, fetch, browse | Research and information gathering |
| coder | Shell, filesystem, edit | Code generation and modification |
| browser | Browser pool, Stagehand | Interactive web automation |
| analyst | Memory, data analysis | Analysis and synthesis |
Max depth = 1 (no sub-sub-agents). Each sub-agent gets its own session and returns results to the parent.
Orchestrator
When a task involves multiple independent subtasks, the orchestrator:
- Analyzes the request for delegation potential
- Breaks it into parallel and sequential assignments
- Spawns sub-agents for each
- Runs independent tasks in parallel, dependent tasks sequentially
- Synthesizes all results into a unified response
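The parallel-vs-sequential scheduling described above can be sketched in a few lines. This is a minimal sketch assuming each assignment is `{ id, dependsOn, run }`; it is not TITAN's actual orchestrator.

```javascript
// Sketch: run every assignment whose dependencies are satisfied in
// parallel, then repeat until all are done. Assignment shape is assumed.
async function orchestrate(assignments) {
  const results = {};
  let remaining = [...assignments];
  while (remaining.length > 0) {
    // Independent (or unblocked) tasks form a wave that runs in parallel.
    const ready = remaining.filter((a) =>
      (a.dependsOn ?? []).every((d) => d in results)
    );
    if (ready.length === 0) throw new Error("dependency cycle");
    await Promise.all(
      ready.map(async (a) => { results[a.id] = await a.run(results); })
    );
    // Dependent tasks become eligible in the next wave.
    remaining = remaining.filter((a) => !(a.id in results));
  }
  return results; // the parent synthesizes these into one response
}
```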
Goals
Persistent goals with subtasks, scheduling, budget tracking, and progress monitoring. Goals drive the autopilot system — each cycle picks the next actionable subtask.
You: "I want to grow my Twitter following to 10K"
TITAN creates a goal with subtasks:
1. Analyze current posting patterns [complete]
2. Research trending topics in your niche [complete]
3. Draft 5 tweets for review [in_progress]
4. Schedule optimal posting times [pending]
5. Monitor engagement metrics weekly [recurring]

Tools: goal_create, goal_list, goal_update, goal_delete
Self-Initiative
After completing a goal subtask, TITAN checks for the next ready task. In autonomous mode, it starts working immediately. In supervised mode, it proposes the next action. Rate-limited to prevent runaway execution.
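The rate limiting mentioned above can be sketched as a sliding-window gate on self-initiated actions. `makeInitiativeGate` and its parameters are invented for illustration; TITAN's actual limiter may differ.

```javascript
// Illustrative sliding-window limiter: allow at most `maxStarts`
// self-initiated actions per `windowMs`. The `now` parameter is
// injectable for testing.
function makeInitiativeGate(maxStarts, windowMs, now = Date.now) {
  let starts = [];
  return function mayStart() {
    const t = now();
    starts = starts.filter((s) => t - s < windowMs);
    if (starts.length >= maxStarts) return false; // propose instead of acting
    starts.push(t);
    return true;
  };
}
```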
Autonomy Modes
| Mode | Behavior |
|---|---|
| autonomous | Full auto — executes all tools, auto-triggers deliberation, self-initiates goal subtasks |
| supervised | Asks before dangerous operations, proposes next steps (default) |
| locked | Asks permission for every tool call |
titan config set autonomy.mode supervised

Deliberative Reasoning
When TITAN detects an ambitious request, it enters a multi-stage loop:
- Analyze — Examines the request from multiple angles using high thinking
- Plan — Generates a structured, dependency-aware execution plan
- Approve — Presents the plan for your review
- Execute — Runs each step, reporting progress via WebSocket
- Adapt — Re-analyzes and adjusts if a step fails
/plan figure out how to monetize this homelab # Force deliberation
/plan status # Check progress
/plan cancel # Abort

Mission Control
A 12-panel dark-mode dashboard at http://localhost:48420.
| Panel | What It Does |
|---|---|
| Overview | System health, uptime, memory usage, model info, cost stats |
| WebChat | Real-time chat with your agent via WebSocket |
| Agents | Spawn, stop, and monitor up to 5 agent instances |
| Settings | 6-tab live config: AI, Providers, Channels, Security, Gateway, Profile |
| Channels | Connection status for all 9 channel adapters |
| Skills | 36 installed skills with per-skill enable/disable toggles |
| Sessions | Active sessions with message counts and history |
| Learning | Tool success rates and knowledge base stats |
| Autopilot | Schedule, status, history, and run control |
| Security | Audit log viewer and DM pairing management |
| Logs | Color-coded real-time log viewer with filtering |
| Mesh | Peer management — approve, reject, revoke connections |
| Memory Graph | Visual force-directed graph of entities and relationships |
Settings includes a SOUL.md live editor (Profile tab) and Google OAuth connection manager (Providers tab). All changes take effect without restarting the gateway.
Channels
TITAN connects to 9 messaging platforms. All support the DM pairing security system.
| Channel | Library | Notes |
|---|---|---|
| Discord | discord.js | Full bot integration |
| Telegram | grammY | Bot API with webhook support |
| Slack | @slack/bolt | Workspace app integration |
| WhatsApp | Baileys | No official API needed |
| Microsoft Teams | botbuilder | Enterprise integration |
| Google Chat | Webhooks | Real webhook-based adapter |
| Matrix | matrix-js-sdk | Decentralized chat support |
| Signal | signal-cli REST | Privacy-focused messaging |
| WebChat | Built-in WebSocket | Included in Mission Control |
Configure via ~/.titan/titan.json or the Mission Control Settings panel.
Providers
21 AI providers. Add your API key and go. TITAN routes, fails over, and load-balances automatically.
| Provider | Type | Notable Models |
|---|---|---|
| Anthropic | Native | Claude Opus 4, Sonnet 4, Haiku 4, 3.5 Sonnet/Haiku |
| OpenAI | Native | GPT-4o, GPT-4o-mini, o1, o3-mini |
| Native | Gemini 2.5 Pro/Flash, 2.0 Flash | |
| Ollama | Native | Any locally installed model |
| Groq | OpenAI-compat | LLaMA 3.3 70B, Mixtral, DeepSeek-R1 Distill |
| Mistral | OpenAI-compat | Mistral Large, Codestral, Nemo |
| OpenRouter | OpenAI-compat | 290+ models from all providers |
| Together | OpenAI-compat | LLaMA 3.3, DeepSeek-R1, Qwen 2.5 |
| Fireworks | OpenAI-compat | LLaMA 3.3, Mixtral, Qwen 3 |
| xAI | OpenAI-compat | Grok-3, Grok-3-fast, Grok-3-mini |
| DeepSeek | OpenAI-compat | DeepSeek Chat, DeepSeek Reasoner |
| Cerebras | OpenAI-compat | LLaMA 3.3, Qwen 3 |
| Cohere | OpenAI-compat | Command-R+, Command-R |
| Perplexity | OpenAI-compat | Sonar, Sonar Pro, Sonar Reasoning |
| Venice AI | OpenAI-compat | LLaMA 3.3 70B, DeepSeek-R1 671B |
| AWS Bedrock | OpenAI-compat | Claude, Titan Text, LLaMA 3 (via proxy) |
| LiteLLM | OpenAI-compat | Any model via universal proxy |
| Azure OpenAI | OpenAI-compat | GPT-4o, GPT-4o-mini, o1 |
| DeepInfra | OpenAI-compat | LLaMA 3.3, Mixtral 8x22B, Qwen 2.5 |
| SambaNova | OpenAI-compat | LLaMA 3.3, DeepSeek-R1 Distill |
| Kimi | OpenAI-compat | Kimi K2.5 |
4 native providers with full API integration. 17 OpenAI-compatible providers that work through a unified adapter. All 21 support automatic failover — your agent stays up even when OpenAI doesn't.
titan model --discover # Live-detect all available models
titan model --set anthropic/claude-sonnet-4-20250514
titan model --alias fast=openai/gpt-4o-mini # Create shortcuts

Built-in aliases: fast, smart, cheap, reasoning, local — fully configurable.
Running locally? See docs/MODELS.md for GPU-tiered Ollama model recommendations.
Voice Pipeline
TITAN supports text-to-speech and speech-to-text through the voice skill. The real story is what happens when you point it at dedicated hardware.
The current setup runs on a machine with an RTX 5090 (32GB VRAM):
- Chatterbox TTS — An open-source text-to-speech model that clones voices from a 5-second audio sample. TITAN's default voice is a Robin Williams clone. Yes, really. A 5-second clip of Robin Williams doing improv, fed into Chatterbox, produces eerily convincing speech synthesis. The ethical implications are left as an exercise for the reader.
- Whisper STT — OpenAI's Whisper model running locally for speech-to-text. No cloud API calls, no transcription costs, no audio leaving your network.
The voice tools (generate_speech, transcribe_audio) work with any provider, but local inference on a GPU means sub-second response times and zero per-request costs.
titan agent -m "Read me the latest news headlines" # TTS output
titan agent --voice # Voice input mode

Mesh Networking
Connect up to 5 machines so they share AI models and API keys. One machine has a GPU? The others can use its local models. One machine has an OpenAI key? Everyone benefits.
TITAN finds your other machines automatically via mDNS on your local network, or via Tailscale if they're remote.
Quick Setup
Machine 1 (your GPU desktop):
titan mesh --init # Generates a secret: TITAN-a1b2-c3d4-e5f6
titan gateway

Machine 2 (your laptop):
titan mesh --join "TITAN-a1b2-c3d4-e5f6"
titan gateway

Machine 2 discovers Machine 1 automatically. You approve the connection (or enable --auto-approve), and both machines now share all available models.
How Routing Works
- Local first — If your machine has the model, it runs locally
- Mesh fallback — If not, TITAN checks connected peers
- Provider failover — Last resort, tries other local providers
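The three-step routing order above can be sketched as a resolver. The shapes of `local`, `peers`, and `providers` here are assumptions for illustration only.

```javascript
// Sketch of local-first routing with mesh and provider fallback.
// Data shapes are invented; TITAN's internal routing may differ.
function routeModel(model, { local, peers, providers }) {
  if (local.includes(model)) return { via: "local" };           // 1. local first
  const peer = peers.find((p) => p.models.includes(model));
  if (peer) return { via: "mesh", peer: peer.id };              // 2. mesh fallback
  const alt = providers.find((p) => p.models.length > 0);
  if (alt) return { via: "failover", provider: alt.name };      // 3. provider failover
  throw new Error(`no route for ${model}`);
}
```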
Approval System
New peers are quarantined until approved. Manage from CLI or the Mesh dashboard panel:
titan mesh --pending # See who's waiting
titan mesh --approve <nodeId> # Allow connection
titan mesh --reject <nodeId> # Deny connection
titan mesh --revoke <nodeId> # Disconnect an approved peer
titan mesh --auto-approve # Trust all peers with your secret

Approved peers persist to ~/.titan/approved-peers.json and reconnect automatically on restart.
Remote Machines (Tailscale)
{
  "mesh": {
    "enabled": true,
    "tailscale": true
  }
}

Or add static peers manually: titan mesh --add "192.168.1.100:48420"
Sandbox Code Execution
When the LLM needs to run complex logic — loops, data processing, batch operations — it writes Python and executes it in an isolated Docker container. Tool calls from inside the sandbox route through a secure HTTP bridge back to TITAN.
Traditional approach: 50 individual tool calls x LLM round-trips = bloated context + slow
Sandbox approach: 1 Python script with a for-loop = fast, accurate, minimal tokens

The LLM writes code like this:
from tools import web_search, read_file
results = []
for topic in ["AI agents", "LLM tools", "code sandbox"]:
    data = web_search(query=topic)
    results.append(data)
print(f"Found {len(results)} results")

Security: Containers run with --cap-drop=ALL, --read-only, --security-opt=no-new-privileges, memory/CPU limits, and a session-token authenticated bridge. Dangerous tools (shell, exec, process) are blocked inside the sandbox.
{
  "sandbox": {
    "enabled": true,
    "timeoutMs": 60000,
    "memoryMB": 512,
    "deniedTools": ["shell", "exec", "code_exec", "process", "apply_patch"]
  }
}

Built-in Tools
36 skills exposing 95 tools. All individually toggleable from Mission Control.
| Category | Tools |
|---|---|
| Shell & Process | shell, exec, process (list, kill, spawn, poll, log) |
| Filesystem | read_file, write_file, edit_file, list_dir, apply_patch |
| Web | web_search, web_fetch, web_read, web_act, browser (CDP), browse_url, browser_search, browser_auto_nav |
| Intelligence | auto_generate_skill, analyze_image, transcribe_audio, generate_speech |
| GitHub | github_repos, github_issues, github_prs, github_commits, github_files |
| Email | email_send, email_search, email_read, email_list (Gmail OAuth + SMTP) |
| Computer Use | screenshot, mouse_click, mouse_move, keyboard_type, keyboard_press, screen_read |
| Data & Documents | data_analysis, csv_parse, csv_stats, pdf_read, pdf_info |
| Smart Home | ha_devices, ha_control, ha_status |
| Image Generation | generate_image, edit_image |
| Weather | weather (real-time via wttr.in, no API key) |
| Automation | cron, webhook |
| Memory | memory, switch_model, graph_remember, graph_search, graph_entities, graph_recall |
| Sandbox | code_exec (Python/JS in isolated Docker with tool bridge) |
| Meta | tool_search (discover tools on demand), plan_task (deliberative planning) |
| Sessions | sessions_list, sessions_history, sessions_send, sessions_close |
| Income Tracking | income_log, income_summary, income_list, income_goal |
| Freelance | freelance_search, freelance_match, freelance_draft, freelance_track |
| Content | content_research, content_outline, content_publish, content_schedule |
| Lead Gen | lead_scan, lead_score, lead_queue, lead_report |
| Goals | goal_create, goal_list, goal_update, goal_delete |
| X/Twitter | x_post, x_reply, x_search, x_review |
| Sub-Agents | spawn_agent (delegate to isolated sub-agents) |
Tool Search — Compact Mode
TITAN doesn't dump all 95 tool schemas into every LLM call. It sends only 8 core tools plus tool_search. When the LLM needs a capability, it calls tool_search("email") and gets the relevant tools added dynamically.
Before: 95 tools x ~50 tokens each = ~4,750 input tokens
After: 10 core tools + tool_search = ~700 input tokens (85% reduction)

Works with all 21 providers. Especially beneficial for smaller local models where context window is precious.
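The token math above checks out, using the README's own estimates:

```javascript
// 95 tool schemas at roughly 50 tokens each, versus a compact core set.
const before = 95 * 50;                           // 4750 tokens
const after = 700;                                // ~700 tokens
const reduction = Math.round((1 - after / before) * 100);
console.log(before, reduction);                   // prints 4750 85
```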
CLI Reference
| Command | Description |
|---|---|
| titan onboard | Interactive setup wizard (profile, soul, provider, autonomy) |
| titan gateway | Start Mission Control + API server |
| titan agent -m "..." | Send a message from the terminal |
| titan send --to ch:id -m "..." | Message a specific channel |
| titan model --list | Show all configured models |
| titan model --discover | Live-detect available models |
| titan model --set <model> | Switch the active model |
| titan model --alias <name>=<model> | Create a model alias |
| titan agents | Multi-agent management (spawn, stop, list) |
| titan mesh --init | Initialize mesh networking |
| titan mesh --status | View peers, pending, and shared models |
| titan mesh --pending | Show peers waiting for approval |
| titan mesh --approve <id> | Approve a discovered peer |
| titan mesh --reject <id> | Reject a pending peer |
| titan mesh --revoke <id> | Disconnect an approved peer |
| titan mesh --auto-approve | Toggle auto-approve mode |
| titan skills | List installed skills |
| titan skills --create "..." | Generate a skill with AI |
| titan pairing | Manage DM access control |
| titan doctor | System diagnostics |
| titan doctor --fix | Auto-fix detected issues |
| titan vault | Manage encrypted secrets vault |
| titan config [key] | View/edit configuration |
| titan graphiti --init | Initialize knowledge graph |
| titan graphiti --stats | Graph statistics |
| titan mcp | Manage MCP servers |
| titan recipe --list | List and run saved recipes |
| titan monitor | Manage proactive monitors |
| titan autopilot --init | Create AUTOPILOT.md checklist |
| titan autopilot --run | Trigger immediate autopilot run |
| titan autopilot --enable | Toggle autopilot scheduling |
| titan autopilot --status | View schedule and last run info |
| titan update | Update to latest version |
Custom Skills
Create new tools in seconds. Drop files into ~/.titan/skills/:
YAML (Easiest)
# ~/.titan/skills/word_count.yaml
name: word_count
description: Count words, lines, and characters in a file
parameters:
  filePath:
    type: string
    description: Path to the file
    required: true
script: |
  const fs = require('fs');
  const content = fs.readFileSync(args.filePath, 'utf-8');
  const lines = content.split('\n').length;
  const words = content.split(/\s+/).filter(Boolean).length;
  return 'Lines: ' + lines + ', Words: ' + words + ', Characters: ' + content.length;

JavaScript
// ~/.titan/skills/hello.js
export default {
  name: 'hello',
  description: 'Greet someone by name',
  parameters: {
    type: 'object',
    properties: {
      name: { type: 'string', description: 'Name to greet' }
    },
    required: ['name']
  },
  execute: async (args) => `Hello, ${args.name}!`
};

AI-Generated
titan skills --create "a tool that converts CSV files to JSON"

TITAN writes, compiles, and hot-loads the skill instantly.
Security
Defense-in-depth, not "we'll add auth later."
| Layer | What It Does |
|---|---|
| Prompt Injection Shield | Two-layer detection — heuristic engine + keyword density analysis |
| DM Pairing | New senders quarantined until approved |
| Sandbox | Docker isolation with --cap-drop=ALL, read-only filesystem, memory limits |
| Secrets Vault | AES-256-GCM encrypted credential store with PBKDF2 key derivation |
| Audit Log | HMAC-SHA256 chained tamper-evident JSONL trail |
| E2E Encryption | AES-256-GCM per-session encryption, keys held in memory only |
| Tool Allowlists | Configurable per-agent tool permissions |
| Network Allowlists | Configurable outbound connection boundaries |
| Autonomy Gates | Risk classification with human-in-the-loop approval |
Memory Systems
| System | Storage | Purpose |
|---|---|---|
| Episodic | ~/.titan/titan-data.json | Conversation history per session |
| Learning | ~/.titan/knowledge.json | Tool success/failure rates, error patterns, resolutions |
| Relationship | ~/.titan/profile.json | User preferences, work context, personal continuity |
| Temporal Graph | ~/.titan/graph.json | Entities, episodes, relationships — searchable across time |
The temporal graph is pure TypeScript — no Neo4j, no Docker, no external services. Entities are extracted from conversations automatically, linked with timestamps, and injected into every system prompt.
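The timestamped-edges idea behind the temporal graph can be sketched in plain JavaScript. The `remember`/`recall` API here is invented for illustration; the real graph tools are graph_remember, graph_search, graph_entities, and graph_recall.

```javascript
// Toy temporal graph: entities linked by timestamped edges, queryable
// "as of" a point in time. No external services, matching the README's
// pure-JS/TS claim. API names are illustrative.
function makeGraph() {
  const edges = [];
  return {
    remember(subject, relation, object, at = Date.now()) {
      edges.push({ subject, relation, object, at });
    },
    recall(subject, { before = Infinity } = {}) {
      // Only edges known at or before the cutoff are returned.
      return edges.filter((e) => e.subject === subject && e.at <= before);
    },
  };
}

const g = makeGraph();
g.remember("user", "works_on", "TITAN", 1000);
g.remember("user", "prefers", "dark mode", 2000);
g.recall("user", { before: 1500 }); // returns only the works_on edge
```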
Development
npm run build # tsup ESM production build
npm run test # vitest (3,323 tests across 94 files)
npm run test:coverage # ~82% line coverage
npm run ci # typecheck + full test suite
npm run typecheck # tsc --noEmit
npm run lint # ESLint
npm run dev:gateway # Dev mode with tsx hot-reload

See CONTRIBUTING.md for the full development guide and ARCHITECTURE.md for the codebase layout.
Roadmap
Recently Shipped (v2026.6.x)
- v2026.6.7: Autonomy Overhaul — reflection, sub-agents, orchestrator, goals, initiative, shared browser pool, Stagehand browser automation, X/Twitter integration, deliberation fallback fix. 3,323 tests across 94 files.
Previously Shipped (v2026.5.x)
- v2026.5.17: GitHub-hosted Skills Marketplace, dynamic model dropdown, 3,171 tests
- v2026.5.14: Income Automation Skills (16 new tools), autopilot playbooks, per-skill toggles
- v2026.5.13: Kimi K2.5 provider, web_read + web_act tools
- v2026.5.11: Deliberative Reasoning, Gmail OAuth, Soul Onboarding, 2,860+ tests
- v2026.5.9: Small model tool reduction, config validation, stall detector
- v2026.5.4: Secrets vault, audit log, self-healing doctor, 3 new providers
Upcoming
- Vector Search & RAG — SQLite FTS5 + embeddings for semantic memory
- Team Mode & RBAC — Role-based access control for multi-user deployments
Contributing
See CONTRIBUTING.md for the full guide.
git clone https://github.com/Djtony707/TITAN.git && cd TITAN
npm install
npm run dev:gateway

We don't bite. Unless you submit a PR that adds is-even as a dependency.
Acknowledgments
Architectural Inspiration
- OpenClaw by Peter Steinberger — TITAN's architecture, CLI surface, tool signatures, workspace layout (AGENTS.md, SOUL.md, TOOLS.md), and DM pairing system are inspired by OpenClaw. Licensed under MIT.
Temporal Knowledge Graph
- Graphiti by Zep AI — Inspired the episodic memory and temporal graph approach. Licensed under Apache 2.0. Research paper: arXiv:2501.13956.
Browser Automation
- Skyvern by Skyvern AI — AI browser automation via vision + LLMs. Licensed under AGPL-3.0 (separate service).
Open-Source Libraries
Express, Zod, Commander.js, ws, Chalk, Ora, Boxen, Inquirer, dotenv, node-cron, uuid, Playwright, bonjour-service, tsup, Vitest, TypeScript.
Support
If TITAN saves you time, consider supporting its development.
Disclaimer
TITAN IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS, COPYRIGHT HOLDERS, OR CONTRIBUTORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
TITAN is an autonomous agent that can execute arbitrary commands, modify your filesystem, make network requests, and incur API costs. The author accepts no responsibility or liability for any actions taken by the software. You are solely responsible for reviewing and approving all actions taken by TITAN on your systems.
License
MIT License — Copyright (c) 2026 Tony Elliott
Created by Tony Elliott (Djtony707)