JSPM

aidevops 3.5.128 (MIT)

AI DevOps Framework - AI-assisted development workflows, code quality, and deployment automation

Package Exports

    This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (aidevops) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

    Readme

    AI DevOps Framework

    aidevops.sh — An OpenCode plugin and AI operations platform for launching and managing development, business, marketing, and creative projects. 13 specialist AI agents handle the automatable work across every domain so your time is preserved for real-world discovery and decisions that AI cannot yet reach.

    Recommended setup: OpenCode + Claude models (Anthropic). All features, agents, and workflows are designed and tested for OpenCode first. Claude models (haiku, sonnet, opus) deliver the best results across all agent tiers.

    "Scope a mission to redesign the landing pages — break it into milestones, dispatch workers in parallel, validate each milestone, and track budget across the whole project"

    One conversation, autonomous project delivery — with enterprise-level security & quality-control.

    Founded by Marcus Quinn on 9th November 2025 to help anyone level-up their AI & Open-Source game.

    The Philosophy

    Maximum value for your time and money. aidevops is built on these principles:

    • Autonomous orchestration - An AI supervisor runs every 2 minutes, dispatching parallel workers, merging PRs, detecting stuck processes, and advancing multi-day missions — no human babysitting required
    • Multi-domain agents - 13 specialist agents (code, automation, SEO, marketing, content, legal, sales, research, video, business, accounts, social media, health) with 900+ subagents loaded on demand
    • Multi-model safety - High-stakes operations (force push, production deploy, data migration) are verified by a second cross-provider model before execution — different providers have different failure modes, so correlated hallucinations are rare
    • Resource efficiency - Cost-aware model routing (local → haiku → flash → sonnet → pro → opus), project-type bundles that auto-configure quality gates and model tiers, budget tracking with burn-rate analysis
    • Self-healing - When something breaks, diagnose the root cause, create tasks, and fix it. Every error is a live test case for a permanent solution
    • Self-improving - When patterns of failure or inefficiency emerge, improve the framework itself. Session mining extracts learnings from past sessions automatically
    • Gap awareness - Every session is an opportunity to identify what's missing — gaps in automation, documentation, coverage, or processes — and create tasks to fill them
    • Git-first workflow - Protected branches, PR reviews, quality gates before merge. Sane vibe-coding through structure
    • Parallel agents - Multiple AI sessions running full Ralph loops on separate branches via git worktrees
    • Progressive discovery - /slash commands and @subagent mentions load knowledge into context only when needed
    • Open-source ready - Contribute to any project the same way you work on your own. Clone a repo, develop solutions to issues locally, and submit pull requests — the same full-loop workflow works everywhere

    The result: an AI operations platform that manages projects across every business domain — absorbing everything automatable so you can focus on what matters.
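    As a minimal illustration of the cost-aware tier ladder above (a hypothetical sketch only; the real aidevops router also weighs budget, bundles, and fallbacks):

```shell
# Hypothetical sketch of cost-aware routing: map task complexity to the
# cheapest adequate model tier. Tier names mirror the ladder above; the
# actual routing logic in aidevops is more sophisticated.
route_model() {
  case "$1" in
    triage)    echo "haiku"  ;;   # cheap, fast classification
    implement) echo "sonnet" ;;   # day-to-day coding work
    reason)    echo "opus"   ;;   # complex multi-step reasoning
    *)         echo "local"  ;;   # free offline default
  esac
}

route_model triage   # prints "haiku"
```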

    Built on proven patterns: aidevops implements industry-standard agent design patterns - including multi-layer action spaces, context isolation, and iterative execution loops.

    Why This Framework?

    Beyond single-task AI. Most AI coding tools handle one conversation, one repo, one task at a time. aidevops manages your entire operation — dispatching parallel AI agents across multiple repos, routing tasks to domain-specialist agents, and running autonomously for days on multi-milestone projects.

    What makes it different:

    • Autonomous supervisor - AI pulse runs every 2 minutes: merges ready PRs, dispatches workers, kills stuck processes, advances missions, triages quality findings — no human in the loop
    • Cross-domain intelligence - 13 agents spanning code, automation, business, marketing, legal, sales, content, video, research, SEO, social media, health, and accounts — each with domain expertise and specialist subagents
    • Multi-model safety - Destructive operations verified by a second AI model from a different provider before execution
    • 30+ service integrations - Hosting, Git platforms, DNS, security, monitoring, deployment, payments, communications
    • Mission orchestration - Multi-day autonomous projects broken into milestones with validation, budget tracking, and automatic advancement
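    The multi-model safety gate above can be sketched as follows (hypothetical illustration; `second_model_review` is a stub standing in for a real call to a different provider, and the actual aidevops flow may differ):

```shell
# Hypothetical gate: a second model from a different provider must answer
# APPROVE before a destructive command runs. second_model_review is a
# stub standing in for a real cross-provider API/CLI call.
second_model_review() {
  # In practice this would send "$1" to a different provider for review.
  echo "APPROVE"
}

verify_and_run() {
  verdict=$(second_model_review "Destructive operation requested: $1")
  if [ "$verdict" = "APPROVE" ]; then
    echo "running: $1"
  else
    echo "blocked: $1"
  fi
}
```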


    Quick Reference

    • Purpose: AI-assisted DevOps automation framework
    • Install: npm install -g aidevops && aidevops update
    • Entry: aidevops CLI, ~/.aidevops/agents/AGENTS.md
    • Stack: Bash scripts, TypeScript (Bun), MCP servers

    Key Commands

    • aidevops init - Initialize in any project
    • aidevops update - Update framework
    • aidevops auto-update - Automatic update polling (enable/disable/status)
    • aidevops secret - Manage secrets (gopass encrypted, AI-safe)
    • aidevops security - Full security assessment (posture, secrets, supply chain)
    • /onboarding - Interactive setup wizard (in AI assistant)

    Agent Structure

    • 13 primary agents (Build+, Automate, SEO, Marketing, etc.) with specialist @subagents on demand
    • 900+ subagent markdown files organized by domain
    • 390+ helper scripts in .agents/scripts/
    • 69 slash commands for common workflows

    Enterprise-Grade Quality & Security

    Comprehensive DevOps framework with tried-and-tested service integrations, popular and trusted MCP servers, and enterprise-grade code quality monitoring and recommendations.

    Security Notice

    This framework provides agentic AI assistants with powerful infrastructure access. Use responsibly.

    Capabilities: Execute commands, access credentials, modify infrastructure, interact with APIs
    Your responsibility: Use trusted AI providers, rotate credentials regularly, monitor activity

    Security Commands

    aidevops security              # Run ALL checks (posture + hygiene + supply chain)
    aidevops security posture      # Interactive security posture setup (gopass, gh auth, SSH)
    aidevops security status       # Combined posture + hygiene summary
    aidevops security scan         # Secret hygiene & supply chain scan only
    aidevops security scan-pth     # Python .pth file audit (supply chain attack vector)
    aidevops security scan-secrets # Plaintext credential locations only
    aidevops security scan-deps    # Unpinned dependency check
    aidevops security check        # Per-repo security posture assessment
    aidevops security dismiss <id> # Dismiss a security advisory after taking action

    Running aidevops security with no arguments is the single command that covers everything — user security posture, plaintext secret detection, supply chain IoC scanning, and active advisories.

    Security advisories are delivered via aidevops update and shown in the session greeting until dismissed. The scanner never exposes secret values — only file locations and key names. All remediation commands should be run in a separate terminal, not inside AI chat sessions.
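    Value-free reporting can be illustrated with a small sketch (hypothetical; not the actual scanner): report file locations and key names, dropping everything after the `=` so values are never printed:

```shell
# Hypothetical sketch: list credential-looking assignments by file, line,
# and key name only. cut drops everything after "=", so secret values
# are never emitted.
scan_keys() {
  grep -rnE '^(export )?[A-Z0-9_]*(TOKEN|SECRET|API_KEY)[A-Z0-9_]*=' "$1" 2>/dev/null |
    cut -d= -f1
}
```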

    Supply chain hardening: All Python dependencies are pinned to exact versions (==) to prevent malicious package upgrades. The .pth file auditor detects known supply chain attack indicators (e.g., the LiteLLM March 2026 PyPI compromise).
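    The pinning rule can be checked mechanically; a minimal sketch (not the actual `scan-deps` implementation):

```shell
# Hypothetical sketch of an unpinned-dependency check: flag any
# requirements.txt line that is not pinned with "==".
# Comments and blank lines are ignored.
unpinned_deps() {
  grep -vE '^[[:space:]]*(#|$)' "$1" | grep -v '=='
}
```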

    Quick Start

    Installation Options

    npm (recommended - verified provenance):

    npm install -g aidevops && aidevops update

    Note: npm suppresses postinstall output. The && aidevops update deploys agents to ~/.aidevops/agents/. The CLI will remind you if agents need updating.

    Bun (fast alternative):

    bun install -g aidevops && aidevops update

    Homebrew (macOS/Linux):

    brew install marcusquinn/tap/aidevops && aidevops update

    Direct from source (aidevops.sh):

    bash <(curl -fsSL https://aidevops.sh/install)

    Manual (git clone):

    git clone https://github.com/marcusquinn/aidevops.git ~/Git/aidevops
    ~/Git/aidevops/setup.sh

    That's it! The setup script will:

    • Clone/update the repo to ~/Git/aidevops
    • Deploy agents to ~/.aidevops/agents/
    • Install the aidevops CLI command
    • Configure your AI assistants automatically
    • Offer to install Oh My Zsh (optional, opt-in) for enhanced shell experience
    • Guide you through recommended tools (Tabby, Zed, Git CLIs)
    • Ensure all PATH and alias changes work in both bash and zsh

    New users: Start OpenCode and type /onboarding to configure your services interactively. OpenCode is the recommended tool for aidevops - all features, agents, and workflows are designed and tested for it first. The onboarding wizard will:

    • Explain what aidevops can do
    • Ask about your work to give personalized recommendations
    • Show which services are configured vs need setup
    • Guide you through setting up each service with links and commands

    After installation, use the CLI:

    aidevops status           # Check what's installed
    aidevops doctor           # Detect duplicate installs and PATH conflicts
    aidevops update           # Update framework + check registered projects
    aidevops auto-update      # Manage automatic update polling (every 10 min)
    aidevops init             # Initialize aidevops in any project
    aidevops features         # List available features
    aidevops repos            # List/add/remove registered projects
    aidevops detect           # Scan for unregistered aidevops projects
    aidevops upgrade-planning # Upgrade TODO.md/PLANS.md to latest templates
    aidevops update-tools     # Check and update installed tools
    aidevops uninstall        # Remove aidevops

    Project tracking: When you run aidevops init, the project is automatically registered in ~/.config/aidevops/repos.json. Running aidevops update checks all registered projects for version updates.

    Use aidevops in Any Project

    Initialize aidevops features in any git repository:

    cd ~/your-project
    aidevops init                         # Enable all features
    aidevops init planning                # Enable only planning
    aidevops init planning,time-tracking  # Enable specific features

    This creates:

    • .aidevops.json - Configuration with enabled features
    • .agents symlink → ~/.aidevops/agents/
    • TODO.md - Quick task tracking with time estimates
    • todo/PLANS.md - Complex execution plans
    • .beads/ - Task graph database (if beads enabled)

    Available features: planning, git-workflow, code-quality, time-tracking, beads

    Upgrade Planning Files

    When aidevops templates evolve, upgrade existing projects to the latest format:

    aidevops upgrade-planning           # Interactive upgrade with backup
    aidevops upgrade-planning --dry-run # Preview changes without modifying
    aidevops upgrade-planning --force   # Skip confirmation prompt

    This preserves your existing tasks while adding TOON-enhanced parsing, dependency tracking, and better structure.

    Automatic detection: aidevops update now scans all registered projects for outdated planning templates (comparing TOON meta version numbers) and offers to upgrade them in-place with backups.

    Task Graph Visualization with Beads

    Beads provides task dependency tracking and graph visualization:

    aidevops init beads              # Enable beads (includes planning)

    Task Dependencies:

    - [ ] t001 First task
    - [ ] t002 Second task blocked-by:t001
    - [ ] t001.1 Subtask of t001

    | Syntax          | Meaning                         |
    |-----------------|---------------------------------|
    | blocked-by:t001 | Task waits for t001 to complete |
    | blocks:t002     | This task blocks t002           |
    | t001.1          | Subtask of t001 (hierarchical)  |
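    A minimal sketch of how a ready computation could parse this syntax (illustrative only; not the real `/ready` or `bd ready` implementation):

```shell
# Hypothetical sketch: a task is "ready" when it is open and either has
# no blocked-by: tag or its blocker is already checked off.
ready_tasks() {
  todo="$1"
  grep '^- \[ \]' "$todo" | while IFS= read -r line; do
    blocker=$(printf '%s\n' "$line" | grep -o 'blocked-by:t[0-9.]*' | cut -d: -f2)
    if [ -z "$blocker" ] || grep -q "^- \[x\] $blocker " "$todo"; then
      printf '%s\n' "$line"
    fi
  done
}
```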

    Commands:

    | Command       | Purpose                                           |
    |---------------|---------------------------------------------------|
    | /ready        | Show tasks with no open blockers                  |
    | /list-verify  | List verification queue (pending, passed, failed) |
    | /sync-beads   | Sync TODO.md/PLANS.md with Beads graph            |
    | bd list       | List all tasks in Beads                           |
    | bd ready      | Show ready tasks (Beads CLI)                      |
    | bd graph <id> | Show dependency graph for an issue                |

    Architecture: aidevops markdown files (TODO.md, PLANS.md) are the source of truth. Beads syncs from them for visualization.

    Optional Viewers: Beyond the bd CLI, there are community viewers for richer visualization:

    • beads_viewer (Python TUI) - PageRank, critical path analysis
    • beads-ui (Web) - Live updates in browser
    • bdui (React/Ink TUI) - Modern terminal UI
    • perles (Rust TUI) - BQL query language

    See .agents/tools/task-management/beads.md for complete documentation and installation commands.

    Your AI assistant now has agentic access to 30+ service integrations.

    OpenCode Anthropic OAuth (Built-in)

    OpenCode includes Anthropic OAuth authentication natively — no API key needed. OAuth is covered by your Claude Pro/Max subscription at zero additional cost.

    Authenticate via the pool (recommended):

    aidevops model-accounts-pool add anthropic
    # Opens browser OAuth flow — no API key required
    # Restart OpenCode after adding

    Or via the OpenCode TUI:

    Open OpenCode → Ctrl+A → Select Anthropic → Login with Claude.ai → follow the browser OAuth flow.

    Note: opencode auth login prompts for an API key, not OAuth. Use the commands above for subscription-based OAuth access.

    Benefits:

    • Zero cost for Claude Pro/Max subscribers (covered by subscription)
    • Automatic token refresh — no manual re-authentication needed
    • Multiple accounts — add more accounts to the pool for automatic rotation when one hits rate limits
    • Beta features enabled — extended thinking modes and latest features

    Cursor Models via Pool Proxy

    Access Cursor Pro models (Composer 2, Claude 4.6 Opus/Sonnet, GPT-5.x, Gemini 3.1 Pro) in OpenCode through a local gRPC proxy that translates OpenAI-compatible requests to Cursor's protobuf/HTTP2 protocol.

    Setup:

    # Add your Cursor account to the pool (reads from local Cursor IDE)
    oauth-pool-helper.sh add cursor
    
    # Restart OpenCode — Cursor models appear in Ctrl+T model picker

    How it works:

    • Reads Cursor credentials from the local IDE state database
    • Starts a gRPC proxy that speaks Cursor's native protocol (not the cursor-agent CLI)
    • Discovers available models via gRPC and registers them as an OpenCode provider
    • Supports true streaming, tool calling, and automatic token refresh
    • Falls back gracefully if no Cursor accounts are in the pool

    Benefits:

    • Zero additional cost for Cursor Pro subscribers
    • True streaming — responses stream as they arrive (not buffered)
    • Tool calling — Cursor's native MCP tool protocol works through the proxy
    • Model discovery — automatically detects all models available to your account
    • Pool rotation — multiple accounts with LRU rotation and 429 failover
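    The rotation policy can be pictured with a small sketch (hypothetical; the real pool stores its state differently): pick the least-recently-used account whose cooldown has expired, so a 429 simply sets a future cooldown and the next pick skips that account:

```shell
# Hypothetical pool file, one account per line:
#   email  last_used_epoch  cooldown_until_epoch
next_account() {
  now=$(date +%s)
  awk -v now="$now" '
    $3 <= now {                      # skip accounts still cooling down
      if (best == "" || $2 < t) { best = $1; t = $2 }   # least recently used
    }
    END { print best }' "$1"
}
```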

    Google AI Pool (Gemini CLI / Vertex AI)

    Use your Google AI Pro, AI Ultra, or Workspace subscription for Gemini models. Tokens are injected as ADC bearer tokens that Gemini CLI, Vertex AI SDK, and the Gemini API pick up automatically.

    Setup:

    # Add your Google account to the pool (browser OAuth flow)
    aidevops model-accounts-pool add google
    
    # Restart OpenCode — token is injected as GOOGLE_OAUTH_ACCESS_TOKEN

    Supported plans:

    • Google AI Pro (~$25/mo) — daily Gemini CLI limits
    • Google AI Ultra (~$65/mo) — higher daily limits
    • Google Workspace with Gemini add-on — enterprise daily limits

    Isolation guarantee: Google auth failures never affect Anthropic/OpenAI/Cursor providers. A Google 429 or auth error only puts the Google pool into cooldown.

    GitHub AI Agent Integration

    Enable AI-powered issue resolution directly from GitHub. Comment /oc fix this on any issue and the AI creates a branch, implements the fix, and opens a PR.

    Security-first design - The workflow includes:

    • Trusted users only (OWNER/MEMBER/COLLABORATOR)
    • ai-approved label required on issues before AI processing
    • Prompt injection pattern detection
    • Audit logging of all invocations
    • 15-minute timeout and rate limiting

    Quick setup:

    # 1. Install the OpenCode GitHub App
    # Visit: https://github.com/apps/opencode-agent
    
    # 2. Add API key secret
    # Repository → Settings → Secrets → ANTHROPIC_API_KEY
    
    # 3. Create required labels
    gh label create "ai-approved" --color "0E8A16" --description "Issue approved for AI agent"
    gh label create "security-review" --color "D93F0B" --description "Requires security review"

    The secure workflow is included at .github/workflows/opencode-agent.yml.

    Usage:

    | Context                        | Command                     | Result                  |
    |--------------------------------|-----------------------------|-------------------------|
    | Issue (with ai-approved label) | /oc fix this                | Creates branch + PR     |
    | Issue                          | /oc explain this            | AI analyzes and replies |
    | PR                             | /oc review this PR          | Code review feedback    |
    | PR Files tab                   | /oc add error handling here | Line-specific fix       |

    See .agents/tools/git/opencode-github-security.md for the full security documentation.

    Supported AI tool: OpenCode is the recommended and tested AI coding tool for aidevops. All features, agents, and workflows are designed and tested for OpenCode first. We recommend Claude models (Anthropic) for the best results across all agent tiers -- haiku for triage, sonnet for implementation, opus for complex reasoning.

    Recommended stack:

    • OpenCode - The recommended AI coding agent. Powerful agentic TUI/CLI with native MCP support, Tab-based agent switching, LSP integration, plugin ecosystem, and excellent DX. All aidevops features are designed and tested for OpenCode first.
    • OpenCode Zen - Free tier of OpenCode with included models. Start working with AI straight away at no cost -- no API keys or subscriptions required.
    • Claude (Anthropic) - Our most-used and tested model provider. Claude haiku, sonnet, and opus deliver the best results across all aidevops agent tiers and workflows. Recommended for users who want the highest quality output.
    • Tabby - Recommended terminal. Colour-coded Profiles per project/repo, auto-syncs tab title with git repo/branch.
    • Zed - Recommended editor. High-performance with AI integration (use with the OpenCode Agent Extension).

    Troubleshooting Auth

    If you see "Anthropic Key Missing", "OpenAI Key Missing", or the model stops responding, run these commands from any terminal — no working model session required.

    Step 1 — Check pool health

    aidevops model-accounts-pool status       # counts: available / rate-limited / auth-error
    aidevops model-accounts-pool check        # live token validity test per account

    Step 2 — Fix based on what you see

    | Symptom                          | Command                                               |
    |----------------------------------|-------------------------------------------------------|
    | Account shows rate-limited       | aidevops model-accounts-pool rotate anthropic         |
    | All accounts in cooldown         | aidevops model-accounts-pool reset-cooldowns          |
    | Account shows auth-error         | aidevops model-accounts-pool add anthropic (re-auth)  |
    | Pool is empty (no accounts)      | aidevops model-accounts-pool add anthropic            |
    | Recently re-authed, still broken | aidevops model-accounts-pool assign-pending anthropic |
    | Google Gemini CLI rate-limited   | aidevops model-accounts-pool rotate google            |
    | Google token expired             | aidevops model-accounts-pool add google (re-auth)     |

    Step 3 — If still broken, re-add the account

    aidevops model-accounts-pool add anthropic     # Claude Pro/Max — opens browser OAuth
    aidevops model-accounts-pool add openai        # ChatGPT Plus/Pro
    aidevops model-accounts-pool add cursor        # Cursor Pro (reads from local IDE)
    aidevops model-accounts-pool add google        # Google AI Pro/Ultra/Workspace — browser OAuth
    aidevops model-accounts-pool import claude-cli # Import from existing Claude CLI auth

    Restart OpenCode after any add, rotate, or reset-cooldowns to pick up the new credentials.

    Full command reference

    aidevops model-accounts-pool status            # Pool health at a glance
    aidevops model-accounts-pool list              # Per-account detail + expiry
    aidevops model-accounts-pool check             # Live API validity test
    aidevops model-accounts-pool rotate [provider] # Switch to next available account NOW
    aidevops model-accounts-pool reset-cooldowns   # Clear all rate-limit cooldowns
    aidevops model-accounts-pool assign-pending <p> # Assign stranded pending token
    aidevops model-accounts-pool remove <p> <email> # Remove an account

    Note: reset-cooldowns clears cooldowns in the pool file. If OpenCode is already running, the in-memory token endpoint cooldown is only cleared when OpenCode restarts or when you use the /model-accounts-pool reset-cooldowns slash command inside an active session.

    If you prefer guided help: Open OpenCode with a free model (OpenCode Zen includes free models that don't require any API key or subscription) and run the auth troubleshooting agent by typing:

    @auth-troubleshooting

    The agent contains the full recovery flow and symptom table. Free models work fine for this — no paid subscription needed.

    Terminal Tab Title Sync

    Your terminal tab/window title automatically shows repo/branch context when working in git repositories. This helps identify which codebase and branch you're working on across multiple terminal sessions.

    Supported terminals: Tabby, cmux, iTerm2, Kitty, Alacritty, WezTerm, Hyper, and most xterm-compatible terminals.

    How it works: The pre-edit-check.sh script's primary role is enforcing git workflow protection (blocking edits on main/master branches). As a secondary, non-blocking action, it updates the terminal title via escape sequences. No configuration needed - it's automatic.

    Example format: {repo}/{branch-type}/{description}

    See .agents/tools/terminal/terminal-title.md for customization options.
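    The escape-sequence mechanism itself is standard xterm behaviour; a minimal sketch (the exact logic in pre-edit-check.sh may differ):

```shell
# Set the terminal tab/window title with the xterm OSC 0 sequence
# (ESC ] 0 ; title BEL). Works in most xterm-compatible terminals.
set_tab_title() {
  printf '\033]0;%s\007' "$1"
}

# Derive "repo/branch" from the current git checkout, if inside one
repo=$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null)
branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
if [ -n "$repo" ] && [ -n "$branch" ]; then
  set_tab_title "${repo}/${branch}"
fi
```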

    Companion tool:

    • claude-code CLI - Called from within OpenCode for sub-tasks and headless dispatch

    Collaborator compatibility: Projects initialized with aidevops init include pointer files (.cursorrules, .windsurfrules, etc.) that reference AGENTS.md, helping collaborators using other editors find project context. aidevops does not install into or configure those tools.

    Repo courtesy files: aidevops init scaffolds standard repo files if they don't exist: README.md, LICENCE (MIT), CHANGELOG.md, CONTRIBUTING.md, SECURITY.md, CODE_OF_CONDUCT.md. Author name and email are auto-detected from git config. Existing files are never overwritten.

    Core Capabilities

    AI-First Infrastructure Management:

    • SSH server access, remote command execution, API integrations
    • DNS management, application deployment, email monitoring
    • Git platform management, domain purchasing, setup automation
    • WordPress management, credential security, code auditing

    Autonomous Orchestration:

    • Pulse supervisor - Autonomous AI supervisor runs every 2 minutes via launchd — merges ready PRs, dispatches workers, kills stuck processes, detects orphaned PRs, syncs TODO state with GitHub, triages quality findings, and advances missions. No human in the loop
    • Missions - Multi-day autonomous projects: /mission scopes a high-level goal into milestones and features. The pulse dispatches workers, validates milestones, tracks budget, and advances through the project automatically (mission-dashboard-helper.sh)
    • Multi-model verification - Destructive operations (force push, production deploy, data migration) are verified by a second AI model from a different provider before execution. Different providers have different failure modes, so correlated hallucinations are rare
    • Supervisor - SQLite state machine dispatches tasks to parallel AI agents with retry cycles, batch management, and cron scheduling
    • Runners - Named headless agent instances with persistent identity, instructions, and memory namespaces
    • /runners command - Batch dispatch from task IDs, PR URLs, or descriptions with concurrency control and progress monitoring
    • Mailbox - SQLite-backed inter-agent messaging for coordination across parallel sessions
    • Worktree isolation - Each agent works on its own branch in a separate directory, no merge conflicts
    • Budget tracking - Append-only cost log (budget-tracker-helper.sh) with burn-rate analysis and /budget-analysis command for model routing decisions
    • Observability - LLM request capture plugin (observability.mjs) for cost tracking, performance analysis, and debugging
    • Rate limits - Per-provider rate limit configuration (rate-limits.json) with throttle-risk warnings

    Project Intelligence:

    • Bundles - Project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. 7 built-in bundles (web-app, library, cli-tool, content-site, infrastructure, agent, schema) with auto-detection from marker files (bundle-helper.sh)
    • TTSR rules - Soft rule engine (ttsr-rule-loader.sh) with .agents/rules/ directory for AI output correction (e.g., no-edit-on-main, no-glob-for-discovery)
    • Cross-review - /cross-review dispatches the same prompt to multiple AI models in parallel, diffs results, and optionally auto-scores via a judge model
    • Local models - Run AI models locally via llama.cpp for free, private, offline inference (local-model-helper.sh) with HuggingFace GGUF model management
    • Tech stack lookup - /tech-stack detects technology stacks of URLs or finds sites using specific technologies (Wappalyzer, httpx, nuclei, BuiltWith)
    • IP reputation - ip-reputation-helper.sh checks IP addresses against multiple reputation databases (Spamhaus, ProxyCheck, AbuseIPDB) before VPS purchase or deployment

    Conversational Memory & Entity System:

    • Entity memory - Cross-channel relationship continuity (entity-helper.sh): people, agents, and services tracked across Matrix, SimpleX, email, and CLI with versioned profiles
    • Conversational memory - Per-conversation context management (conversation-helper.sh): idle detection, immutable summaries, tone profile extraction
    • Three-layer architecture - Layer 0 (immutable raw log), Layer 1 (tactical summaries), Layer 2 (strategic entity profiles) in shared SQLite

    Communications:

    • SimpleX bot - Channel-agnostic gateway with SimpleX Chat as first adapter for AI agent dispatch (simplex-bot/)
    • Matterbridge - Multi-platform chat bridge connecting 20+ platforms including Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP (matterbridge-helper.sh)
    • Localdev - Local development environment manager with dnsmasq, Traefik, mkcert for production-like .local domains with HTTPS (localdev-helper.sh)

    MCP Toolkit:

    • MCPorter - Discover, call, compose, and generate CLIs/typed clients for MCP servers (mcporter npm package)
    • OpenAPI Search - Search and explore any OpenAPI specification via MCP (zero install, Cloudflare Worker)
    • Cloudflare Code Mode - Full Cloudflare API (2,500+ endpoints) via 2 tools in ~1,000 tokens

    Unified Interface:

    • Standardized commands across all providers
    • Automated SSH configuration and multi-account support for all services
    • Security-first design with comprehensive logging, code quality reviews, and continual feedback-based improvement

    Quality Control & Monitoring:

    • Multi-Platform Analysis: SonarCloud, CodeFactor, Codacy, CodeRabbit, Qlty, Gemini Code Assist, Snyk
    • Performance Auditing: PageSpeed Insights, Lighthouse, WebPageTest, Core Web Vitals (/performance command)
    • SEO Toolchain: 40+ SEO subagents including Semrush, Ahrefs, ContentKing, Screaming Frog, Bing Webmaster Tools, Rich Results Test, programmatic SEO, analytics tracking, schema validation, content analysis, keyword mapping, and AI readiness
    • SEO Debugging: Open Graph validation, favicon checker, social preview testing
    • Email Deliverability: SPF/DKIM/DMARC/MX validation, blacklist checking
    • Uptime Monitoring: Updown.io integration for website and SSL monitoring

    Imported Skills

    aidevops includes curated skills imported from external sources. Skills support automatic update tracking:

    | Skill               | Source                        | Description                                                                                   |
    |---------------------|-------------------------------|-----------------------------------------------------------------------------------------------|
    | cloudflare-platform | dmmulroy/cloudflare-skill     | 60 Cloudflare products: Workers, Pages, D1, R2, KV, Durable Objects, AI, networking, security |
    | heygen              | heygen-com/skills             | AI avatar video creation API: avatars, voices, video generation, streaming, webhooks          |
    | remotion            | remotion-dev/skills           | Programmatic video creation with React, animations, rendering                                 |
    | video-prompt-design | snubroot/Veo-3-Meta-Framework | AI video prompt engineering: 7-component meta prompt framework for Veo 3                      |
    | animejs             | animejs.com                   | JavaScript animation library patterns and API (via Context7)                                  |
    | caldav-calendar     | ClawdHub                      | CalDAV calendar sync via vdirsyncer + khal (iCloud, Google, Fastmail, Nextcloud)              |
    | proxmox-full        | ClawdHub                      | Complete Proxmox VE hypervisor management via REST API                                        |

    CLI Commands:

    aidevops skill add <owner/repo>    # Import a skill from GitHub
    aidevops skill add clawdhub:<slug> # Import a skill from ClawdHub
    aidevops skill list                # List imported skills
    aidevops skill check               # Check for upstream updates
    aidevops skill update [name]       # Update specific or all skills
    aidevops skill scan [name]         # Security scan skills (Cisco Skill Scanner)
    aidevops skill remove <name>       # Remove an imported skill

    Skills are registered in ~/.aidevops/agents/configs/skill-sources.json with upstream tracking for update detection.

    Security Scanning:

    Imported skills are automatically security-scanned using Cisco Skill Scanner when installed. Scanning runs on both initial import and updates -- pulling a new version of a skill triggers the same security checks as the first import. CRITICAL/HIGH findings block the operation; MEDIUM/LOW findings warn but allow. Telemetry is disabled - no data is sent to third parties.

    When a VirusTotal API key is configured (aidevops secret set VIRUSTOTAL_MARCUSQUINN), an advisory second layer scans file hashes against 70+ AV engines and checks domains/URLs referenced in skill content. VT scans are non-blocking -- the Cisco scanner remains the security gate.

    | Scenario                                    | Security scan runs? | CRITICAL/HIGH blocks? |
    |---------------------------------------------|---------------------|-----------------------|
    | aidevops skill add <source>                 | Yes                 | Yes                   |
    | aidevops skill update [name]                | Yes                 | Yes                   |
    | aidevops skill add <source> --force         | Yes                 | Yes                   |
    | aidevops skill add <source> --skip-security | Yes (reports only)  | No (warns)            |
    | aidevops skill scan [name]                  | Yes (standalone)    | Report only           |

    The --force flag only controls file overwrite behavior (replacing an existing skill without prompting). To bypass security blocking, use --skip-security explicitly -- this separation ensures that routine updates and re-imports never silently skip security checks.

    Scan results are logged to .agents/SKILL-SCAN-RESULTS.md automatically on each batch scan and skill import, providing a transparent audit trail of security posture over time.

    Browse community skills: skills.sh | ClawdHub | Specification: agentskills.io

    Reference:

    Agent Sources (Private Repos)

    Sync agents from private Git repositories into the framework. Private repos keep their own agents, helper scripts, and slash commands — aidevops sources sync deploys them alongside the core agents.

    aidevops sources add ~/Git/my-private-agents     # Register a local repo
    aidevops sources add-remote git@github.com:u/r.git  # Clone and register a remote repo
    aidevops sources list                             # List configured sources
    aidevops sources sync                             # Sync all sources
    aidevops sources remove my-private-agents         # Remove a source

    How it works: Private repos contain a .agents/ directory with agent subdirectories. Agents with mode: primary in their frontmatter are symlinked to the agents root for auto-discovery as primary agent tabs. Markdown files with agent: frontmatter are deployed as /slash commands. All sources sync automatically during aidevops update.
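
A private agent file following these conventions might look like this. The description field and body text are illustrative; only the `mode: primary` and `agent:` frontmatter keys are documented behaviours:

```
---
mode: primary
description: Release coordinator for internal repos   # illustrative field
---

Agent instructions go here, as in any aidevops agent file.
```

Files with `mode: primary` are symlinked into the agents root on sync; markdown files carrying `agent:` frontmatter become /slash commands instead.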

    Reference: .agents/aidevops/agent-sources.md

    Agent Design Patterns

    aidevops implements proven agent design patterns identified by Lance Martin (LangChain).

| Pattern | Description | aidevops Implementation |
| --- | --- | --- |
| Give Agents a Computer | Filesystem + shell for persistent context | `~/.aidevops/.agent-workspace/`, 390+ helper scripts |
| Multi-Layer Action Space | Few tools, push actions to computer | Per-agent MCP filtering (~12-20 tools each) |
| Knowledge Graph Routing | Indexed, cross-referenced agents instead of isolated skills | `subagent-index.toon` maps 900+ agents by domain, purpose, and dependency; agents discover related context through the graph, not just their own file |
| Progressive Disclosure | Load context on-demand | Subagent routing with content summaries, YAML frontmatter, read-on-demand |
| Offload Context | Write results to filesystem | `.agent-workspace/work/[project]/` for persistence |
| Cache Context | Prompt caching for cost | Stable instruction prefixes |
| Isolate Context | Sub-agents with separate windows | Subagent files with specific tool permissions |
| Multi-Agent Orchestration | Coordinate parallel agents | TOON mailbox, agent registry, supervisor dispatch |
| Compaction Resilience | Preserve context across compaction | OpenCode plugin injects dynamic state at compaction time |
| Ralph Loop | Iterative execution until complete | `/full-loop`, `full-loop-helper.sh` |
| Evolve Context | Learn from sessions | `/remember`, `/recall` with SQLite FTS5 + opt-in semantic search |
| Pattern Tracking | Learn what works/fails | `/patterns` command, `memory-helper.sh` |
| Token-Efficient Serialisation | Minimise context overhead for structured data | TOON format: 20-60% token reduction vs JSON/YAML for agent indexes, registries, and data exchange |
| Cost-Aware Routing | Match model to task complexity | `model-routing.md` with 7-tier guidance, `/route` command |
| Model Comparison | Compare models side-by-side | `/compare-models` (live data), `/compare-models-free` (offline) |
| Response Scoring | Evaluate actual model outputs | `/score-responses` with structured criteria |

    Key insight: Context is a finite resource with diminishing returns. aidevops treats every token as precious - loading only what's needed, when it's needed.

    See .agents/aidevops/architecture.md for detailed implementation notes and references.

    Multi-Agent Orchestration

    Run multiple AI agents in parallel on separate branches, coordinated through a lightweight mailbox system. Each agent works independently in its own git worktree while the supervisor manages task distribution and status reporting.

    Architecture:

    Supervisor (pulse loop)
    ├── Agent Registry (TOON format - who's active, what branch, idle/busy)
    ├── Mailbox System (SQLite WAL-mode, indexed queries)
    │   ├── task_assignment → worker inbox
    │   ├── status_report → coordinator outbox
    │   └── broadcast → all agents
    └── Model Routing (tier-based: haiku/sonnet/opus/flash/pro)

    Key components:

| Component | Script | Purpose |
| --- | --- | --- |
| Mailbox | `mail-helper.sh` | SQLite-backed inter-agent messaging (send, check, broadcast, archive) |
| Supervisor | `supervisor-helper.sh` | Autonomous multi-task orchestration with SQLite state machine, batches, retry cycles, cron scheduling, auto-pickup from TODO.md |
| Registry | `mail-helper.sh register` | Agent registration with role, branch, worktree, heartbeat |
| Model routing | `model-routing.md`, `/route` | Cost-aware 7-tier routing guidance (local/haiku/flash/sonnet/pro/opus/grok) |
| Budget tracking | `budget-tracker-helper.sh` | Append-only cost log for model routing decisions |
| Observability | `observability.mjs` plugin | LLM request capture for cost tracking and performance analysis |

    How it works:

    1. Each agent registers on startup (mail-helper.sh register --role worker)
    2. Supervisor runs periodic pulses (supervisor-helper.sh pulse)
    3. Pulse collects status reports, dispatches queued tasks to idle workers
    4. Agents send completion reports back via mailbox
5. SQLite WAL mode + busy_timeout handles concurrent access (79x faster than the previous file-based system)

    Compaction plugin (.agents/plugins/opencode-aidevops/): When OpenCode compacts context (at ~200K tokens), the plugin injects current session state - agent registry, pending mailbox messages, git context, and relevant memories - ensuring continuity across compaction boundaries.

    Custom system prompt (.agents/prompts/build.txt): Based on upstream OpenCode with aidevops-specific overrides for tool preferences, professional objectivity, and per-model reinforcements for weaker models.

    Subagent index (.agents/subagent-index.toon): Compressed TOON routing table listing all agents, subagents, workflows, and scripts with model tier assignments - enables fast agent discovery without loading full markdown files.

    Autonomous Orchestration & Parallel Agents

    Why this matters: Long-running tasks -- batch PR reviews, multi-site audits, large refactors, multi-day feature projects -- are where AI agents deliver the most value. Instead of babysitting one task at a time, the supervisor dispatches work to parallel agents, each in its own git worktree, with automatic retry, progress tracking, and batch completion reporting.

    Pulse Supervisor: Autonomous AI Operations

    The pulse is the heartbeat of aidevops — an autonomous AI supervisor that runs every 2 minutes via launchd. There is no human at the terminal. It manages the entire development pipeline across all repos registered with pulse: true.

    What it does each cycle:

| Phase | Action |
| --- | --- |
| Capacity check | Circuit breaker, dynamic worker slots calculated from available RAM |
| Merge ready PRs | Green CI + no blocking reviews → squash merge (free; no worker slot needed) |
| Fix failing PRs | Dispatch a worker to fix CI failures or address review feedback |
| Detect stuck work | PRs open 6+ hours with no activity → flag, or close and re-file |
| Dispatch workers | Route open issues to available worker slots, respecting priority and `blocked-by:` dependencies |
| Advance missions | Check active multi-day missions, dispatch features, validate milestones, track budget |
| Triage quality | Read daily quality sweep findings (ShellCheck, SonarCloud, Codacy, CodeRabbit), create issues for actionable findings |
| Sync TODOs | Create GitHub issues for unsynced TODO entries, commit ref changes |
| Kill stuck workers | Workers running 3+ hours with no PR are killed to free slots |
| Detect orphaned PRs | Open PRs with no active worker and no activity for 6+ hours are flagged for re-dispatch |

    Operational intelligence:

    • Struggle-ratio — computes messages / max(1, commits) for each active worker. High ratio (>30) with >30 min elapsed and zero commits flags the worker as "struggling". Ratio >50 after 1 hour flags "thrashing". Informational signal — the supervisor LLM decides the action (kill, wait, re-dispatch with more context)
    • Circuit breaker — prevents cascading failures by tracking success/failure rates and tripping when error rate exceeds threshold
    • Dynamic concurrency — worker slot count adapts to available RAM, not a hardcoded constant
    • Stale assignment recovery — tasks assigned to workers that died (no active process, no PR, 3+ hours stale) are automatically unassigned and made available for re-dispatch
    • Priority ordering — green PRs (free merge) > failing PRs (closer to done) > high-priority/bug issues > active mission features > product repos > smaller tasks > oldest
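
The struggle-ratio heuristic reduces to simple arithmetic. A minimal sketch, with illustrative values; the real supervisor reads message counts from the mailbox DB and commit counts from git log:

```shell
# Illustrative struggle-ratio check: messages / max(1, commits),
# with the thresholds described above.
messages=45
commits=0
elapsed_min=40

denom=$commits
[ "$denom" -lt 1 ] && denom=1
ratio=$(( messages / denom ))

state="ok"
if [ "$ratio" -gt 50 ] && [ "$elapsed_min" -ge 60 ]; then
  state="thrashing"
elif [ "$ratio" -gt 30 ] && [ "$elapsed_min" -gt 30 ] && [ "$commits" -eq 0 ]; then
  state="struggling"
fi
echo "$state"   # struggling, for these inputs
```

The result is only a signal; as noted above, the supervisor LLM decides what to do with it.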

    The pulse is an LLM, not a script. It reads issue bodies, assesses context, and uses judgment. When it encounters something unexpected — an issue body that says "completed", a task with no clear description, a label that doesn't match reality — it handles it the way a competent human manager would.

    # Pulse runs automatically via launchd (every 2 minutes)
    # Manual trigger:
    opencode run "/pulse"

    See: .agents/scripts/commands/pulse.md for the full supervisor specification.

    Missions: Multi-Day Autonomous Projects

    Missions are the highest-level orchestration primitive — autonomous multi-day projects that break a high-level goal into milestones, features, and validation criteria. The pulse supervisor advances them automatically.

    # Scope a mission interactively
    /mission "Redesign the landing pages for mobile-first with A/B testing"

    How missions work:

    1. /mission scopes the goal into milestones with features and acceptance criteria
    2. Each feature becomes a TODO entry tagged mission:mNNN with a GitHub issue
    3. The pulse dispatches features as regular workers (respecting MAX_WORKERS)
    4. When all features in a milestone complete, the pulse dispatches a validation worker to verify integration
    5. Passed milestones advance automatically — the next milestone's features are dispatched
    6. Budget tracking pauses the mission if any category exceeds the alert threshold (default 80%)

    Two execution modes:

| Mode | Workflow | Best for |
| --- | --- | --- |
| Full | Worktree + PR per feature, standard review flow | Production code, collaborative projects |
| POC | Direct commits, skip ceremony | Prototypes, experiments, proof-of-concept |

    Mission state is tracked in a JSON file committed to the repo. Each pulse cycle reads the state, acts on it, and commits updates — so any session (or the next pulse) can pick up where the last one left off.
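
The committed state file might look roughly like this. The shape is illustrative; the field names are assumptions, not the actual schema:

```json
{
  "mission": "m001",
  "goal": "Redesign the landing pages for mobile-first with A/B testing",
  "mode": "full",
  "milestones": [
    { "id": 1, "status": "passed" },
    { "id": 2, "status": "in_progress", "features": ["t201", "t202"] }
  ],
  "budget": { "spent_usd": 12.40, "alert_threshold": 0.8 }
}
```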

    See: .agents/workflows/mission-orchestrator.md for the full orchestrator specification, .agents/scripts/commands/dashboard.md for the mission progress dashboard.

    Multi-Model Verification: Cross-Provider Safety

    High-stakes operations are verified by a second AI model from a different provider before execution. This catches single-model hallucinations before destructive operations cause irreversible damage.

    When verification triggers:

| Risk Level | Examples | Action |
| --- | --- | --- |
| Critical | `git push --force` to main, `DROP DATABASE`, production deploy | Blocked unless second model agrees |
| High | Force push to feature branch, data migration, secret exposure | Warned, verification recommended |
| Medium | Bulk file deletion, config changes | Logged |
| Low | Normal edits, test runs | No verification |

    How it works:

    1. pre-edit-check.sh screens operations against the high-stakes taxonomy
    2. For critical/high operations, verify-operation-helper.sh sends the operation context to a second model (different provider than the primary)
    3. The verifier independently assesses whether the operation is safe
    4. On disagreement, the operation is blocked (critical) or warned (high)
    5. All verification decisions are logged for audit
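
The block/warn decision in steps 4-5 can be sketched as follows. The "agree"/"disagree" verdict strings are hypothetical, not the helper's real output format:

```shell
# Illustrative decision logic: block critical disagreements, warn on high,
# otherwise proceed.
risk="critical"
verdict="disagree"

case "$risk:$verdict" in
  critical:disagree) action="block" ;;
  high:disagree)     action="warn" ;;
  *)                 action="proceed" ;;
esac
echo "$action"   # block, for these inputs
```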

    Why cross-provider? Same-provider models share training data and failure modes. A Claude hallucination is unlikely to be reproduced by Gemini or GPT, and vice versa. The verification uses the cheapest model tier (haiku-equivalent) — cost is minimal per check.

    Configuration: Per-repo via .agents/reference/high-stakes-operations.md. Opt-out with VERIFY_ENABLED=false (not recommended).

    See: .agents/tools/verification/parallel-verify.md for the verification agent specification.

    Project Bundles: Auto-Configuration

    Bundles are project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. Instead of manually configuring each project, bundles detect what kind of project you're working on and apply sensible defaults.

    Built-in bundles:

| Bundle | Auto-detected by | Model default | Quality gates | Agent routing |
| --- | --- | --- | --- | --- |
| web-app | `package.json` + framework markers | sonnet | Full (lint, test, build, a11y) | Build+ default |
| library | `package.json` with `main`/`exports` | sonnet | Full + API docs check | Build+ default |
| cli-tool | `bin` field in `package.json` | sonnet | ShellCheck, test | Build+ default |
| content-site | CMS markers, `wp-config.php` | haiku | Lighthouse, SEO | Marketing for content tasks |
| infrastructure | `Dockerfile`, `terraform/`, `ansible/` | sonnet | ShellCheck, security scan | Build+ default |
| agent | `AGENTS.md`, `.agents/` | opus | Agent review, prompt quality | Build+ default |

    Resolution priority: Explicit bundle field in repos.json > .aidevops.json project config > auto-detection from marker files.
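
That priority order amounts to a first-non-empty pick. A minimal sketch (the function name is illustrative, not part of `bundle-helper.sh`):

```shell
# Hypothetical sketch: return the first non-empty candidate, in priority order
# (explicit repos.json bundle > .aidevops.json project config > auto-detection).
resolve_bundle() {
  for candidate in "$1" "$2" "$3"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"
      return 0
    fi
  done
  return 1
}

resolve_bundle "" "content-site" "web-app"   # picks the project-config value
```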

    CLI:

    bundle-helper.sh detect <repo-path>    # Auto-detect bundle type
    bundle-helper.sh resolve <repo-path>   # Show resolved config (with overrides)
    bundle-helper.sh show <bundle-name>    # Show bundle defaults
    bundle-helper.sh list                  # List all available bundles

    See: .agents/bundles/ for bundle definitions, .agents/scripts/bundle-helper.sh for the CLI.

    Parallel Agents & Headless Dispatch

    Run multiple AI sessions concurrently with isolated contexts. Named runners provide persistent agent identities with their own instructions and memory.

| Feature | Description |
| --- | --- |
| Headless dispatch | `opencode run` for one-shot tasks, `opencode serve` + `--attach` for warm server |
| Runners | Named agent instances with per-runner AGENTS.md, config, and run logs (`runner-helper.sh`) |
| Session management | Resume sessions with `-s <id>` or `-c`, fork with SDK |
| Memory namespaces | Per-runner memory isolation with shared access when needed |
| SDK orchestration | `@opencode-ai/sdk` for TypeScript parallel dispatch via `Promise.all` |
| Matrix integration | Chat-triggered dispatch via self-hosted Matrix (optional) |

    # Create a named runner
    runner-helper.sh create code-reviewer --description "Reviews code for security and quality"
    
    # Dispatch a task (one-shot)
    runner-helper.sh run code-reviewer "Review src/auth/ for vulnerabilities"
    
    # Dispatch against warm server (faster, no MCP cold boot)
    opencode serve --port 4096 &
    runner-helper.sh run code-reviewer "Review src/auth/" --attach http://localhost:4096
    
    # Parallel dispatch via CLI
    opencode run --attach http://localhost:4096 --title "Review" "Review src/auth/" &
    opencode run --attach http://localhost:4096 --title "Tests" "Generate tests for src/utils/" &
    wait
    
    # List runners and status
    runner-helper.sh list
    runner-helper.sh status code-reviewer

    Architecture:

    OpenCode Server (opencode serve)
    ├── Session 1 (runner/code-reviewer)
    ├── Session 2 (runner/seo-analyst)
    └── Session 3 (scheduled-task)
             ↑
        HTTP API / SSE Events
             ↑
    ┌────────┴────────┐
    │  Dispatch Layer │ ← runner-helper.sh, cron, Matrix bot, SDK
    └─────────────────┘

    Example runner templates: code-reviewer, seo-analyst - copy and customize for your own runners.

    Matrix bot dispatch (optional): Bridge Matrix chat rooms to runners for chat-triggered AI. Each room maintains persistent conversation context via SQLite -- on idle timeout, the session is compacted (summarised) and stored, so the next message resumes with full context.

    # Setup Matrix bot (interactive wizard)
    matrix-dispatch-helper.sh setup
    
    # Map rooms to runners (each room = separate session)
    matrix-dispatch-helper.sh map '!dev-room:server' code-reviewer
    matrix-dispatch-helper.sh map '!seo-room:server' seo-analyst
    
    # Start bot (daemon mode)
    matrix-dispatch-helper.sh start --daemon
    
    # In Matrix room: "!ai Review src/auth.ts for security issues"
    
    # Manage sessions
    matrix-dispatch-helper.sh sessions list
    matrix-dispatch-helper.sh sessions stats

    See: headless-dispatch.md for full documentation including parallel vs sequential decision guide, SDK examples, CI/CD integration, and custom agent configuration. matrix-bot.md for Matrix bot setup including Cloudron Synapse guide and session persistence.

    Self-Improving Agent System

    Agents that learn from experience and contribute improvements:

| Phase | Description |
| --- | --- |
| Review | Analyze memory for success/failure patterns (`memory-helper.sh`) |
| Refine | Generate and apply improvements to agents |
| Test | Validate in isolated OpenCode sessions |
| PR | Contribute to community with privacy filtering |

    Safety guardrails:

    • Worktree isolation for all changes
    • Human approval required for PRs
    • Mandatory privacy filter (secretlint + pattern redaction)
    • Dry-run default, explicit opt-in for PR creation
    • Audit log to memory

    Agent Testing Framework

    Test agent behavior through isolated AI sessions with automated validation:

    # Create a test suite
    agent-test-helper.sh create my-tests
    
    # Run tests (auto-detects claude or opencode CLI)
    agent-test-helper.sh run my-tests
    
    # Quick single-prompt test
    agent-test-helper.sh run-one "What tools do you have?" --expect "bash"
    
    # Before/after comparison for agent changes
    agent-test-helper.sh baseline my-tests   # Save current behavior
    # ... modify agents ...
    agent-test-helper.sh compare my-tests    # Detect regressions

    Test suites are JSON files with prompts and validation rules (expect_contains, expect_not_contains, expect_regex, min_length, max_length). Results are saved for historical tracking.
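
A minimal suite using those rules might look like this. The top-level field names are assumptions for illustration; only the rule names (`expect_contains`, `expect_not_contains`, `expect_regex`, `min_length`, `max_length`) come from the documentation above:

```json
{
  "name": "my-tests",
  "tests": [
    {
      "prompt": "What tools do you have?",
      "expect_contains": ["bash"],
      "expect_not_contains": ["ERROR"],
      "min_length": 20
    }
  ]
}
```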

    See: agent-testing.md subagent for full documentation and example test suites.

    Voice Bridge - Talk to Your AI Agent

    Speak naturally to your AI coding agent and hear it respond. The voice bridge connects your microphone to OpenCode via a fast local pipeline -- ask questions, give instructions, execute tasks, all by voice.

    Mic → Silero VAD → Whisper MLX (1.4s) → OpenCode (4-6s) → Edge TTS (0.4s) → Speaker

    Round-trip: ~6-8 seconds on Apple Silicon. The agent can edit files, run commands, create PRs, and confirm what it did -- all via voice.

    Quick start:

    # Start a voice conversation (installs deps automatically)
    voice-helper.sh talk
    
    # Choose engines and voice
    voice-helper.sh talk whisper-mlx edge-tts en-GB-SoniaNeural
    voice-helper.sh talk whisper-mlx macos-say    # Offline mode
    
    # Utilities
    voice-helper.sh devices      # List audio input/output devices
    voice-helper.sh voices       # List available TTS voices
    voice-helper.sh benchmark    # Test STT/TTS/LLM speeds
    voice-helper.sh status       # Check component availability

    Features:

| Feature | Details |
| --- | --- |
| Swappable STT | whisper-mlx (fastest on Apple Silicon), faster-whisper (CPU) |
| Swappable TTS | edge-tts (best quality), macos-say (offline), facebookMMS (local) |
| Voice exit | Say "that's all", "goodbye", or "all for now" to end naturally |
| STT correction | LLM sanity-checks transcription errors before acting (e.g. "test.txte" → "test.txt") |
| Task execution | Full tool access: edit files, git operations, run commands |
| Session handback | Conversation transcript output on exit for calling agent context |
| TUI compatible | Graceful degradation when launched from an AI tool's Bash (no tty) |

    How it works: The bridge uses opencode run --attach to connect to a running OpenCode server for low-latency responses (~4-6s vs ~30s cold start). It automatically starts opencode serve if not already running.

    Requirements: Apple Silicon Mac (for whisper-mlx), Python 3.10+, internet (for edge-tts). The voice helper installs Python dependencies automatically into the S2S venv.

    Speech-to-Speech Pipeline (Advanced)

    For advanced use cases (custom LLMs, server/client deployment, multi-language, phone integration), the full huggingface/speech-to-speech pipeline is also available:

    speech-to-speech-helper.sh setup              # Install pipeline
    speech-to-speech-helper.sh start --local-mac  # Run on Apple Silicon
    speech-to-speech-helper.sh start --cuda       # Run on NVIDIA GPU
    speech-to-speech-helper.sh start --server     # Server mode (remote clients)

    Supported languages: English, French, Spanish, Chinese, Japanese, Korean (auto-detect or fixed).

    Additional voice methods:

| Method | Description |
| --- | --- |
| VoiceInk + Shortcut | macOS: transcription → OpenCode API → response |
| iPhone Shortcut | iOS: dictate → HTTP → speak response |
| Pipecat STS | Full voice pipeline: Soniox STT → AI → Cartesia TTS |

    See: speech-to-speech.md for full component options, CLI parameters, and integration patterns (Twilio phone, video narration, voice-driven DevOps).

    Scheduled Agent Tasks

    Cron-based agent dispatch for automated workflows:

    # Example: Daily SEO report at 9am
    0 9 * * * ~/.aidevops/agents/scripts/runner-helper.sh run "seo-analyst" "Generate daily SEO report"

    See: TODO.md tasks t109-t118 for implementation status.

    Requirements

    aidevops itself is lightweight (shell scripts + markdown), but AI model workloads benefit from capable hardware:

| Tier | Machine | CPU | RAM | GPU | Best For |
| --- | --- | --- | --- | --- | --- |
| Minimum | Any modern laptop | 4+ cores | 8GB | None | Framework only, cloud AI APIs |
| Recommended | Mac Studio / desktop | Apple M1+ or 8+ cores | 16GB+ | MPS (Apple) or NVIDIA 8GB+ | Local voice, browser automation, dev servers |
| Power User | Workstation | 8+ cores | 32GB+ | NVIDIA 24GB+ VRAM | Full voice pipeline, local LLMs, parallel agents |
| Server | Cloud GPU | Any | 16GB+ | A100 / H100 | Production voice, multi-user, batch processing |

    Cloud GPU providers for on-demand GPU access: NVIDIA Cloud, Vast.ai, RunPod, Lambda. See .agents/tools/infrastructure/cloud-gpu.md for the full deployment guide (SSH setup, Docker, model caching, cost optimization).

    Note: Most aidevops features (infrastructure management, SEO, code quality, Git workflows) require no GPU. GPU is only needed for local AI model inference (voice pipeline, vision models, local LLMs).

    Software Dependencies

    # Install dependencies (auto-detected by setup.sh)
    brew install sshpass jq curl mkcert dnsmasq fd ripgrep  # macOS
    sudo apt-get install sshpass jq curl dnsmasq fd-find ripgrep  # Ubuntu/Debian
    
    # Generate SSH key
    ssh-keygen -t ed25519 -C "your-email@domain.com"

    File Discovery Tools

    AI agents use fast file discovery tools for efficient codebase navigation:

| Tool | Purpose | Speed |
| --- | --- | --- |
| `fd` | Fast file finder (replaces `find`) | ~10x faster |
| `ripgrep` | Fast content search (replaces `grep`) | ~10x faster |

    Both tools respect .gitignore by default and are written in Rust for maximum performance.

    Preference order for file discovery:

1. `git ls-files '*.md'` - Instant, git-tracked files only
2. `fd -e md` - Fast, respects `.gitignore`
3. `rg --files -g '*.md'` - Fast, respects `.gitignore`
4. Built-in glob tools - Fallback when bash is unavailable
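
The preference order amounts to a simple fallback chain, sketched here as a shell function (illustrative, not a framework script):

```shell
# Try each discovery tool in preference order, falling back to POSIX find.
find_md() {
  if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    git ls-files '*.md'
  elif command -v fd >/dev/null 2>&1; then
    fd -e md
  elif command -v rg >/dev/null 2>&1; then
    rg --files -g '*.md'
  else
    find . -name '*.md'
  fi
}

find_md
```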

    The setup script offers to install these tools automatically.

    Comprehensive Service Coverage

    Infrastructure & Hosting

    • Hostinger: Shared hosting, domains, email
    • Hetzner Cloud: VPS servers, networking, load balancers
    • Closte: Managed hosting, application deployment
    • Coolify Enhanced with CLI: Self-hosted PaaS with CLI integration
    • Cloudron Enhanced with packaging guide: Server and app management platform with custom app packaging support
    • Vercel Enhanced with CLI: Modern web deployment platform with CLI integration
    • AWS: Cloud infrastructure support via standard protocols
    • DigitalOcean: Cloud infrastructure support via standard protocols

    Domain & DNS

    Development & Git Platforms with CLI Integration

    • GitHub Enhanced with CLI: Repository management, actions, API, GitHub CLI (gh) integration
    • GitLab Enhanced with CLI: Self-hosted and cloud Git platform with GitLab CLI (glab) integration
    • Gitea Enhanced with CLI: Lightweight Git service with Gitea CLI (tea) integration
    • Agno: Local AI agent operating system for DevOps automation
    • Pandoc: Document conversion to markdown for AI processing

    AI Orchestration Frameworks

    • Langflow: Visual drag-and-drop builder for AI workflows (MIT, localhost:7860)
    • CrewAI: Multi-agent teams with role-based orchestration (MIT, localhost:8501)
    • AutoGen: Microsoft's agentic AI framework with MCP support (MIT, localhost:8081)

    Video Creation

    • Remotion: Programmatic video creation with React - animations, compositions, media handling, captions
    • Video Prompt Design: AI video prompt engineering using the 7-component meta prompt framework for Veo 3 and similar models
    • MuAPI: Multimodal AI API for image/video/audio/VFX generation, workflows, agents, music (Suno), and lip-sync - unified creative orchestration platform
    • yt-dlp: YouTube video/audio/playlist/channel downloads, transcript extraction, and local file audio conversion via ffmpeg

    WordPress Development

    • LocalWP: WordPress development environment with MCP database access
    • MainWP: WordPress site management dashboard

    Git CLI Enhancement Features:

    • .agents/scripts/github-cli-helper.sh: Advanced GitHub repository, issue, PR, and branch management
    • .agents/scripts/gitlab-cli-helper.sh: Complete GitLab project, issue, MR, and branch management
    • .agents/scripts/gitea-cli-helper.sh: Full Gitea repository, issue, PR, and branch management

    Security & Code Quality

    • gopass: GPG-encrypted secret management with AI-native wrapper (aidevops secret) - subprocess injection + output redaction keeps secrets out of AI context
    • Vaultwarden: Password and secrets management
    • SonarCloud: Security and quality analysis (A-grade ratings)
    • CodeFactor: Code quality metrics (A+ score)
    • Codacy: Multi-tool analysis (0 findings)
    • CodeRabbit: AI-powered code reviews
    • Snyk: Security vulnerability scanning
    • Socket: Dependency security and supply chain protection
    • Sentry: Error monitoring and performance tracking
    • Cisco Skill Scanner: Security scanner for AI agent skills (prompt injection, exfiltration, malicious code)
    • VirusTotal: Advisory threat intelligence via VT API v3 -- file hash scanning (70+ AV engines), domain/URL reputation checks for imported skills
    • Secretlint: Detect exposed secrets in code
    • OSV Scanner: Google's vulnerability database scanner
    • Qlty: Universal code quality platform (70+ linters, auto-fixes)
    • Gemini Code Assist: Google's AI-powered code completion and review

    AI Prompt Optimization

    • Augment Context Engine: Semantic codebase retrieval with deep code understanding
    • Repomix: Pack codebases into AI-friendly context (80% token reduction with compress mode)
    • DSPy: Framework for programming with language models
    • DSPyGround: Interactive playground for prompt optimization
    • TOON Format: Token-Oriented Object Notation - 20-60% token reduction for LLM prompts

    Document Processing & OCR

    • Document Creation Agent (document-creation-helper.sh): Unified document format conversion, template-based creation, and OCR for scanned PDFs/images. Routes to the best available tool (pandoc, odfpy, LibreOffice, Tesseract, EasyOCR, GLM-OCR) based on format pair and availability. Supports 13+ formats (ODT, DOCX, PDF, MD, HTML, EPUB, PPTX, ODP, XLSX, ODS, RTF, CSV, TSV).
    • LibPDF: PDF form filling, digital signatures (PAdES B-B/T/LT/LTA), encryption, merge/split, text extraction
    • MinerU: Layout-aware PDF-to-markdown/JSON conversion with OCR (109 languages), formula-to-LaTeX, and table extraction (53k+ stars, AGPL-3.0)
    • Unstract: LLM-powered structured data extraction from unstructured documents (PDF, images, DOCX)
    • GLM-OCR: Local OCR via Ollama - purpose-built for document text extraction (tables, forms, complex layouts) with zero cloud dependency

    PDF/OCR Tool Selection:

| Need | Tool | Why |
| --- | --- | --- |
| Format conversion | Document Creation Agent | Auto-selects best tool, 13+ formats |
| Complex PDF to markdown | MinerU | Layout-aware, formulas, tables, 109-language OCR |
| Quick text extraction | GLM-OCR | Local, fast, no API keys, privacy-first |
| Structured JSON output | Unstract | Schema-based extraction, complex documents |
| Screen/window OCR | Peekaboo + GLM-OCR | `peekaboo image --analyze --model ollama/glm-ocr` |
| PDF text extraction | LibPDF | Native PDF parsing, no AI needed |
| Simple format conversion | Pandoc | Lightweight, broad format support |
| Scanned PDF OCR | Document Creation Agent | Auto-detects, routes to Tesseract/EasyOCR/GLM-OCR |

    Quick start:

    # Document creation agent
    document-creation-helper.sh status                          # Check available tools
    document-creation-helper.sh install --standard              # Install core tools
    document-creation-helper.sh convert report.pdf --to odt     # Convert formats
    document-creation-helper.sh convert scan.pdf --to md --ocr  # OCR scanned PDF
    document-creation-helper.sh template draft --type letter     # Generate template
    
    # GLM-OCR direct
    ollama pull glm-ocr
    ollama run glm-ocr "Extract all text" --images /path/to/document.png

    See .agents/tools/ocr/glm-ocr.md for batch processing, PDF workflows, and Peekaboo integration.

    Communications

    • Twilio: SMS, voice calls, WhatsApp, phone verification (Verify API), call recording & transcription
    • Telfon: Twilio-powered cloud phone system with iOS/Android/Chrome apps for end-user calling interface
    • Matrix: Self-hosted chat with bot integration for AI runner dispatch (matrix-dispatch-helper.sh)
    • SimpleX Chat: Privacy-first messaging with AI bot gateway for agent dispatch (simplex-bot/)
    • Matterbridge: Multi-platform chat bridge connecting 20+ platforms (Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP) with SimpleX adapter (matterbridge-helper.sh)

    Animation & Video

    • Anime.js: Lightweight JavaScript animation library for CSS, SVG, DOM attributes, and JS objects
    • Remotion: Programmatic video creation with React - create videos using code with 29 specialized rule files
    • Video Prompt Design: Structured prompt engineering for AI video generation (Veo 3, 7-component framework, character consistency, audio design)

    Voice AI

    • Voice Bridge: Talk to your AI coding agent via speech -- Silero VAD → Whisper MLX → OpenCode → Edge TTS (~6-8s round-trip)
    • Speech-to-Speech: Open-source modular voice pipeline (VAD → STT → LLM → TTS) with local GPU and cloud GPU deployment
    • Pipecat: Real-time voice agent framework with Soniox STT, Cartesia TTS, and multi-LLM support

    Performance & Monitoring

    • PageSpeed Insights: Website performance auditing
    • Lighthouse: Comprehensive web app analysis
    • WebPageTest: Real-world performance testing from 40+ global locations with filmstrip, waterfall, and Core Web Vitals
    • Updown.io: Website uptime and SSL monitoring

    AI & Documentation

    • Context7: Real-time documentation access for libraries and frameworks
    • Context7 CLI mode: npx ctx7 setup --opencode --cli for docs lookup without MCP transport (useful fallback in shell-first workflows)
    • Local Models: Run AI models locally via llama.cpp for free, private, offline inference with HuggingFace GGUF model management (local-model-helper.sh)

    Local Development

    • Localdev: Local development environment manager with dnsmasq, Traefik, and mkcert for production-like .local domains with HTTPS on port 443 (localdev-helper.sh)

    MCP Integrations

    Model Context Protocol servers for real-time AI assistant integration. The framework configures these MCPs for OpenCode (TUI, Desktop, and Extension for Zed/VSCode).

    All Supported MCPs (20 available)

    MCP packages are installed globally via bun install -g for instant startup (no npx registry lookups). Run setup.sh or aidevops update-tools to update to latest versions.

    MCP Purpose Tier API Key Required
    Augment Context Engine Semantic codebase retrieval Global Yes (Augment account)
    Claude Code MCP Claude as sub-agent Global No
    Amazon Order History Order data extraction Per-agent No
    Chrome DevTools Browser debugging & automation Per-agent No
    Context7 Library documentation lookup Per-agent No
    Docker MCP Container management Per-agent No
    Google Analytics Analytics data Per-agent Yes (Google API)
    Google Search Console Search performance data Per-agent Yes (Google API)
    Grep by Vercel GitHub code search Per-agent No
    LocalWP WordPress database access Per-agent No (local)
    macOS Automator macOS automation Per-agent No
    Playwriter Browser with extensions Per-agent No
    QuickFile Accounting API Per-agent Yes
    Repomix Codebase packing for AI context Per-agent No
    Sentry Error tracking Per-agent Yes
    shadcn UI component library Per-agent No
    Socket Dependency security Per-agent No
    Unstract Document data extraction Per-agent Yes
    OpenAPI Search Search and explore any OpenAPI spec Per-agent No
    Cloudflare Code Mode Full Cloudflare API (2,500+ endpoints via 2 tools) Per-agent Yes (Cloudflare)

    Tier explanation:

    • Global - Tools always available (loaded into every session)
    • Per-agent - Tools disabled globally, enabled per-agent via config (zero context overhead when unused)

    Performance optimization: because MCP packages are installed globally, each server starts in ~0.1s instead of the 2-3s an npx registry lookup takes. Loading is also tiered: global MCPs load eagerly at startup, while per-agent MCPs load on demand when their subagent is invoked. Together these significantly reduce OpenCode startup time.
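
    A per-agent entry can be sketched in opencode.json (a minimal illustration; the mcp schema matches the examples elsewhere in this README, but the context7-mcp command name is an assumption, not the package's documented binary):

    ```json
    {
      "mcp": {
        "context7": {
          "type": "local",
          "command": ["context7-mcp"],
          "enabled": false
        }
      }
    }
    ```

    With enabled: false the server contributes nothing to ordinary sessions; a subagent that lists context7 in its config re-enables it for its own invocations, so the context cost is only paid when that agent runs.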

    SEO Integrations (curl subagents - no MCP overhead)

    These use direct API calls via curl, avoiding MCP server startup entirely:

    Integration Purpose API Key Required
    Ahrefs SEO analysis & backlinks Yes
    DataForSEO SERP, keywords, backlinks, on-page Yes
    Serper Google Search API (web, images, news) Yes
    Semrush Domain analytics, keywords, backlinks, competitor research Yes
    ContentKing Real-time SEO monitoring, change tracking, issues Yes
    WebPageTest Real-world performance testing from 40+ global locations Yes
    Hostinger Hosting management Yes
    NeuronWriter Content optimization & NLP analysis Yes
    Outscraper Google Maps & business data extraction Yes
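
    The pattern behind these curl subagents is a plain authenticated request. A sketch for Serper (assuming the standard google.serper.dev endpoint and a SERPER_API_KEY exported from your key store; the serper_search function name is illustrative):

    ```shell
    # serper_search QUERY -- call Serper's Google Search API directly,
    # with no MCP server process to start.
    serper_search() {
      if [ -z "${SERPER_API_KEY:-}" ]; then
        echo "SERPER_API_KEY not set, skipping live request"
        return 0
      fi
      curl -s -X POST "https://google.serper.dev/search" \
        -H "X-API-KEY: ${SERPER_API_KEY}" \
        -H "Content-Type: application/json" \
        -d "{\"q\": \"$1\", \"num\": 10}"
    }

    serper_search "site:example.com"
    ```

    The same shape (swap the endpoint, auth header, and payload) covers the other providers in the table.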

    By Category

    Context & Codebase:

    • Augment Context Engine - Semantic codebase retrieval with deep code understanding
    • llm-tldr - Semantic code analysis with 95% token savings (details below)
    • Context7 - Real-time documentation access for thousands of libraries
    • Repomix - Pack codebases into AI-friendly context
    • OpenAPI Search - Search and explore any OpenAPI specification (zero install, Cloudflare Worker)
    • MCPorter - Discover, call, compose, and generate CLIs/typed clients for MCP servers

    Browser Automation (8 tools + anti-detect stack, benchmarked):

    • Playwright - Fastest engine (0.9s form fill), parallel contexts, extensions, proxy (auto-installed)
    • playwright-cli - Microsoft official CLI for AI agents, --session isolation, built-in tracing
    • dev-browser - Persistent profile, stays logged in, ARIA snapshots, pairs with DevTools
    • agent-browser - CLI/CI/CD, --session parallel, ref-based element targeting, iOS Simulator support (macOS)
    • Crawl4AI - Bulk extraction, arun_many parallel (1.7x), LLM-ready markdown
    • WaterCrawl - Self-hosted crawling with web search, sitemap generation, JS rendering, proxy support
    • Playwriter - Your browser's extensions/passwords/proxy, already unlocked
    • Stagehand - Natural language automation, self-healing selectors
    • Chrome DevTools MCP - Companion: Lighthouse, network throttling, CSS coverage (pairs with any tool)
    • Cloudflare Browser Rendering - Server-side web scraping
    • Peekaboo - macOS screen capture and GUI automation (pixel-accurate captures, AI vision analysis)
    • Sweet Cookie - Browser cookie extraction for API calls without launching a browser
    • Anti-Detect Stack (details):
      • Camoufox (4.9k stars) - Firefox anti-detect, C++ fingerprint injection, WebRTC/Canvas/WebGL spoofing
      • rebrowser-patches (1.2k stars) - Chromium CDP leak prevention, automation signal removal
      • Multi-profile management - Persistent/clean/warm/disposable profiles (like AdsPower/GoLogin)
      • Proxy integration - Residential, SOCKS5, VPN per profile with geo-targeting

    SEO & Research:

    • Google Search Console - Search performance insights (MCP)
    • Grep by Vercel - Search code snippets across GitHub repositories (MCP)
    • Ahrefs - SEO analysis, backlink research, keyword data (curl subagent)
    • DataForSEO - Comprehensive SEO data APIs (curl subagent)
    • Serper - Google Search API (curl subagent)
    • SEO Audit - Comprehensive technical SEO auditing: crawlability, indexation, Core Web Vitals, on-page optimization, E-E-A-T signals (imported skill from marketingskills)
    • Keyword Research - Strategic keyword research with SERP weakness detection (via DataForSEO + Serper + Ahrefs)
    • Site Crawler - Screaming Frog-like SEO auditing: broken links, redirects, meta issues, structured data
    • Domain Research - DNS intelligence via THC (4.51B records) and Reconeer APIs: rDNS, subdomains, CNAMEs
    • NeuronWriter - Content optimization with NLP analysis, competitor research, and content scoring (curl subagent)

    Data Extraction:

    • Outscraper - Google Maps, business data, reviews extraction (curl subagent)
    • curl-copy - Authenticated scraping via DevTools "Copy as cURL" (no browser automation needed)

    Performance & Security:

    • PageSpeed Insights - Website performance auditing
    • Snyk - Security vulnerability scanning
    • Cloudflare Code Mode - Full Cloudflare API (2,500+ endpoints) via 2 tools in ~1,000 tokens (DNS, WAF, DDoS, R2, Workers, Zero Trust)
    • IP Reputation - Multi-provider IP reputation checking (Spamhaus, ProxyCheck, AbuseIPDB) for VPS/proxy vetting (ip-reputation-helper.sh)

    WordPress & Development:

    CRM & Marketing:

    • FluentCRM - WordPress CRM: contacts, tags, lists, campaigns, automations, smart links, webhooks

    Accounts & Finance:

    Document Processing & OCR:

    • LibPDF - PDF form filling, digital signatures, encryption, merge/split (via helper script)
    • Unstract - LLM-powered structured data extraction from PDFs, images, DOCX (MCP)
    • GLM-OCR - Local OCR via Ollama for document text extraction (subagent)

    Quick Setup

    # Install all MCP integrations
    bash .agents/scripts/setup-mcp-integrations.sh all
    
    # Install specific integration
    bash .agents/scripts/setup-mcp-integrations.sh stagehand          # JavaScript version
    bash .agents/scripts/setup-mcp-integrations.sh stagehand-python   # Python version
    bash .agents/scripts/setup-mcp-integrations.sh stagehand-both     # Both versions
    bash .agents/scripts/setup-mcp-integrations.sh chrome-devtools

    OpenCode LSP Configuration

    OpenCode includes built-in LSP servers for 35+ languages. For aidevops projects that use Markdown and TOON extensively, add these optional LSP servers to your opencode.json for real-time diagnostics during editing:

    {
      "lsp": {
        "markdownlint": {
          "command": ["markdownlint-language-server", "--stdio"],
          "extensions": [".md"]
        },
        "toon-lsp": {
          "command": ["toon-lsp"],
          "extensions": [".toon"]
        }
      }
    }

    Install the servers:

    npm install -g markdownlint-language-server  # Markdown diagnostics
    cargo install toon-lsp                        # TOON syntax validation

    These catch formatting and syntax issues during editing, reducing preflight/postflight fix cycles.

    Browser Automation

    8 browser tools + anti-detect stack + device emulation, benchmarked and integrated for AI-assisted web automation, dev testing, mobile/responsive testing, data extraction, and bot detection evasion. Agents automatically select the optimal tool based on task requirements.

    Performance Benchmarks

    Tested on macOS ARM64, all headless, warm daemon:

    Test Playwright playwright-cli dev-browser agent-browser Crawl4AI Playwriter Stagehand
    Navigate + Screenshot 1.43s ~1.9s 1.39s 1.90s 2.78s 2.95s 7.72s
    Form Fill (4 fields) 0.90s ~1.4s 1.34s 1.37s N/A 2.24s 2.58s
    Data Extraction (5 items) 1.33s ~1.5s 1.08s 1.53s 2.53s 2.68s 3.48s
    Multi-step (click + nav) 1.49s ~2.0s 1.49s 3.06s N/A 4.37s 4.48s
    Parallel (3 sessions) 1.6s ~2.0s N/A 2.0s 3.0s N/A Slow

    Feature Matrix

    Feature Playwright playwright-cli dev-browser agent-browser Crawl4AI Playwriter Stagehand
    Headless Yes Yes (default) Yes Yes (default) Yes No (your browser) Yes
    Proxy/VPN Full No Via args No Full Your browser Via args
    Extensions Yes (persistent) No Yes (profile) No No Yes (yours) Possible
    Password managers Partial (needs unlock) No Partial No No Yes (unlocked) No
    Device emulation Full (100+ devices) No No No No No Via Playwright
    Parallel sessions 5 ctx/2.1s --session Shared 3 sess/2.0s arun_many 1.7x Shared Per-instance
    Session persistence storageState Profile dir Profile dir state save/load user_data_dir Your browser Per-instance
    Tracing Full API Built-in CLI Via Playwright Via Playwright No Via CDP Via Playwright
    Natural language No No No No LLM extraction No Yes
    Self-healing No No No No No No Yes
    iOS Simulator No No No Yes (macOS) No No No
    Maintainer Microsoft Microsoft Community Vercel Community Community Browserbase

    Tool Selection

    Need Tool Why
    Fastest automation Playwright 0.9s form fill, parallel contexts
    AI agent (CLI) playwright-cli Microsoft official, --session isolation, built-in tracing
    Stay logged in dev-browser Profile persists across restarts
    Your extensions/passwords Playwriter Already unlocked in your browser
    Bulk extraction Crawl4AI Purpose-built, parallel, LLM-ready output
    Self-hosted crawling WaterCrawl Docker deployment, web search, sitemap generation
    CLI/CI/CD playwright-cli or agent-browser No server needed, --session isolation
    iOS mobile testing agent-browser Real Safari in iOS Simulator (macOS only)
    Unknown pages Stagehand Natural language, self-healing
    Performance debugging Chrome DevTools MCP Companion tool, pairs with any browser
    Mobile/tablet emulation Playwright 100+ device presets, viewport, touch, geolocation, locale
    Authenticated one-off scrape curl-copy DevTools "Copy as cURL" → paste to terminal/AI
    Bot detection evasion Anti-detect stack Camoufox (full) or rebrowser-patches (quick)
    Multi-account Browser profiles Persistent fingerprint + proxy per account

    AI Page Understanding

    Agents use lightweight methods instead of expensive vision API calls:

    Method Speed Token Cost Use For
    ARIA snapshot ~0.01s 50-200 tokens Forms, navigation, interactive elements
    Text extraction ~0.002s Text length Reading content
    Element scan ~0.002s ~20/element Form filling, clicking
    Screenshot ~0.05s ~1K tokens (vision) Visual debugging only

    See .agents/tools/browser/browser-automation.md for the full decision tree and browser-benchmark.md for reproducible benchmark scripts.

    Device Emulation

    Test responsive layouts and mobile-specific behavior using Playwright's built-in device emulation. Supports 100+ device presets with viewport, user agent, touch events, device scale factor, geolocation, locale/timezone, permissions, color scheme, offline mode, and network throttling.

    Common device presets:

    Device Viewport Scale Touch
    iPhone 15 393x852 3 Yes
    iPad Pro 11 834x1194 2 Yes
    Pixel 7 412x915 2.625 Yes
    Galaxy S9+ 320x658 4.5 Yes
    Desktop Chrome 1280x720 1 No

    Emulation capabilities:

    Feature Example
    Device presets devices['iPhone 13'] - viewport, UA, touch, scale
    Viewport/HiDPI viewport: { width: 2560, height: 1440 }, deviceScaleFactor: 2
    Geolocation geolocation: { longitude: -74.006, latitude: 40.7128 }
    Locale/timezone locale: 'de-DE', timezoneId: 'Europe/Berlin'
    Color scheme colorScheme: 'dark'
    Offline mode offline: true
    Permissions permissions: ['geolocation', 'notifications']
    Network throttling CDP-based Slow 3G / Fast 3G emulation

    Recipes included: Responsive breakpoint testing, multi-device parallel testing, touch gesture testing, geolocation-dependent features, dark mode visual regression, and network condition emulation.

    See .agents/tools/browser/playwright-emulation.md for complete documentation with code examples.

    Anti-Detect Browser

    Open-source alternative to AdsPower, GoLogin, and OctoBrowser for multi-account automation and bot detection evasion.

    Architecture:

    Layer 4: CAPTCHA Solving    → CapSolver (existing)
    Layer 3: Network Identity   → Proxies (residential/SOCKS5/VPN per profile)
    Layer 2: Browser Identity   → Camoufox (C++ fingerprint injection)
    Layer 1: Automation Stealth → rebrowser-patches (CDP leak prevention)
    Layer 0: Browser Engine     → Playwright (existing)

    Profile Types:

    Type Cookies Fingerprint Use Case
    Persistent Saved Fixed per profile Account management, stay logged in
    Clean None Random each launch Scraping, one-off tasks
    Warm Saved Fixed Pre-warmed accounts (browsing history)
    Disposable None Random Single-use, maximum anonymity

    Quick Start:

    # Setup
    anti-detect-helper.sh setup
    
    # Create profile with proxy
    anti-detect-helper.sh profile create "my-account" --type persistent --os macos
    
    # Launch (Camoufox with auto-generated fingerprint)
    anti-detect-helper.sh launch --profile "my-account" --headless
    
    # Test detection (BrowserScan, SannyBot)
    anti-detect-helper.sh test --profile "my-account"
    
    # Warm up profile with browsing history
    anti-detect-helper.sh warmup "my-account" --duration 30m

    Engine Selection:

    Engine Stealth Level Speed Best For
    Camoufox (Firefox) High (C++ level) Medium Full anti-detect, fingerprint rotation
    rebrowser-patches (Chromium) Medium (CDP patches) Fast Quick stealth on existing Playwright code

    See .agents/tools/browser/anti-detect-browser.md for the full decision tree and subagent index.

    Repomix - AI Context Generation

    Repomix packages your codebase into AI-friendly formats for sharing with AI assistants. This framework includes optimized Repomix configuration for consistent context generation.

    Why Repomix?

    Use Case Tool When to Use
    Interactive coding Augment Context Engine Real-time semantic search during development
    Share with external AI Repomix Self-contained snapshot for ChatGPT, Claude web, etc.
    Architecture review Repomix (compress) 80% token reduction, structure only
    CI/CD integration GitHub Action Automated context in releases

    Quick Usage

    # Pack current repo with configured defaults
    npx repomix
    
    # Compress mode (~80% smaller, structure only)
    npx repomix --compress
    
    # Or use the helper script
    .agents/scripts/context-builder-helper.sh pack      # Full context
    .agents/scripts/context-builder-helper.sh compress  # Compressed

    Configuration Files

    File Purpose
    repomix.config.json Default settings (style, includes, security)
    .repomixignore Additional exclusions beyond .gitignore
    repomix-instruction.md Custom AI instructions included in output

    Key Design Decisions

    • No pre-generated files: Outputs are generated on-demand to avoid staleness
    • Inherits .gitignore: Security patterns automatically respected
    • Secretlint enabled: Scans for exposed credentials before output
    • Symlinks excluded: Avoids duplicating .agents/ content

    MCP Integration

    Repomix runs as an MCP server for direct AI assistant integration:

    {
      "repomix": {
        "type": "local",
        "command": ["repomix", "--mcp"],
        "enabled": true
      }
    }

    Install globally first: bun install -g repomix (done automatically by setup.sh)

    See .agents/tools/context/context-builder.md for complete documentation.

    Augment Context Engine

    Augment Context Engine provides semantic codebase retrieval - understanding your code at a deeper level than simple text search. It's the recommended tool for real-time interactive coding sessions.

    Why Augment Context Engine?

    Feature grep/glob Augment Context Engine
    Text matching Exact patterns Semantic understanding
    Cross-file context Manual Automatic
    Code relationships None Understands dependencies
    Natural language No Yes

    Use it to:

    • Find related code across your entire codebase
    • Understand project architecture quickly
    • Discover patterns and implementations
    • Get context-aware code suggestions

    Quick Setup

    # 1. Install Auggie CLI (requires Node.js 22+)
    npm install -g @augmentcode/auggie@prerelease
    
    # 2. Authenticate (opens browser)
    auggie login
    
    # 3. Verify installation
    auggie token print

    MCP Integration

    Add to your AI assistant's MCP configuration:

    OpenCode (~/.config/opencode/opencode.json):

    {
      "mcp": {
        "augment-context-engine": {
          "type": "local",
          "command": ["auggie", "--mcp"],
          "enabled": true
        }
      }
    }

    claude-code CLI:

    claude mcp add-json auggie-mcp --scope user '{"type":"stdio","command":"auggie","args":["--mcp"]}'

    Verification

    Test with this prompt:

    What is this project? Please use the codebase retrieval tool to get the answer.

    The AI should provide a semantic understanding of your project architecture.

    Repomix vs Augment Context Engine

    Use Case Tool When to Use
    Interactive coding Augment Context Engine Real-time semantic search during development
    Share with external AI Repomix Self-contained snapshot for ChatGPT, Claude web, etc.
    Architecture review Repomix (compress) 80% token reduction, structure only
    CI/CD integration Repomix GitHub Action Automated context in releases

    See .agents/tools/context/augment-context-engine.md for complete documentation.

    llm-tldr - Semantic Code Analysis

    llm-tldr extracts code structure and semantics, saving ~95% tokens compared to raw code. From the Continuous-Claude project.

    # Install
    pip install llm-tldr
    
    # CLI usage
    tldr tree ./src                    # File structure with line counts
    tldr structure src/auth.py         # Code skeleton (classes, functions)
    tldr context src/auth.py           # Full semantic analysis
    tldr search "authentication" ./src # Semantic code search
    tldr impact src/auth.py validate   # What would change affect?

    MCP Integration:

    {
      "llm-tldr": {
        "command": "tldr-mcp",
        "args": ["--project", "${workspaceFolder}"]
      }
    }

    Feature Token Savings Use Case
    Structure extraction 90% Understanding code layout
    Context analysis 95% Comprehensive code understanding
    Semantic search N/A Finding code by meaning
    Impact analysis N/A Change risk assessment

    See .agents/tools/context/llm-tldr.md for complete documentation.

    Cross-Tool Compatibility

    Agent Skills Standard

    aidevops implements the Agent Skills standard for cross-tool compatibility. Skills are auto-discovered by compatible AI assistants.

    Generated SKILL.md files in ~/.aidevops/agents/ provide skill metadata following the Agent Skills standard. These are discoverable by any compatible tool.
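
    A SKILL.md is a Markdown file with YAML frontmatter carrying the metadata; a minimal sketch (the skill name and body here are illustrative, not a file the framework literally emits):

    ```markdown
    ---
    name: email-health-check
    description: Check SPF/DKIM/DMARC/MX records and blacklist status for a domain.
    ---

    # Email Health Check

    Run the email deliverability checks and report findings for the domain the user names.
    ```

    Compatible tools read the frontmatter to decide when to load the skill body into context.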

    Claude Code Plugin Marketplace

    aidevops is registered as a Claude Code plugin marketplace. Install with two commands:

    /plugin marketplace add marcusquinn/aidevops
    /plugin install aidevops@aidevops

    This installs the complete framework: 13 primary agents, 900+ subagents, and 390+ helper scripts.

    Importing External Skills

    Import skills from GitHub or ClawdHub using the aidevops skill CLI:

    # Import from GitHub (auto-detects format)
    aidevops skill add owner/repo
    
    # Import from ClawdHub (skill registry with vector search)
    aidevops skill add clawdhub:caldav-calendar
    aidevops skill add https://clawdhub.com/owner/slug
    
    # More examples
    aidevops skill add anthropics/skills/pdf           # Specific skill from multi-skill repo
    aidevops skill add vercel-labs/agent-skills         # All skills from a repo
    aidevops skill add expo/skills --name expo-dev      # Custom name
    aidevops skill add owner/repo --dry-run             # Preview without changes

    Supported sources:

    • GitHub repos (owner/repo or full URL) — fetched via git clone
    • ClawdHub (clawdhub:slug or full URL) — fetched via Playwright browser automation

    Supported formats:

    • SKILL.md - Agent Skills standard (preferred)
    • AGENTS.md - Claude Code agents format
    • .cursorrules - Cursor rules format (auto-converted)

    Features:

    • Auto-detection of skill format and category placement
    • Conflict detection with merge/replace/rename options
    • Upstream commit tracking for update detection (aidevops skill check)
    • Conversion to aidevops subagent format with YAML frontmatter
    • Registry stored in .agents/configs/skill-sources.json
    • Telemetry disabled (no data sent to skills.sh or other services)
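
    The registry entry shape might look like this (a purely hypothetical illustration of commit-based tracking; the actual schema is whatever .agents/configs/skill-sources.json contains):

    ```json
    {
      "skills": [
        {
          "name": "caldav-calendar",
          "source": "clawdhub:caldav-calendar",
          "upstreamCommit": "abc1234"
        }
      ]
    }
    ```

    aidevops skill check can then compare the stored commit against upstream to flag skills with available updates.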

    How it differs from npx add-skill:

    aidevops skill add npx add-skill
    Target Converts to aidevops format in .agents/ Copies SKILL.md to agent-specific dirs
    Tracking Git commit-based upstream tracking Lock file with content hashes
    Telemetry Disabled Sends anonymous install counts
    Scope OpenCode-first 22+ agents
    Updates aidevops skill check (GitHub API) npx skills check (Vercel API)

    See .agents/scripts/add-skill-helper.sh for implementation details.

    AI Agents & Subagents

    Agents are specialized AI personas with focused knowledge and tool access. Instead of giving your AI assistant access to everything at once (which wastes context tokens), agents provide targeted capabilities for specific tasks.

    Call them in your AI assistant conversation with a simple @mention.

    How Agents Work

    Concept Description
    Main Agent Domain-focused assistant (e.g., WordPress, SEO, DevOps)
    Subagent Specialized assistant for specific services (invoked with @mention)
    MCP Tools Only loaded when relevant agent is invoked (saves tokens)

    Main Agents

    Primary agents as registered in subagent-index.toon (13 total). MCPs are loaded on-demand per subagent, not per primary agent:

    Name File Purpose Model Tier
    Build+ build-plus.md Enhanced Build with context tools (default agent) opus
    Automate automate.md Scheduling, dispatch, monitoring, background orchestration sonnet
    Accounts accounts.md Financial operations opus
    Business business.md Company orchestration via AI runners sonnet
    Content content.md Content creation workflows opus
    Health health.md Health and wellness opus
    Legal legal.md Legal compliance opus
    Marketing marketing.md Marketing strategy, email campaigns, paid ads, CRO opus
    Research research.md Research and analysis tasks gemini/grok
    Sales sales.md Sales operations and CRM pipeline opus
    SEO seo.md SEO optimization and analysis opus
    Social-Media social-media.md Social media management opus
    Video video.md AI video generation and prompt engineering opus

    Specialist subagents (@plan-plus, @aidevops, @wordpress, Build-Agent, Build-MCP, etc.) live under tools/ or as mode: subagent files and are invoked via @mention when domain expertise is needed. See subagent-index.toon for the full listing.

    Example Subagents with MCP Integration

    These are examples of subagents that have supporting MCPs enabled. See .agents/ for the full list of 900+ subagents organized by domain.

    Agent Purpose MCPs Enabled
    @hostinger Hosting, WordPress, DNS, domains hostinger-api
    @hetzner Cloud servers, firewalls, volumes hetzner-* (multi-account)
    @wordpress Local dev, MainWP management localwp, context7
    @seo Search Console, keyword research, domain intelligence gsc, ahrefs, serper, context7
    @dataforseo SERP, keywords, backlinks, on-page analysis (curl subagent)
    @domain-research DNS intelligence: rDNS, subdomains, CNAMEs (THC + Reconeer) (API-based)
    @serper Google Search API (web, images, news, places) serper
    @list-keys List all configured API keys and storage locations (read-only)
    @code-standards Quality standards reference, compliance checking context7
    @browser-automation Testing, scraping, DevTools chrome-devtools, context7
    @performance Core Web Vitals, network analysis, accessibility chrome-devtools
    @git-platforms GitHub, GitLab, Gitea gh_grep, context7
    @sentry Error monitoring, Next.js SDK setup sentry
    @socket Dependency security scanning socket
    @security-analysis AI-powered vulnerability detection (OSV, Ferret, git history) osv-scanner, gemini-cli-security
    @secretlint Detect exposed secrets in code (Docker-based)
    @snyk Security vulnerability scanning (API-based)
    @auditing Code auditing services and security analysis (API-based)
    @agent-review Session analysis, agent improvement (under build-agent/) (read/write only)

    Setup for OpenCode

    # Install aidevops agents for OpenCode
    .agents/scripts/generate-opencode-agents.sh
    
    # Check status
    .agents/scripts/generate-opencode-agents.sh  # Shows status after generation

    Claude Marketplace

    aidevops is available in the Claude marketplace:

    /plugin marketplace add marcusquinn/aidevops
    /plugin install aidevops-all@aidevops

    Agent Skills (SKILL.md): Auto-discovered from ~/.aidevops/agents/ after running setup.sh. Compatible with any tool that supports the Agent Skills standard.

    Continuous Improvement with @agent-review

    End every session by calling @agent-review to analyze what worked and what didn't:

    @agent-review analyze this session and suggest improvements to the agents used

    The review agent will:

    1. Identify which agents were used
    2. Evaluate missing, incorrect, or excessive information
    3. Suggest specific improvements to agent files
    4. Generate ready-to-apply edits
    5. Optionally compose a PR to contribute improvements back to aidevops

    This creates a feedback loop:

    Session → @agent-review → Improvements → Better Agents → Better Sessions
                    ↓
             PR to aidevops repo (optional)

    Contributing improvements:

    @agent-review create a PR for improvement #2

    The agent will create a branch, apply changes, and submit a PR to marcusquinn/aidevops with a structured description. Your real-world usage helps improve the framework for everyone.

    Code quality learning loop:

    The @code-quality agent also learns from issues. After fixing violations from SonarCloud, Codacy, ShellCheck, etc., it analyzes patterns and updates framework guidance to prevent recurrence:

    Quality Issue → Fix Applied → Pattern Identified → Framework Updated → Issue Prevented

    Slash Commands (OpenCode)

    Slash commands provide quick access to common workflows directly from the OpenCode prompt. Type / to see available commands.

    Available Commands

    Planning & Task Management:

    Command Purpose
    /list-todo List tasks with sorting, filtering, and grouping
    /save-todo Save discussion as task or plan (auto-detects complexity)
    /plan-status Check status of plans in TODO.md and todo/PLANS.md
    /create-prd Create a Product Requirements Document for complex features
    /generate-tasks Generate implementation tasks from a PRD
    /log-time-spent Log time spent on a task for tracking
    /ready Show tasks with no open blockers (Beads integration)
    /sync-beads Sync TODO.md/PLANS.md with Beads task graph
    /remember Store knowledge for cross-session recall
    /recall Search memories from previous sessions

    Plans are tracked in TODO.md (all tasks) and todo/PLANS.md (complex execution plans). Task dependencies are visualized with Beads.

    /list-todo options:

    Option Example Purpose
    --priority /list-todo -p Sort by priority (high → low)
    --estimate /list-todo -e Sort by time estimate (shortest first)
    --tag /list-todo -t seo Filter by tag
    --owner /list-todo -o marcus Filter by assignee
    --estimate /list-todo --estimate "<2h" Filter by estimate range
    --group-by /list-todo -g tag Group by tag, owner, status, or estimate
    --plans /list-todo --plans Include full plan details
    --compact /list-todo --compact One-line per task

    Time Tracking: Tasks support time estimates as active session time (excluding AFK gaps) using the format ~4h started:2025-01-15T10:30Z. session-time-helper.sh analyses real session data to calibrate estimates against actuals.
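
    As an illustration of the estimate syntax, a tiny parser (hypothetical, not part of the framework) that converts the ~Nh / ~Nm prefix into minutes:

    ```shell
    # estimate_minutes '~4h' -> minutes; supports h and m suffixes (illustrative only).
    estimate_minutes() {
      est="${1#\~}"            # strip the leading ~
      num="${est%?}"           # everything except the unit character
      unit="${est#"$num"}"     # the trailing unit: h or m
      case "$unit" in
        h) echo $((num * 60)) ;;
        m) echo "$num" ;;
        *) echo "unsupported unit: $unit" >&2; return 1 ;;
      esac
    }

    estimate_minutes "~4h"   # prints 240
    ```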

    Risk Levels: Tasks support risk:low/med/high to indicate the level of human oversight needed:

    Risk Oversight Example
    low Autonomous Docs, formatting, simple refactors
    med Supervised Feature implementation, API changes
    high Engaged Security, data migrations, infrastructure

    Configure time tracking per-repo via .aidevops.json.

    Development Workflow (typical order):

    Command Purpose
    /context Build AI context with Repomix for complex tasks
    /feature Start a new feature branch workflow
    /bugfix Start a bugfix branch workflow
    /hotfix Start an urgent hotfix workflow
    /linters-local Run local linting (shfmt, ShellCheck, secretlint)
    /code-audit-remote Run remote auditing (CodeRabbit, Codacy, SonarCloud)
    /code-standards Check against documented quality standards
    /code-simplifier Simplify and refine code for clarity and maintainability
    /testing-setup Interactive per-repo testing infrastructure setup with bundle-aware defaults
    /list-keys List all configured API keys and their storage locations
    /performance Web performance audit (Core Web Vitals, Lighthouse, PageSpeed)
    /pr Unified PR workflow (orchestrates all checks)
    /cross-review Dispatch prompt to multiple AI models, diff results, auto-score
    /tech-stack Detect technology stacks of URLs or find sites using specific technologies
    /mission Scope a high-level goal into milestones and features for autonomous execution
    /budget-analysis Analyze AI model spend, burn rate, and cost optimization opportunities

    Content Workflow:

    Command Purpose
    /humanise Remove AI writing patterns, make text sound human

    Media:

    Command Purpose
    /yt-dlp Download YouTube videos/audio/playlists, extract transcripts, convert local files

    SEO Workflow:

    | Command | Purpose |
    | --- | --- |
    | `/keyword-research` | Seed keyword expansion with volume, CPC, difficulty |
    | `/autocomplete-research` | Google autocomplete long-tail discovery |
    | `/keyword-research-extended` | Full SERP analysis with weakness detection |
    | `/webmaster-keywords` | Keywords from GSC + Bing for your verified sites |
    | `/neuronwriter` | Content optimization with NLP term recommendations and scoring |
    | `/seo-export` | Export SEO data from GSC, Bing, Ahrefs, DataForSEO to TOON format |
    | `/seo-analyze` | Analyze exported data for quick wins, striking distance, low CTR |
    | `/seo-opportunities` | Combined export + analysis workflow |
    | `/seo-audit` | Comprehensive SEO audit: technical, on-page, content, E-E-A-T |

    SEO Debugging & Auditing (subagents in seo/):

    | Subagent | Purpose |
    | --- | --- |
    | `@seo-audit` | Comprehensive SEO audit: technical, on-page, content quality, E-E-A-T |
    | `@debug-opengraph` | Validate Open Graph meta tags, preview social sharing |
    | `@debug-favicon` | Validate favicon setup across platforms (ico, apple-touch, manifest) |

    Email Deliverability:

    | Command | Purpose |
    | --- | --- |
    | `/email-health-check` | Check SPF/DKIM/DMARC/MX records and blacklist status |

    Release Workflow (in order):

    | Command | Purpose |
    | --- | --- |
    | `/preflight` | Run quality checks before release |
    | `/changelog` | Update CHANGELOG.md with recent changes |
    | `/version-bump` | Bump version following semver |
    | `/release` | Full release workflow (bump, tag, GitHub release, auto-changelog) |
    | `/postflight` | Verify release health after deployment |

    Auto-Task Completion: The release workflow automatically marks tasks as complete when commit messages reference them (e.g., Closes t037, Fixes t042). Tasks in TODO.md are updated with completed: timestamps.
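    The commit-message matching described above can be sketched as a small shell filter. This is an illustrative sketch only; the framework's actual matcher may use different keywords or patterns, and the `t`-prefixed IDs follow the TODO.md convention shown in the example:

```shell
# Extract task IDs (e.g. t037) from commit messages that reference
# them with "Closes" or "Fixes", one ID per line, deduplicated.
extract_task_ids() {
  grep -oiE '(closes|fixes) t[0-9]+' | grep -oE 't[0-9]+' | sort -u
}
```

    Typical use would be piping commit subjects through it, e.g. `git log --format=%s main..HEAD | extract_task_ids`.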

    Documentation:

    | Command | Purpose |
    | --- | --- |
    | `/readme` | Create or update README.md (supports --sections for partial updates) |

    Meta/Improvement:

    | Command | Purpose |
    | --- | --- |
    | `/add-skill` | Import external skills from GitHub repos (SKILL.md, AGENTS.md, .cursorrules) |
    | `/agent-review` | Analyze session and suggest agent improvements |
    | `/session-review` | Review session for completeness and capture learnings |
    | `/full-loop` | End-to-end development loop (task → preflight → PR → postflight → deploy) |
    | `/preflight-loop` | Run preflight checks iteratively until all pass |
    | `/runners` | Batch dispatch tasks to parallel agents (task IDs, PR URLs, or descriptions) |
    | `/log-issue-aidevops` | Report issues with aidevops (gathers diagnostics, checks duplicates, creates GitHub issue) |

    AI Model Comparison:

    | Command | Purpose |
    | --- | --- |
    | `/compare-models` | Compare AI models by pricing, context, capabilities (with live web data) |
    | `/compare-models-free` | Compare AI models using offline embedded data only (no web fetches) |
    | `/score-responses` | Score and compare actual model responses with structured criteria |
    | `/route` | Suggest optimal model tier for a task description |

    Ralph Loop - Iterative AI Development

    The Ralph Loop (named after Ralph Wiggum's persistent optimism) enables autonomous iterative development. The AI keeps working on a task until it's complete, automatically resolving issues that arise.

    How it works:

    Task → Implement → Check → Fix Issues → Re-check → ... → Complete
             ↑                    ↓
             └────────────────────┘ (loop until done)
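    A minimal sketch of this loop in shell, assuming a generic check command and fix command (the real `/preflight-loop` adds logging, auto-fix strategies, and richer reporting):

```shell
# Run a check, attempt a fix on failure, and stop after a bounded
# number of iterations so the loop cannot run forever.
ralph_loop() {
  check_cmd=$1 fix_cmd=$2 max=${3:-5}
  i=0
  while [ "$i" -lt "$max" ]; do
    if sh -c "$check_cmd"; then
      echo "passed after $i fix(es)"
      return 0
    fi
    sh -c "$fix_cmd"       # attempt an automatic fix, then re-check
    i=$((i + 1))
  done
  echo "gave up after $max iterations" >&2
  return 1
}
```

    For example, `ralph_loop "npm test" "npx eslint --fix ." 3` would re-run tests after each lint auto-fix, up to three times.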

    Usage:

    # Run quality checks iteratively until all pass
    /preflight-loop --auto-fix --max-iterations <MAX_ITERATIONS>
    
    # Or run linters directly
    .agents/scripts/linters-local.sh

    Note: Store any API credentials securely via environment variables or .env files (never commit credentials to version control).

    Key features:

    • Automatic issue detection and resolution
    • Configurable max iterations (prevents infinite loops)
    • Works with any quality check (linting, tests, builds)
    • Detailed logging of each iteration

    See .agents/workflows/ralph-loop.md for the full workflow guide.

    Full Loop - End-to-End Development Automation

    The Full Loop chains all development phases into a single automated workflow:

    Task Development → Preflight → PR Create → PR Review → Postflight → Deploy

    Usage:

    # Start a full development loop
    /full-loop "Implement feature X with tests"
    
    # With options
    /full-loop "Fix bug Y" --max-task-iterations 30 --skip-postflight

    Options:

    | Option | Description |
    | --- | --- |
    | `--max-task-iterations N` | Max iterations for task (default: 50) |
    | `--skip-preflight` | Skip preflight checks |
    | `--skip-postflight` | Skip postflight monitoring |
    | `--no-auto-pr` | Pause for manual PR creation |

    The loop pauses for human input at merge approval, rollback decisions, and scope changes.
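    The phase chain and its merge-approval pause point can be sketched as follows. The phase commands here are caller-supplied placeholders, not the framework's real entry points:

```shell
# Run one named phase; report and stop the chain if it fails.
run_phase() {
  echo "phase: $1"
  sh -c "$2" || { echo "phase $1 failed" >&2; return 1; }
}

# Chain phases in order, pausing for a human decision before
# continuing past PR creation to postflight.
full_loop() {
  run_phase task "$1" &&
    run_phase preflight "$2" &&
    run_phase pr-create "$3" || return 1
  printf 'merge approval required - continue? [y/N] '
  read -r answer && [ "$answer" = "y" ] || return 1
  run_phase postflight "$4"
}
```

    In an interactive session the `read` blocks on the terminal, which is exactly the human-input pause the loop needs.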

    See .agents/scripts/commands/full-loop.md for complete documentation.

    Git Worktrees - Parallel Branch Development

    Work on multiple branches simultaneously without stashing or switching. Each branch gets its own directory.

    Recommended: Worktrunk (wt) - Git worktree management with shell integration, CI status, and PR links:

    # Install (macOS/Linux)
    brew install max-sixty/worktrunk/wt && wt config shell install
    # Restart your shell for shell integration to take effect
    
    # Create worktree + cd into it
    wt switch -c feature/my-feature
    
    # Create worktree + start any AI CLI (-x runs command after switch)
    wt switch -c -x claude feature/ai-task
    
    # List worktrees with CI status and PR links
    wt list
    
    # Merge + cleanup (squash/rebase options)
    wt merge

    Fallback (no dependencies):

    ~/.aidevops/agents/scripts/worktree-helper.sh add feature/my-feature
    # Creates: ~/Git/{repo}-feature-my-feature/ (cd there manually)
    ~/.aidevops/agents/scripts/worktree-helper.sh list
    ~/.aidevops/agents/scripts/worktree-helper.sh clean
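    Both the helper and Worktrunk wrap plain `git worktree`, so the raw commands work with no extra tooling. Demonstrated here in a throwaway repo:

```shell
# Create a scratch repo with one commit so worktrees can be added.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"

# One directory per branch, created as a sibling of the main checkout.
git worktree add -q -b feature/my-feature ../repo-feature-my-feature
git worktree list                                # shows both checkouts
git worktree remove ../repo-feature-my-feature   # cleanup when done
```

    The helper's `~/Git/{repo}-feature-my-feature/` naming maps directly onto this `../repo-feature-my-feature` pattern.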

    Benefits:

    • Run tests on one branch while coding on another
    • Compare implementations side-by-side
    • No context switching or stash management
    • Each AI session can work on a different branch

    Worktree-first workflow: The pre-edit check now enforces worktrees as the default when creating branches, keeping your main directory on main. This prevents uncommitted changes from blocking branch switches and ensures parallel sessions don't inherit wrong branch state.

    See .agents/workflows/worktree.md for the complete guide and .agents/tools/git/worktrunk.md for Worktrunk documentation.

    Session Management - Parallel AI Sessions

    Spawn new AI sessions for parallel work or fresh context. The framework detects natural session completion points and suggests next steps.

    Completion signals:

    | Signal | Meaning |
    | --- | --- |
    | PR merged | Branch work complete |
    | Release published | Version shipped |
    | All tests passing | Quality gates satisfied |
    | User says "done" | Explicit completion |

    Spawning options:

    # Background session (same terminal)
    opencode --non-interactive --prompt "Continue with feature X" &
    
    # New terminal tab (macOS)
    osascript -e 'tell application "Terminal" to do script "cd ~/Git/project && opencode"'
    
    # Worktree-based (isolated branch) - recommended
    wt switch -c -x opencode feature/next-feature  # Worktrunk: create + start AI CLI
    # Or fallback:
    # ~/.aidevops/agents/scripts/worktree-helper.sh add feature/next-feature
    # cd ~/Git/{repo}-feature-next-feature && opencode

    Session handoff pattern:

    When ending a session, the AI provides a continuation prompt for the next session:

    ## Continuation Prompt
    [Copy this to start a new session with full context]

    See .agents/workflows/session-manager.md for the complete guide.

    Cross-Session Memory System

    "Compound, then clear" - Sessions should build on each other. The memory system stores knowledge, patterns, and learnings for future sessions using SQLite FTS5 for fast full-text search, with opt-in semantic similarity search via vector embeddings.

    Slash commands:

    | Command | Purpose |
    | --- | --- |
    | `/remember {content}` | Store a memory with AI-assisted categorization |
    | `/recall {query}` | Search memories by keyword |
    | `/recall --recent` | Show 10 most recent memories |
    | `/recall --auto-only` | Search only auto-captured memories |
    | `/recall --stats` | Show memory statistics |
    | `/memory-log` | Show recent auto-captured memories |
    | `/patterns {task}` | Show success/failure patterns for a task type |
    | `/route {task}` | Suggest optimal model tier for a task |

    Memory types:

    | Type | Use For |
    | --- | --- |
    | `WORKING_SOLUTION` | Fixes that worked |
    | `FAILED_APPROACH` | What didn't work (avoid repeating) |
    | `CODEBASE_PATTERN` | Project conventions |
    | `USER_PREFERENCE` | Developer preferences |
    | `TOOL_CONFIG` | Tool setup notes |
    | `DECISION` | Process, workflow, or policy choices (naming, release cadence, branching) |
    | `CONTEXT` | Background info |
    | `ARCHITECTURAL_DECISION` | System-level architecture (service boundaries, data flow, tech stack) |
    | `ERROR_FIX` | Bug fixes and patches |
    | `OPEN_THREAD` | Unresolved questions or follow-ups |
    | `SUCCESS_PATTERN` | Approaches that consistently work |
    | `FAILURE_PATTERN` | Approaches that consistently fail |

    Semantic search (opt-in):

    # Enable semantic similarity search (~90MB model download)
    memory-embeddings-helper.sh setup
    
    # Search by meaning, not just keywords
    memory-helper.sh recall "how to optimize queries" --semantic

    Pattern tracking:

    # Record what worked (via memory system)
    memory-helper.sh store --type SUCCESS_PATTERN --content "Structured debugging found root cause" --tags "bugfix,sonnet"
    
    # Record what failed
    memory-helper.sh store --type FAILURE_PATTERN --content "Blind refactor without tests caused regressions" --tags "refactor"
    
    # Get suggestions for a new task
    memory-helper.sh recall "refactor auth middleware" --semantic

    Auto-capture: AI agents automatically store memories using --auto flag when they detect working solutions, failed approaches, or decisions. Privacy filters strip <private> tags and reject secret patterns.

    CLI usage:

    # Store a memory
    ~/.aidevops/agents/scripts/memory-helper.sh store --type "WORKING_SOLUTION" --content "Fixed CORS with nginx headers" --tags "cors,nginx"
    
    # Store auto-captured memory (from AI agent)
    ~/.aidevops/agents/scripts/memory-helper.sh store --auto --content "Fixed CORS with nginx headers" --type WORKING_SOLUTION
    
    # Recall memories (keyword search - default)
    ~/.aidevops/agents/scripts/memory-helper.sh recall "cors"
    
    # Recall memories (semantic similarity - opt-in)
    ~/.aidevops/agents/scripts/memory-helper.sh recall "cors" --semantic
    
    # Show auto-capture log
    ~/.aidevops/agents/scripts/memory-helper.sh log
    
    # View statistics (includes auto-capture counts)
    ~/.aidevops/agents/scripts/memory-helper.sh stats
    
    # Maintenance
    ~/.aidevops/agents/scripts/memory-helper.sh validate   # Check for stale entries
    ~/.aidevops/agents/scripts/memory-helper.sh prune      # Remove stale memories
    
    # Namespaces (per-runner memory isolation)
    ~/.aidevops/agents/scripts/memory-helper.sh --namespace my-runner store --content "Runner learning"
    ~/.aidevops/agents/scripts/memory-helper.sh --namespace my-runner recall "query" --shared
    ~/.aidevops/agents/scripts/memory-helper.sh namespaces  # List all namespaces

    Storage: ~/.aidevops/.agent-workspace/memory/memory.db (+ optional embeddings.db for semantic search, namespaces/ for per-runner isolation)

    See .agents/memory/README.md for complete documentation.

    Installation

    Slash commands are automatically installed by setup.sh for both OpenCode and Claude Code:

    # OpenCode commands deployed to:
    ~/.config/opencode/command/
    
    # Claude Code commands deployed to:
    ~/.claude/commands/
    
    # Regenerate commands manually:
    .agents/scripts/generate-opencode-commands.sh
    .agents/scripts/generate-claude-commands.sh

    Both generators read from the same source (.agents/scripts/commands/*.md), ensuring command parity across tools.

    Usage

    In OpenCode or Claude Code, type the command at the prompt:

    /preflight
    /release minor
    /feature add-user-authentication

    Commands invoke the corresponding workflow subagent with appropriate context.


    Agent Lifecycle (Three Tiers)

    User-created agents survive aidevops update. Agents progress through tiers as they mature:

    | Tier | Location | Purpose | Survives Update |
    | --- | --- | --- | --- |
    | Draft | `~/.aidevops/agents/draft/` | R&D, experimental, auto-created by orchestration tasks | Yes |
    | Custom | `~/.aidevops/agents/custom/` | User's permanent private agents | Yes |
    | Shared | `.agents/` in repo | Open-source, distributed to all users | Managed by repo |

    Promotion workflow: Draft agents that prove useful can be promoted to custom (private) or shared (open-source via PR). Orchestration agents can create drafts in draft/ for reusable parallel processing context.

    Creating Custom Agents

    Create a markdown file in ~/.config/opencode/agent/ (OpenCode) or reference it in your AI's system prompt:

    ---
    description: Short description of what this agent does
    mode: subagent
    temperature: 0.2
    tools:
      bash: true
      specific-mcp_*: true
    ---
    
    # Agent Name
    
    Detailed instructions for the agent...

    See .agents/opencode-integration.md for complete documentation.


    Usage Examples

    Server Management

    # List all servers across providers
    ./.agents/scripts/servers-helper.sh list
    
    # Connect to specific servers
    ./.agents/scripts/hostinger-helper.sh connect example.com
    ./.agents/scripts/hetzner-helper.sh connect main web-server
    
    # Execute commands remotely
    ./.agents/scripts/hostinger-helper.sh exec example.com "uptime"

    Monitoring & Uptime (Updown.io)

    # List all monitors
    ./.agents/scripts/updown-helper.sh list
    
    # Add a new website check
    ./.agents/scripts/updown-helper.sh add https://example.com "My Website"

    Domain & DNS Management

    # Purchase and configure domain
    ./.agents/scripts/spaceship-helper.sh purchase example.com
    ./.agents/scripts/dns-helper.sh cloudflare add-record example.com A 192.168.1.1
    
    # Check domain availability
    ./.agents/scripts/101domains-helper.sh check-availability example.com

    Strategic Keyword Research

    # Basic keyword research with volume, CPC, difficulty
    ./.agents/scripts/keyword-research-helper.sh research "seo tools" --limit 20
    
    # Google autocomplete long-tail discovery
    ./.agents/scripts/keyword-research-helper.sh autocomplete "how to" --provider both
    
    # Extended research with SERP weakness detection
    ./.agents/scripts/keyword-research-helper.sh extended "keywords" --quick
    
    # Competitor keyword research
    ./.agents/scripts/keyword-research-helper.sh extended --competitor ahrefs.com --limit 50
    
    # Keyword gap analysis (find keywords competitor ranks for but you don't)
    ./.agents/scripts/keyword-research-helper.sh extended --gap semrush.com,ahrefs.com
    
    # Domain research (all keywords a domain ranks for)
    ./.agents/scripts/keyword-research-helper.sh extended --domain example.com --limit 100

    Features:

    • 6 Research Modes: Keyword expansion, autocomplete, domain research, competitor research, keyword gap, extended SERP analysis
    • 17 SERP Weaknesses: Low domain score, no backlinks, thin content, UGC-heavy, non-HTTPS, and more
    • KeywordScore Algorithm: 0-100 score based on weakness count, volume, and difficulty
    • Multi-Provider: DataForSEO (primary), Serper (autocomplete), Ahrefs (domain ratings)
    • Locale Support: US/UK/CA/AU/DE/FR/ES with saved preferences
    • Output Formats: Markdown tables (TUI) and CSV export to ~/Downloads
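    For a feel of how weakness count, volume, and difficulty might combine into a 0-100 score, here is a purely hypothetical sketch. The actual KeywordScore formula is not published here; the weights and thresholds below are invented for illustration only:

```shell
# Hypothetical scoring shape: more SERP weaknesses and volume raise
# the score, difficulty lowers it, and the result is clamped to 0-100.
keyword_score() {
  weaknesses=$1 volume=$2 difficulty=$3
  awk -v w="$weaknesses" -v v="$volume" -v d="$difficulty" 'BEGIN {
    s = w * 4 + (v > 1000 ? 30 : v / 1000 * 30) - d / 2
    if (s < 0) s = 0
    if (s > 100) s = 100
    printf "%d\n", s
  }'
}
```

    e.g. `keyword_score 10 5000 40` scores a high-volume keyword with many SERP weaknesses and moderate difficulty.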

    Quality Control & Performance

    # Run quality analysis with auto-fixes
    bash .agents/scripts/qlty-cli.sh check 10
    bash .agents/scripts/qlty-cli.sh fix
    
    # Run chunked Codacy analysis for large repositories
    bash .agents/scripts/codacy-cli-chunked.sh quick     # Fast analysis
    bash .agents/scripts/codacy-cli-chunked.sh chunked   # Full analysis
    
    # AI coding assistance
    bash .agents/scripts/ampcode-cli.sh scan ./src
    bash .agents/scripts/continue-cli.sh review
    
    # Audit website performance
    ./.agents/scripts/pagespeed-helper.sh wordpress https://example.com

    Documentation & Resources

    Wiki Guides:

    Agent Guides (in .agents/):

    • API Integrations - Service APIs
    • Browser Automation - 8 tools + anti-detect stack: decision tree, parallel, extensions, fingerprinting
    • Device Emulation - Mobile/tablet testing: 100+ device presets, viewport, geolocation, locale, dark mode
    • Anti-Detect Browser - Multi-profile management, fingerprint rotation, proxy integration
    • Web Performance - Core Web Vitals, network dependencies, accessibility (Chrome DevTools MCP)
    • PageSpeed - Lighthouse CLI and PageSpeed Insights API
    • Pandoc - Document format conversion
    • Security - Enterprise security standards

    Provider-Specific Guides: Hostinger, Hetzner, Cloudflare, WordPress, Git platforms, Vercel CLI, Coolify CLI, and more in .agents/

    Architecture

    aidevops/
    ├── setup.sh                       # Main setup script
    ├── AGENTS.md                      # AI agent guidance (dev)
    ├── .agents/                        # Agents and documentation
    │   ├── AGENTS.md                  # User guide (deployed to ~/.aidevops/agents/)
    │   ├── *.md                       # 13 primary agents
    │   ├── scripts/                   # 390+ helper scripts
    │   ├── tools/                     # Cross-domain utilities (video, browser, git, etc.)
    │   ├── services/                  # External service integrations
    │   └── workflows/                 # Development process guides
    ├── configs/                       # Configuration templates
    ├── ssh/                           # SSH key management
    └── templates/                     # Reusable templates

    Configuration & Setup

    aidevops is configured through two JSONC/JSON files that control framework behaviour, model routing, quality gates, and user preferences. Both are optional; sensible defaults apply out of the box.

    | File | Purpose | CLI |
    | --- | --- | --- |
    | `~/.config/aidevops/config.jsonc` | Framework config: updates, models, safety, quality, orchestration, paths | `aidevops config` |
    | `~/.config/aidevops/settings.json` | User preferences: onboarding, UI, model routing defaults | `settings-helper.sh` |

    Quick examples:

    # Disable automatic updates
    aidevops config set updates.auto_update false
    
    # Use opus-tier verification for destructive operations
    aidevops config set safety.verification_tier opus
    
    # View all current config values
    aidevops config list
    
    # Validate config against schema
    aidevops config validate

    Precedence: Environment variables (AIDEVOPS_*) > config file > built-in defaults. See the full configuration reference for every option, type, default, and example.
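    That precedence can be sketched as a lookup function. The key-to-variable mapping and the naive grep-based parsing of the JSONC file are simplifying assumptions, not the framework's actual resolver:

```shell
# Resolve a config key: AIDEVOPS_* env var first, then the config
# file, then the caller-supplied built-in default.
config_get() {
  key=$1 default=$2 file=${3:-"$HOME/.config/aidevops/config.jsonc"}
  # e.g. updates.auto_update -> AIDEVOPS_UPDATES_AUTO_UPDATE
  env_name="AIDEVOPS_$(echo "$key" | tr 'a-z.' 'A-Z_')"
  env_val=$(printenv "$env_name") && { echo "$env_val"; return 0; }
  if [ -f "$file" ]; then
    file_val=$(grep -oE "\"${key##*.}\"[[:space:]]*:[[:space:]]*[^,}]+" "$file" |
      head -n 1 | sed 's/.*:[[:space:]]*//; s/[[:space:]]*$//')
    [ -n "$file_val" ] && { echo "$file_val"; return 0; }
  fi
  echo "$default"
}
```

    e.g. `config_get updates.auto_update true` returns the env override if exported, otherwise the file value, otherwise `true`.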

    Service Credentials

    Service-specific credentials (hosting, DNS, Git platforms, etc.) use a separate template system:

    # 1. Copy and customize configuration templates
    cp configs/hostinger-config.json.txt configs/hostinger-config.json
    cp configs/hetzner-config.json.txt configs/hetzner-config.json
    # Edit with your actual credentials
    
    # 2. Test connections
    ./.agents/scripts/servers-helper.sh list
    
    # 3. Install MCP integrations (optional)
    bash .agents/scripts/setup-mcp-integrations.sh all

    See .agents/reference/configuration.md for details on the template system and credential security.

    Security & Best Practices

    Credential Management:

    • Store API tokens in separate config files (never hardcode)
    • Use Ed25519 SSH keys (modern, secure, fast)
    • Set proper file permissions (600 for configs)
    • Regular key rotation and access audits
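    For example, locking a config file down to mode 600 (owner read/write only, nothing for group or world) and verifying it. The path and variable name here are placeholders:

```shell
# Create a config file, restrict it to the owner, and check the mode.
cfg=$(mktemp)
printf 'API_TOKEN=placeholder-value\n' > "$cfg"
chmod 600 "$cfg"
ls -l "$cfg" | cut -c1-10    # -rw-------
```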

    Multi-Tenant Credential Storage

    Manage multiple accounts/clients per service with isolated credential sets:

    # Create a new tenant
    credential-helper.sh create client-acme
    
    # Switch active tenant
    credential-helper.sh switch client-acme
    
    # Set credentials for current tenant
    credential-helper.sh set GITHUB_TOKEN ghp_xxx
    
    # Per-project override (gitignored)
    echo "client-acme" > .aidevops-tenant
    
    # Export for scripts
    source <(credential-helper.sh export)

    Resolution priority: Project .aidevops-tenant → Global active tenant → Default
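    The resolution order can be sketched as below; the exact file locations used by `credential-helper.sh` are assumptions for illustration:

```shell
# Resolve the active tenant: project-local .aidevops-tenant file wins,
# then the global active-tenant file, then "default".
resolve_tenant() {
  project_dir=$1 global_file=$2
  if [ -f "$project_dir/.aidevops-tenant" ]; then
    cat "$project_dir/.aidevops-tenant"
  elif [ -f "$global_file" ]; then
    cat "$global_file"
  else
    echo "default"
  fi
}
```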

    See .agents/tools/credentials/multi-tenant.md for complete documentation.

    Quality Assurance:

    • Multi-platform analysis (SonarCloud, CodeFactor, Codacy, CodeRabbit, Qlty, Snyk, Gemini Code Assist)
    • Automated security monitoring and vulnerability detection

    Contributing & License

    Contributing:

    1. Fork the repository
    2. Create feature branch
    3. Add provider support or improvements
    4. Test with your infrastructure
    5. Submit pull request

    License: MIT - see the LICENSE file for details. Created by Marcus Quinn. Copyright © 2025-2026.


    What This Framework Achieves

    For You:

    • Autonomous project management — dispatch a mission and let AI agents handle milestones, validation, and delivery across days
    • Cross-domain operations — code, automation, business, marketing, legal, sales, content, video, research, SEO, social media, health, and accounts managed through one platform
    • Multi-model safety — destructive operations verified by a second AI provider before execution
    • Enterprise-grade quality — multi-platform analysis, automated security monitoring, continual improvement loops
    • Infrastructure management — 30+ service integrations with standardized commands across all providers

    For Your AI Agents:

    • Autonomous supervisor — pulse runs every 2 minutes, merging PRs, dispatching workers, killing stuck processes, advancing missions
    • Operational intelligence — struggle-ratio detection, orphaned PR recovery, circuit breaker, dynamic concurrency
    • Cost-aware routing — 7-tier model selection (local → haiku → flash → sonnet → pro → opus → grok) with budget tracking
    • Progressive context — 900+ subagents loaded on demand, project bundles auto-configuring quality gates and model tiers
    • Self-improving — session mining extracts learnings, quality findings auto-create tasks, patterns feed back into agent prompts

    Get Started:

    # npm (recommended)
    npm install -g aidevops && aidevops update
    
    # Bun (fast alternative)
    bun install -g aidevops && aidevops update
    
    # Homebrew
    brew install marcusquinn/tap/aidevops && aidevops update
    
    # Direct from source
    bash <(curl -fsSL https://aidevops.sh/install)

    An AI operations platform for launching and managing projects across every business domain — from code to content, infrastructure to invoicing.