    llmpulse-cli

    The official CLI for LLM Pulse — the AI visibility analytics platform that tracks how your brand appears across ChatGPT, Perplexity, Gemini, Google AI Overviews, and other AI search engines.

    Monitor brand mentions, citations, sentiment, and share of voice directly from your terminal. Export data, generate reports, and automate AI visibility tracking in your workflows.

    Installation

    # npm
    npm install -g llmpulse-cli
    
    # pnpm
    pnpm add -g llmpulse-cli
    
    # yarn
    yarn global add llmpulse-cli
    
    # bun
    bun add -g llmpulse-cli
    
    # Run without installing
    npx llmpulse-cli status

    Quick Start

    # Interactive setup — validates your key and sets defaults
    llmpulse login
    
    # Quick project dashboard with sparklines
    llmpulse status
    
    # Weekly visibility report
    llmpulse report --range 7
    
    # Compare this week vs last week
    llmpulse diff --range 7 --compare 7
    
    # Live-updating dashboard
    llmpulse watch --interval 30

    Get your API key from the LLM Pulse dashboard (Scale plan or above).

    Commands

    Dashboard & Reports

    | Command | Description |
    | --- | --- |
    | `login` | Interactive setup — configure API key and default project |
    | `status` | Quick dashboard: metrics, sparklines, SoV, top prompts |
    | `diff` | Compare metrics between two time periods |
    | `report` | Full visibility report (terminal or `--markdown`) |
    | `watch` | Live auto-refreshing dashboard |
    | `export` | Export full project data to JSON + CSV files |
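
    The `export` and `report` commands slot naturally into cron for scheduled tracking. A minimal sketch; the schedule, output path, and project ID are illustrative, not prescribed by the CLI:

```shell
# Illustrative crontab entries; project ID and paths are placeholders.
# Mondays at 06:00: export full project data (JSON + CSV files).
0 6 * * 1  llmpulse export -p YOUR_PROJECT_ID
# Mondays at 06:05: write a weekly markdown report.
5 6 * * 1  llmpulse report --range 7 --markdown > "$HOME/reports/llmpulse-weekly.md"
```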

    Data Queries

    | Command | Description |
    | --- | --- |
    | `dimensions projects` | List/get projects |
    | `dimensions competitors` | List/get competitors |
    | `dimensions collections` | List collections/tags |
    | `dimensions models` | List AI models (ChatGPT, Perplexity, Gemini, etc.) |
    | `dimensions locales` | List tracked countries and languages |
    | `dimensions prompts` | List tracked prompts |
    | `dimensions mentions` | List brand mentions in AI responses |
    | `dimensions citations` | List brand citations (URLs) in AI responses |
    | `dimensions competitor-mentions` | List competitor mentions |
    | `dimensions competitor-citations` | List competitor citations |
    | `dimensions all-mentions` | All mentions (brand + competitors) |
    | `dimensions all-citations` | All citations (brand + competitors) |
    | `metrics timeseries` | Time series data (mentions, citations, visibility, etc.) |
    | `metrics summary` | Aggregated metrics summary |
    | `metrics prompt-summary` | Per-prompt performance breakdown |
    | `metrics sov` | Share of voice analysis |
    | `metrics top-sources` | Top citing domains |
    | `answers list` | List AI model responses |
    | `answers get <id>` | Full response with mentions, citations, sentiments |
    | `sentiments list` | Sentiment analysis records |

    Configuration

    | Command | Description |
    | --- | --- |
    | `config set/get/show` | Manage configuration |
    | `config use <profile>` | Switch between profiles (e.g., production, staging) |
    | `config profiles` | List all profiles |
    | `completions bash\|zsh\|fish` | Generate shell autocompletions |

    Output Formats

    All data commands support three output formats via --format / -f:

    • JSON (default) — pretty-printed on TTY, compact when piped (works with jq)
    • Table — colored, auto-sized columns with sparklines and percent bars
    • CSV — standard CSV for spreadsheets and data pipelines
    # Pipe to jq
    llmpulse metrics summary --metrics mentions,visibility | jq '.summary'
    
    # Export to CSV
    llmpulse dimensions citations --all -f csv > citations.csv
    
    # Auto-paginate all results
    llmpulse dimensions mentions --all -f csv > all-mentions.csv
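
    Exported CSVs work with standard Unix tooling. A quick sketch that tallies citations per domain; the column layout here is an assumption, so check the real CSV header first:

```shell
# Sample data standing in for an exported citations.csv (columns assumed).
printf 'domain,url\nexample.com,https://example.com/a\nexample.com,https://example.com/b\nother.org,https://other.org/x\n' > citations.csv

# Take the first column, drop the header row, and count occurrences per domain.
cut -d, -f1 citations.csv | tail -n +2 | sort | uniq -c | sort -rn
```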

    Global Flags

    | Flag | Short | Description |
    | --- | --- | --- |
    | `--api-key` | `-k` | Override API key |
    | `--base-url` | | Override base URL |
    | `--project-id` | `-p` | Override default project |
    | `--format` | `-f` | Output: `json`, `table`, `csv` |
    | `--page` | | Page number |
    | `--per-page` | | Items per page (max 100) |
    | `--all` | | Auto-paginate all results |
    | `--verbose` | `-v` | Show request URL, timing, request ID |

    Configuration

    Config is stored in ~/.llmpulse/config.json with optional profile overrides.
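
    A sketch of what `~/.llmpulse/config.json` might look like; the exact schema, and in particular how profile overrides nest, is an assumption based on the config keys and profile commands documented in this README:

```json
{
  "api_key": "llmpulse_xxx",
  "base_url": "https://api.llmpulse.ai/api/v1",
  "default_project_id": "proj_xxx",
  "profiles": {
    "staging": { "api_key": "llmpulse_staging_xxx" }
  }
}
```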

    Priority: CLI flag > environment variable > config file > default
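
    That precedence behaves like nested shell fallbacks. A purely illustrative sketch for `base_url` resolution, not the CLI's actual implementation:

```shell
# Illustrative lookup order: flag > env var > config file > default.
flag=""                            # value from --base-url, if the flag was given
env_val="${LLMPULSE_BASE_URL:-}"   # environment variable, if set
file_val=""                        # value read from ~/.llmpulse/config.json
default="https://api.llmpulse.ai/api/v1"

# Each ${x:-y} falls through to the next source when x is empty.
resolved="${flag:-${env_val:-${file_val:-$default}}}"
echo "Resolved base_url: $resolved"
```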

    | Config Key | Environment Variable | Default |
    | --- | --- | --- |
    | `api_key` | `LLMPULSE_API_KEY` | |
    | `base_url` | `LLMPULSE_BASE_URL` | `https://api.llmpulse.ai/api/v1` |
    | `default_project_id` | `LLMPULSE_PROJECT_ID` | |

    Profiles

    Switch between multiple accounts or environments:

    llmpulse config use production
    llmpulse config set api_key llmpulse_prod_xxx
    
    llmpulse config use staging
    llmpulse config set api_key llmpulse_staging_xxx
    
    llmpulse config profiles     # List all profiles
    llmpulse config use production  # Switch back

    Shell Completions

    # Bash — add to ~/.bashrc
    eval "$(llmpulse completions bash)"
    
    # Zsh — add to ~/.zshrc
    eval "$(llmpulse completions zsh)"
    
    # Fish
    llmpulse completions fish > ~/.config/fish/completions/llmpulse.fish

    What is LLM Pulse?

    LLM Pulse is an AI visibility analytics platform that monitors how brands appear in AI-generated search results. It tracks brand mentions, citations, and sentiment across leading AI models including ChatGPT, Perplexity, Gemini, Google AI Overviews, and Google AI Mode.

    Key features include:

    • Brand mention and citation tracking across ChatGPT, Perplexity, Gemini, Google AI Overviews, and Google AI Mode
    • Sentiment analysis and share of voice benchmarking against competitors
    • Data export, reporting, and automated visibility tracking from the terminal

    Requirements

    • Node.js >= 18
    • LLM Pulse API key (Scale plan or above)

    License

    MIT