Recursive Language Model (RLM) code analyzer - Analyze any codebase with AI that can process 100x beyond context limits

Package Exports

  • rlm-analyzer
  • rlm-analyzer/config
  • rlm-analyzer/models
  • rlm-analyzer/types


RLM Analyzer

AI-powered code analysis using Recursive Language Models

Analyze any codebase with AI that can process 100x beyond context limits. Powered by Gemini 3 Flash and based on MIT CSAIL research on Recursive Language Models.

Features

  • 🔍 Deep Code Analysis - Understands entire codebases, not just snippets
  • 🏗️ Architecture Analysis - Maps structure, patterns, and data flow
  • 🔒 Security Scanning - Identifies vulnerabilities and security patterns
  • ⚡ Performance Analysis - Finds bottlenecks and optimization opportunities
  • 🔄 Refactoring Suggestions - Identifies code smells and improvements
  • 🔎 Symbol Search - Find all usages of functions, classes, variables
  • 💬 Custom Questions - Ask anything about your codebase

Installation

npm install -g rlm-analyzer

Local Installation

npm install rlm-analyzer

Quick Start

1. Configure API Key

Get a free API key from Google AI Studio, then:

# Option 1: Use the config command
rlm config YOUR_GEMINI_API_KEY

# Option 2: Set environment variable
export GEMINI_API_KEY=your_api_key

# Option 3: Create .env file in your project
echo "GEMINI_API_KEY=your_api_key" > .env

2. Analyze Your Code

# Get a codebase summary
rlm summary

# Analyze architecture
rlm arch

# Security analysis
rlm security

# Ask a question
rlm ask "How does authentication work?"

Commands

Command                Description
rlm summary            Get a comprehensive codebase summary
rlm arch               Analyze architecture and structure
rlm deps               Analyze dependencies and imports
rlm security           Security vulnerability analysis
rlm perf               Performance analysis
rlm refactor           Find refactoring opportunities
rlm find <symbol>      Find all usages of a symbol
rlm explain <file>     Explain a specific file
rlm ask "<question>"   Ask a custom question
rlm config [key]       Configure or check API key

Options

Option                 Description
--dir, -d <path>       Directory to analyze (default: current)
--verbose, -v          Show detailed turn-by-turn output
--json                 Output results as JSON
--help, -h             Show help

Examples

# Analyze a specific directory
rlm arch --dir /path/to/project

# Find all usages of a function
rlm find "handleSubmit"

# Explain a specific file
rlm explain src/auth/login.ts

# Ask about the codebase
rlm ask "What design patterns are used in this codebase?"

# Get JSON output for scripting
rlm summary --json > analysis.json

# Verbose mode for debugging
rlm security -v
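
The --json output above can be consumed from a script. As a minimal sketch (assuming the JSON carries an answer field, mirroring the result objects returned by the programmatic API):

```typescript
// Hypothetical helper for scripting against `rlm summary --json` output.
// The `answer` field is an assumption based on the programmatic API's
// result shape; check your actual JSON before relying on it.
interface AnalysisResult {
  answer: string;
}

function parseAnswer(json: string): string {
  const result = JSON.parse(json) as AnalysisResult;
  return result.answer;
}
```

You could then pipe the CLI into a small Node script, e.g. `rlm summary --json | node parse-answer.js`, and post-process the answer text.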

Programmatic Usage

import {
  analyzeArchitecture,
  analyzeSecurity,
  askQuestion,
  loadFiles,
} from 'rlm-analyzer';

// Analyze architecture
const result = await analyzeArchitecture('/path/to/project');
console.log(result.answer);

// Ask a custom question
const answer = await askQuestion(
  '/path/to/project',
  'How does the authentication system work?'
);
console.log(answer.answer);

// Load and process files manually
const files = loadFiles('/path/to/project', {
  include: ['.ts', '.tsx'],
  exclude: ['node_modules', 'dist'],
});

How It Works

RLM Analyzer uses Recursive Language Models (RLMs) to analyze codebases that exceed traditional context limits:

  1. File Loading - Loads your codebase into a virtual environment
  2. REPL Execution - AI writes and executes Python-like code to explore files
  3. Sub-LLM Calls - Complex analysis tasks are delegated to specialized sub-queries
  4. Iterative Refinement - Multiple turns of analysis until complete
  5. Final Answer - Synthesized analysis based on deep code exploration

This approach enables analysis of codebases 100x larger than traditional context windows.
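
The loop described above can be sketched in a few lines. This is an illustrative mock, not the package's internals: the SubLLM callback and the chunking strategy are hypothetical stand-ins for the real sub-query delegation:

```typescript
// Illustrative sketch of the recursive-analysis loop (steps 1-5 above).
// `SubLLM` stands in for a real model call; names here are hypothetical.
type SubLLM = (question: string, context: string) => string;

function recursiveAnalyze(
  files: Map<string, string>, // step 1: loaded codebase
  question: string,
  subLLM: SubLLM,
  maxTurns = 3,
): string {
  const notes: string[] = [];
  for (let turn = 0; turn < maxTurns; turn++) { // step 4: iterate
    for (const [filePath, source] of files) {
      // step 3: delegate each chunk to a sub-query instead of one giant prompt
      notes.push(subLLM(question, `${filePath}\n${source.slice(0, 2000)}`));
    }
    if (notes.length >= files.size) break; // stop once every file is covered
  }
  return notes.join('\n'); // step 5: synthesize the collected notes
}
```

The key idea is that no single model call ever sees the whole codebase; each sub-query sees one bounded slice, so total input size is decoupled from the context window.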

Configuration

API Key Storage

Your API key can be stored in multiple locations (checked in order):

  1. GEMINI_API_KEY environment variable
  2. RLM_API_KEY environment variable
  3. .env file in current directory
  4. .env.local file in current directory
  5. ~/.rlm-analyzer/config.json
  6. ~/.config/rlm-analyzer/config.json
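
The lookup order listed above could be implemented roughly as follows. This is a sketch of the documented precedence, not the package's actual code; `resolveApiKey` is a hypothetical name:

```typescript
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

// Sketch of the documented key-resolution order (1-6 above).
function resolveApiKey(cwd = process.cwd()): string | undefined {
  if (process.env.GEMINI_API_KEY) return process.env.GEMINI_API_KEY; // 1
  if (process.env.RLM_API_KEY) return process.env.RLM_API_KEY;       // 2
  for (const envFile of ['.env', '.env.local']) {                    // 3-4
    const p = path.join(cwd, envFile);
    if (fs.existsSync(p)) {
      const m = fs.readFileSync(p, 'utf8').match(/^GEMINI_API_KEY=(.+)$/m);
      if (m) return m[1].trim();
    }
  }
  for (const cfg of [                                                // 5-6
    path.join(os.homedir(), '.rlm-analyzer', 'config.json'),
    path.join(os.homedir(), '.config', 'rlm-analyzer', 'config.json'),
  ]) {
    if (fs.existsSync(cfg)) {
      try {
        return JSON.parse(fs.readFileSync(cfg, 'utf8')).apiKey;
      } catch {
        // malformed config file: fall through to the next location
      }
    }
  }
  return undefined;
}
```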

Global Config File

# Create global config
mkdir -p ~/.rlm-analyzer
echo '{"apiKey": "your_api_key"}' > ~/.rlm-analyzer/config.json

# Or under XDG config
mkdir -p ~/.config/rlm-analyzer
echo '{"apiKey": "your_api_key"}' > ~/.config/rlm-analyzer/config.json

Supported Languages

  • TypeScript / JavaScript
  • Python
  • Java / Kotlin / Scala
  • Go
  • Rust
  • C / C++
  • C#
  • Ruby
  • PHP
  • Swift
  • Vue / Svelte
  • And more...

Security

  • API keys are never logged, and are transmitted only to the Gemini API
  • Code execution happens in a sandboxed environment
  • Dangerous operations (eval, file writes, network calls) are blocked
  • All analysis is read-only
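
A deny-list guard like the one the bullets describe might look like this. This is a simplified illustration of the policy, not the sandbox's real filter; the patterns are hypothetical:

```typescript
// Hypothetical sketch of the sandbox's deny-list for generated code.
// The real package's blocked-operation list may differ.
const BLOCKED: RegExp[] = [
  /\beval\s*\(/,      // dynamic evaluation
  /\bfs\.write/i,     // file writes (analysis is read-only)
  /\bfetch\s*\(/,     // network calls
  /child_process/,    // spawning processes
];

function isAllowed(snippet: string): boolean {
  return !BLOCKED.some((re) => re.test(snippet));
}
```

In practice a pattern deny-list is only a first line of defense; running the generated code in an isolated process or VM (as the sandboxing bullet implies) is what actually enforces the read-only guarantee.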

Troubleshooting

"API key not configured"

# Check if key is set
rlm config

# Set your key
rlm config YOUR_API_KEY

"No files found to analyze"

Make sure you're in a directory with code files, or specify a directory:

rlm summary --dir /path/to/code

Analysis is slow

  • Large codebases take longer to analyze
  • Use --verbose to see progress
  • Consider analyzing specific subdirectories

License

MIT

Credits

Based on research from MIT CSAIL: