VibeSafe
AI Code Security Auditor — catches the vulnerabilities that LLMs introduce and traditional scanners miss.
VibeSafe is a static analysis CLI purpose-built for AI-generated code. It uses AST-based analysis and taint tracking to detect vulnerability patterns that large language models frequently produce — hardcoded secrets, SQL injection, command injection, missing authentication, and more — then provides educational feedback explaining why the LLM made the mistake and how to fix it.
Why VibeSafe?
AI code generators produce functional code fast, but they also introduce predictable security mistakes:
| Pattern | Why it happens |
|---|---|
| Hardcoded API keys and secrets | LLMs hallucinate realistic-looking credentials from training data |
| SQL injection via string concatenation | LLMs default to simple patterns from tutorials |
| Command injection through exec() | Training data is dominated by quick-start examples |
| API routes without authentication | LLMs focus on business logic, not middleware chains |
| Math.random() for tokens | Training data rarely distinguishes casual vs. secure randomness |
| Missing input validation | Examples overwhelmingly skip validation for brevity |
Traditional scanners weren't designed for these patterns. VibeSafe catches them with deep AST analysis and taint tracking, then teaches you how to fix them.
Features
- 10 Detection Rules — High-precision rules targeting vulnerabilities LLMs commonly introduce
- Educational Feedback — "Why LLMs fail" explanations with every finding, plus OWASP/CWE references
- AST-Based Analysis — Deep code structure analysis using tree-sitter, not regex
- Multi-Language — JavaScript, TypeScript, Python, Java, Go
- Taint Tracking — Traces user input from sources (req.body) to sinks (SQL, shell commands)
- Fast — Scans 1000+ file repos in under 5 seconds
- Plugin Architecture — Extend with custom detection rules via the plugin API
- Multiple Output Formats — Terminal, Markdown, HTML, JSON
- Configurable — .vibesaferc.json for per-project settings
- CI/CD Ready — Exit codes, severity thresholds, and GitHub Action integration
Requirements
- Node.js 18 or higher
Installation
# Install globally
npm install -g vibesafe-cli
# Or use directly with npx
npx vibesafe-cli scan ./my-project
Verify Installation
vibesafe --version
vibesafe --help
Quick Start
# Scan your project
vibesafe scan ./my-project
# Scan a single file
vibesafe scan ./src/auth.ts
# Get a Markdown report
vibesafe scan ./my-project --format markdown -o report.md
# Only show HIGH and CRITICAL findings
vibesafe scan ./my-project --severity HIGH
# Enable educational explanations
vibesafe scan ./my-project --education
# Debug mode with detailed output
vibesafe scan ./my-project --verbose
CLI Reference
vibesafe [options] [command]
Global Options:
-v, --version Display the current version
-h, --help Display help
Commands:
scan <path> [options] Scan a directory or file for security vulnerabilities
Scan Command
vibesafe scan <path> [options]
Arguments:
path Path to the file or directory to scan
Options:
-f, --format <format> Output format: terminal (default), markdown, html, json
--severity <level> Minimum severity: LOW, MEDIUM, HIGH, CRITICAL
--ignore <patterns...> Additional glob patterns to ignore
-c, --config <path> Path to config file (default: auto-discover)
-o, --output <file> Write report to file (for markdown, html, json format)
--education Show educational content in output
--explain Show full educational explanations (implies --education)
--no-telemetry Disable anonymous telemetry (persists preference)
--no-update-check Disable update notifications
--api-key <key> VibeSafe Pro API key for unlimited scans
--verbose Show detailed diagnostic output for debugging
-h, --help Display help
Exit Codes
| Code | Meaning |
|---|---|
| 0 | No issues found (or only below minimum severity) |
| 1 | CRITICAL or HIGH severity findings detected, or threshold violations |
| 2 | Error (invalid path, config error, scan failure) |
Detection Rules
VibeSafe ships with 10 security detection rules, each targeting patterns that LLMs frequently produce.
Rule 1: Hardcoded Secrets (hardcoded-secrets)
Severity: HIGH–CRITICAL (named patterns like AWS keys, private keys, and database credentials are CRITICAL; generic entropy-based detections are HIGH)
Detects hardcoded API keys, passwords, and tokens using Shannon entropy analysis and pattern matching. Identifies 24 secret formats including AWS keys, Stripe tokens, GitHub PATs, JWTs, private keys, and database connection strings.
Example — Vulnerable:
// ❌ LLM-generated code with hardcoded secret
const apiKey = "sk-1234567890abcdef1234567890abcdef";
const dbUrl = "postgresql://admin:password123@localhost:5432/mydb";
Example — Safe:
// ✅ Use environment variables
const apiKey = process.env.API_KEY;
const dbUrl = process.env.DATABASE_URL;
Why LLMs fail: LLMs are trained on millions of public repositories containing accidentally committed secrets. They produce realistic-looking credentials in example code — sometimes hallucinated, sometimes memorized from training data.
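The Shannon entropy analysis mentioned above can be sketched in a few lines. This is a toy illustration of the idea, not VibeSafe's actual implementation; the length cutoff and entropy threshold here are assumptions:

```javascript
// Toy Shannon entropy check: high-entropy string literals are secret candidates.
function shannonEntropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  let entropy = 0;
  for (const c of Object.values(counts)) {
    const p = c / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy; // bits per character
}

// Illustrative thresholds; real scanners combine entropy with known key formats.
function looksLikeSecret(literal) {
  return literal.length >= 20 && shannonEntropy(literal) > 3.5;
}

console.log(looksLikeSecret('sk-1234567890abcdef1234567890abcdef')); // true
console.log(looksLikeSecret('hello world'));                        // false
```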
Rule 2: SQL Injection (sql-injection)
Severity: CRITICAL
Detects SQL query construction using string concatenation or template literal interpolation with user-controlled input. Uses taint analysis to track data from req.body, req.params, and req.query into SQL calls.
Example — Vulnerable:
// ❌ String concatenation in SQL query
app.get("/users", (req, res) => {
const query = "SELECT * FROM users WHERE id = " + req.params.id;
db.query(query);
});
Example — Safe:
// ✅ Parameterized query
app.get("/users", (req, res) => {
db.query("SELECT * FROM users WHERE id = ?", [req.params.id]);
});
Why LLMs fail: LLMs generate SQL queries using string concatenation because these are the most common patterns in tutorial code and Stack Overflow answers. AI models don't reason about threat models — they optimize for syntactic correctness.
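The taint analysis described above, which follows data from a source such as req.body to a sink such as db.query, can be illustrated with a toy propagation over straight-line assignments (a conceptual sketch only, far simpler than real AST-based tracking):

```javascript
// Toy taint propagation: a variable is tainted if its value derives from a source.
const SOURCES = ['req.body', 'req.params', 'req.query'];

// Each statement records what it assigns and which names its value reads from.
function taintedVars(statements) {
  const tainted = new Set();
  for (const { assign, from } of statements) {
    const isTainted = from.some(
      (name) => SOURCES.some((s) => name.startsWith(s)) || tainted.has(name)
    );
    if (isTainted) tainted.add(assign);
  }
  return tainted;
}

const program = [
  { assign: 'id', from: ['req.params.id'] }, // tainted: reads a source
  { assign: 'query', from: ['id'] },         // tainted: built from `id`
  { assign: 'limit', from: [] },             // clean: constant value
];
console.log(taintedVars(program)); // contains 'id' and 'query', not 'limit'
```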
Rule 3: Command Injection (command-injection)
Severity: CRITICAL
Detects unsafe child_process usage (exec, spawn, execSync) with user input, dangerous eval() calls, and new Function() constructors with tainted data.
Example — Vulnerable:
// ❌ User input in shell command
app.post("/convert", (req, res) => {
exec("convert " + req.body.filename + " output.png");
});
Example — Safe:
// ✅ Use execFile with argument array
app.post("/convert", (req, res) => {
execFile("convert", [req.body.filename, "output.png"]);
});
Why LLMs fail: LLMs default to exec() with string concatenation because it's the simplest pattern for running shell commands. Training data is dominated by quick-start tutorials that don't consider untrusted input flowing into shell interpreters.
Rule 4: Missing Authentication (missing-auth)
Severity: HIGH
Detects Express/Fastify route definitions without authentication middleware. Checks for auth middleware in the route handler chain, app-level middleware, and configurable public route whitelists.
Example — Vulnerable:
// ❌ No auth middleware on sensitive route
app.get("/api/users", (req, res) => {
res.json(users);
});
Example — Safe:
// ✅ Auth middleware before handler
app.get("/api/users", isAuthenticated, (req, res) => {
res.json(users);
});
Why LLMs fail: LLMs generate route handlers focused on business logic and typically omit authentication middleware because training data is dominated by tutorials that skip security concerns for brevity.
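The isAuthenticated middleware in the safe example is assumed to be defined elsewhere in the app. A minimal sketch of what such a middleware might look like (illustrative only; production apps should verify a session store or a signed token):

```javascript
// Hypothetical auth middleware: reject requests with no authenticated session.
function isAuthenticated(req, res, next) {
  if (req.session && req.session.userId) return next();
  res.status(401).json({ error: 'Authentication required' });
}

// Usage, matching the safe example above:
// app.get("/api/users", isAuthenticated, (req, res) => res.json(users));
```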
Configuration: You can specify additional public routes and auth middleware names:
{
"rules": {
"missing-auth": {
"enabled": true
}
}
}
Default public routes include: /, /health, /login, /signup, /auth/*, /public, /favicon.ico.
Rule 5: Insecure Random (insecure-random)
Severity: HIGH (security contexts) / MEDIUM (general)
Detects Math.random() usage in security-critical contexts where crypto.randomBytes() or crypto.randomUUID() should be used instead. Uses two-tier severity based on context analysis.
Example — Vulnerable:
// ❌ Math.random() for session token
const sessionToken = Math.random().toString(36).substring(2);
Example — Safe:
// ✅ Cryptographically secure random
import { randomUUID, randomBytes } from 'crypto';
const sessionToken = randomUUID();
Why LLMs fail: LLMs default to Math.random() because it appears in the vast majority of JavaScript tutorials. Training data rarely distinguishes between casual randomness (shuffling arrays) and security-critical randomness (generating tokens).
Rule 6: Missing Input Validation (missing-validation)
Severity: MEDIUM
Detects route handlers that use req.body, req.params, or req.query without input validation. Recognizes validation libraries like Zod, Joi, Yup, express-validator, class-validator, and 10+ others.
Example — Vulnerable:
// ❌ Direct use of req.body without validation
app.post("/users", (req, res) => {
const { name, email } = req.body;
createUser(name, email);
});
Example — Safe:
// ✅ Validate with Zod
import { z } from 'zod';
const UserSchema = z.object({ name: z.string(), email: z.string().email() });
app.post("/users", (req, res) => {
const data = UserSchema.parse(req.body);
createUser(data.name, data.email);
});
Why LLMs fail: LLMs generate route handlers that directly destructure request properties because training data is dominated by examples that skip validation for brevity.
Rule 7: Unsafe Deserialization (unsafe-deserialization)
Severity: MEDIUM
Detects JSON.parse(), eval(), and new Function() usage on user-controlled input without validation. Uses taint analysis to track data from req.body, req.params, and req.query into deserialization calls.
Example — Vulnerable:
// ❌ Parsing user input without validation
app.post("/api/data", (req, res) => {
const config = JSON.parse(req.body.payload);
applyConfig(config);
});
Example — Safe:
// ✅ Validate with schema before parsing
import { z } from 'zod';
const ConfigSchema = z.object({ theme: z.string(), lang: z.string() });
app.post("/api/data", (req, res) => {
const config = ConfigSchema.parse(JSON.parse(req.body.payload));
applyConfig(config);
});
Why LLMs fail: LLMs use JSON.parse() directly on user input because training examples rarely demonstrate schema validation before deserialization. They optimize for functionality over defensive coding.
Rule 8: CORS Misconfiguration (cors-misconfiguration)
Severity: MEDIUM
Detects overly permissive CORS configurations including wildcard origins (*), reflecting req.headers.origin without validation, and missing credentials restrictions.
Example — Vulnerable:
// ❌ Wildcard CORS allows any origin
app.use(cors({ origin: '*', credentials: true }));
Example — Safe:
// ✅ Whitelist specific origins
app.use(cors({
origin: ['https://myapp.com', 'https://admin.myapp.com'],
credentials: true
}));
Why LLMs fail: LLMs default to origin: '*' because it's the quickest way to resolve CORS errors during development. Training data is filled with Stack Overflow answers recommending wildcard origins as a "fix."
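When the allow-list varies by environment, the cors package also accepts a function for origin; the callback signature below follows the standard cors API, and the domains are placeholders:

```javascript
// Allow-list check usable as a cors `origin` callback.
const ALLOWED = new Set(['https://myapp.com', 'https://admin.myapp.com']);

function originCheck(origin, callback) {
  // Requests without an Origin header (curl, server-to-server) pass through here;
  // tighten this if credentials must never flow to unknown callers.
  if (!origin || ALLOWED.has(origin)) return callback(null, true);
  callback(new Error('Origin not allowed by CORS'));
}

// app.use(cors({ origin: originCheck, credentials: true }));
```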
Rule 9: Error Leakage (error-leakage)
Severity: LOW
Detects error handlers that expose internal details (stack traces, database errors, file paths) to clients. Checks for err.stack, err.message, and raw error forwarding in Express error middleware.
Example — Vulnerable:
// ❌ Stack trace exposed to client
app.use((err, req, res, next) => {
res.status(500).json({ error: err.stack });
});
Example — Safe:
// ✅ Generic message, log internally
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).json({ error: 'Internal server error' });
});
Why LLMs fail: LLMs generate error handlers that pass the full error object to the response because training data shows debugging-oriented examples. They don't distinguish between development and production error handling.
Rule 10: Missing Rate Limiting (missing-rate-limit)
Severity: MEDIUM
Detects Express/Fastify route definitions for sensitive endpoints (login, register, API routes) without rate limiting middleware. Recognizes express-rate-limit, rate-limiter-flexible, and similar libraries.
Example — Vulnerable:
// ❌ No rate limiting on login endpoint
app.post("/auth/login", (req, res) => {
authenticate(req.body.email, req.body.password);
});
Example — Safe:
// ✅ Rate limiter middleware applied
import rateLimit from 'express-rate-limit';
const limiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 5 });
app.post("/auth/login", limiter, (req, res) => {
authenticate(req.body.email, req.body.password);
});
Why LLMs fail: LLMs focus on core business logic when generating API handlers and omit rate limiting because training data overwhelmingly shows functional examples without infrastructure concerns.
Configuration
VibeSafe looks for a .vibesaferc.json configuration file in your project root (auto-discovered via cosmiconfig).
Example Configuration
{
"rules": {
"hardcoded-secrets": { "enabled": true, "severity": "CRITICAL" },
"sql-injection": { "enabled": true },
"command-injection": { "enabled": true },
"missing-auth": { "enabled": true },
"insecure-random": { "enabled": false },
"missing-validation": { "enabled": true, "severity": "HIGH" }
},
"ignore": ["**/generated/**", "**/vendor/**"],
"output": "terminal",
"severity": "MEDIUM",
"thresholds": {
"critical": 0,
"high": 5
},
"excludeTestFiles": true
}
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| rules | object | All enabled | Per-rule overrides keyed by rule ID |
| rules.<id>.enabled | boolean | true | Enable or disable a specific rule |
| rules.<id>.severity | string | Rule default | Override severity: LOW, MEDIUM, HIGH, CRITICAL |
| ignore | string[] | [] | Additional glob patterns to ignore |
| output | string | "terminal" | Default output format: terminal, markdown, html, json |
| severity | string | "LOW" | Minimum severity to report |
| thresholds | object | {} | Max allowed findings per severity (scan fails if exceeded) |
| thresholds.critical | number | ∞ | Max CRITICAL findings before scan fails |
| thresholds.high | number | ∞ | Max HIGH findings before scan fails |
| thresholds.medium | number | ∞ | Max MEDIUM findings before scan fails |
| thresholds.low | number | ∞ | Max LOW findings before scan fails |
| excludeTestFiles | boolean | true | Skip test files (*.test.ts, *.spec.js, __tests__/) |
CLI Options Override Config
Command-line options take precedence over config file settings:
# Config says severity: LOW, but CLI overrides to HIGH
vibesafe scan ./src --severity HIGH
# Config says output: terminal, but CLI overrides to markdown
vibesafe scan ./src --format markdown
# Add ignore patterns on top of config
vibesafe scan ./src --ignore "**/*.generated.ts" "scripts/**"
CI/CD Integration
GitHub Action
The VibeSafe GitHub Action posts inline review comments on vulnerable lines and a summary comment with severity breakdown — directly on your pull requests.
name: Security Scan
on:
  pull_request:
    branches: [main]
permissions:
  contents: read
  pull-requests: write
jobs:
  vibesafe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aviferdman/vibesafe-action@v1
        with:
          scan-path: './src'
          fail-on-severity: 'HIGH'
Features:
- Inline review comments on affected lines in the PR diff
- Summary comment with severity breakdown table
- Configurable merge blocking (CRITICAL, HIGH, MEDIUM, LOW, or none)
- Educational feedback explaining why LLMs introduce each vulnerability
GitHub Actions (Standalone)
name: Security Scan
on: [push, pull_request]
jobs:
  vibesafe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install VibeSafe
        run: npm install -g vibesafe-cli
      - name: Run security scan
        run: vibesafe scan ./src --severity HIGH
      - name: Generate report
        if: always()
        run: vibesafe scan ./src --format markdown -o vibesafe-report.md
      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: vibesafe-report.md
GitLab CI
vibesafe:
  stage: test
  image: node:18
  script:
    - npm install -g vibesafe-cli
    - vibesafe scan ./src --format markdown -o vibesafe-report.md || true
    - vibesafe scan ./src --severity HIGH
  artifacts:
    when: always
    paths:
      - vibesafe-report.md
Using Thresholds for Quality Gates
Create a .vibesaferc.json with strict thresholds for CI:
{
"severity": "MEDIUM",
"thresholds": {
"critical": 0,
"high": 0
}
}
The scan will exit with code 1 if any CRITICAL or HIGH findings are detected, failing the CI pipeline.
Plugin System
VibeSafe supports custom detection rules via the plugin API.
Creating a Custom Rule
import type { SecurityRule, Finding, RulePlugin } from 'vibesafe-cli';
import { Severity } from 'vibesafe-cli';
class MyCustomRule implements SecurityRule {
readonly id = 'my-custom-rule';
readonly name = 'My Custom Rule';
readonly description = 'Detects a custom vulnerability pattern';
readonly severity = Severity.MEDIUM;
check(filePath: string, ast: unknown): Finding[] {
// Your detection logic here
return [];
}
}
// Export as a plugin (preferred convention)
export const plugin: RulePlugin = {
id: 'my-custom-rule',
name: 'My Custom Rule',
version: '1.0.0',
create: () => new MyCustomRule(),
};
Plugin Export Conventions
VibeSafe supports three export conventions:
1. Named plugin export (preferred): export const plugin: RulePlugin = { id, name, version, create };
2. Named createRule factory: export function createRule(): SecurityRule { return new MyRule(); }
3. Default export: export default { id, name, version, create };
Error Handling
VibeSafe provides user-friendly error messages by default. Use --verbose for detailed diagnostics:
# Normal mode: clean error messages
vibesafe scan ./nonexistent
# Error: ./nonexistent: Path not found
# Verbose mode: includes stack traces and timing
vibesafe scan ./my-project --verbose
# Files discovered: 42
# Files parsed successfully: 41
# Files skipped (parse errors): 1
# Parse error: broken.js — Unexpected token
# Scan completed in 1234ms
Common Error Scenarios
| Scenario | Message | Exit Code |
|---|---|---|
| Path not found | Error: ./path: Path not found | 2 |
| Permission denied | Error: ./path: Permission denied | 2 |
| Invalid config | Config error: <details> | 2 |
| Parse errors | ⚠ N file(s) could not be parsed | 0 (continues) |
| Findings detected | Summary with findings | 1 |
| Threshold exceeded | Threshold violations: ... | 1 |
FAQ
What languages does VibeSafe support?
Currently JavaScript and TypeScript (including JSX/TSX), Python, and Java. Go rules are also included, but Go file scanning is being finalized. Multi-language support continues to expand.
Does VibeSafe require an API key or internet connection?
No. VibeSafe runs all analysis locally on your machine. Anonymous telemetry is collected by default to help improve the tool, but it contains no PII (no file paths, code, or user identifiers). You can opt out at any time with --no-telemetry or by setting "telemetryEnabled": false in ~/.vibesafe/config.json. See Telemetry & Privacy for details.
How is VibeSafe different from ESLint security plugins?
VibeSafe uses AST-based taint analysis to track data flow from user input to dangerous sinks (SQL queries, shell commands). ESLint security plugins rely on pattern matching, which produces more false positives and misses indirect data flows.
How is VibeSafe different from traditional static analysis tools?
VibeSafe is specifically designed for patterns that AI code generators produce. It includes educational feedback explaining why LLMs make each mistake, and its detection rules are tuned for the specific anti-patterns that AI assistants frequently generate. Traditional SAST tools focus on general code quality and miss AI-specific vulnerability patterns.
Will VibeSafe slow down my CI pipeline?
No. VibeSafe scans 1000+ files in under 5 seconds. AST parsing takes <100ms per file, and rule execution takes <100ms per file per rule.
Can I disable specific rules?
Yes. Use .vibesaferc.json to disable rules:
{
"rules": {
"insecure-random": { "enabled": false }
}
}
How do I reduce false positives?
- Use excludeTestFiles: true (default) to skip test files
- Configure ignore patterns for generated code
- Set severity to HIGH to only see high-confidence findings
- Disable rules that don't apply to your project
Can I add custom rules?
Yes. See the Plugin System section. Create a module that exports a RulePlugin object with your detection logic.
Telemetry & Privacy
VibeSafe collects anonymous usage data to help improve the tool. No personally identifiable information (PII) is ever collected.
What We Collect
| Data | Example | Purpose |
|---|---|---|
| Files scanned count | 42 | Understand typical project sizes |
| Findings count by severity | { critical: 1, high: 2 } | Track detection effectiveness |
| Rules triggered | ["hardcoded-secrets"] | Prioritize rule development |
| Output format | "terminal" | Improve report formats |
| Scan duration | 1250ms | Monitor performance |
| OS platform & arch | "win32", "x64" | Platform compatibility |
| Node.js version | "v18.17.0" | Runtime support planning |
| VibeSafe version | "1.0.0" | Track adoption |
What We NEVER Collect
- ✗ File paths, file names, or code snippets
- ✗ User names, emails, or IP addresses
- ✗ Repository names or project details
- ✗ Git history or commit information
- ✗ Environment variables or secrets
Opting Out
You can disable telemetry at any time using any of these methods:
Method 1: CLI flag (per-scan)
vibesafe scan ./my-project --no-telemetry
Method 2: Persistent opt-out
Using --no-telemetry automatically persists your preference. You can also manually edit ~/.vibesafe/config.json:
{
"telemetryEnabled": false,
"privacyNoticeSeen": true
}
First-Run Privacy Notice
On the first scan, VibeSafe displays a privacy notice explaining what data is collected and how to opt out. This notice is shown once and the preference is recorded in ~/.vibesafe/config.json.
GDPR Compliance
- Lawful basis: Legitimate interest (anonymous product analytics)
- No PII: No personal data is collected or processed
- Opt-out: Available at any time via CLI flag or config file
- Data minimization: Only anonymous aggregate metrics are collected
- Transparency: Privacy notice displayed on first run
License
MIT