VibeSafe — AI Code Security Auditor
Catches the vulnerabilities that LLMs introduce and SonarQube misses.
VibeSafe is a CLI security scanner purpose-built for AI-generated code. It detects common vulnerability patterns that large language models frequently produce — hardcoded secrets, SQL injection, command injection, missing authentication, and more — with educational feedback that teaches developers why LLMs make these mistakes and how to fix them.
Why VibeSafe?
AI code generators (Cursor, Copilot, Claude) produce functional code fast — but they also produce predictable security mistakes:
- 🔑 Hardcoded API keys and secrets (LLMs hallucinate realistic-looking credentials)
- 💉 SQL injection via string concatenation (LLMs prefer simple patterns from tutorials)
- 🐚 Command injection through exec() with user input
- 🔓 API routes without authentication middleware
- 🎲 Math.random() for tokens and session IDs
- ✅ Missing input validation on request data
Traditional scanners like SonarQube miss these patterns because they weren't designed for AI-generated code. VibeSafe catches them with AST-based analysis and taint tracking, then teaches you why the LLM made the mistake and how to fix it.
Features
- 🔍 10 AI-Specific Detection Rules — High-precision rules targeting vulnerabilities LLMs commonly introduce
- 📚 Educational Feedback — "Why LLMs fail" explanations with every finding
- 🌳 AST-Based Analysis — Deep code structure analysis using tree-sitter, not just regex
- 🌍 Multi-Language — JavaScript, TypeScript, Python, Java (with Go in progress)
- 🔬 Taint Tracking — Follows user input from req.body to SQL queries and shell commands
- ⚡ Fast — Scans 1000+ file repos in under 5 seconds
- 🔌 Plugin Architecture — Add custom detection rules via the plugin API
- 📝 Multiple Output Formats — Terminal (colored), Markdown, HTML, JSON reports
- ⚙️ Configurable — .vibesaferc.json for per-project settings
- 🚦 CI/CD Ready — Exit codes and severity thresholds for pipeline integration
Requirements
- Node.js 18 or higher
Installation
From Source (Recommended)
# Clone the repository
git clone https://github.com/aviferdman/ProjectX-Product.git
cd ProjectX-Product/product
# Install dependencies
npm install
# Build the project
npm run build
# Link for global CLI usage
npm link
Verify Installation
vibesafe --version
# 1.0.0
vibesafe --help
Quick Start
# Scan your project
vibesafe scan ./my-project
# Scan a single file
vibesafe scan ./src/auth.ts
# Get a Markdown report
vibesafe scan ./my-project --format markdown -o report.md
# Only show HIGH and CRITICAL findings
vibesafe scan ./my-project --severity HIGH
# Enable educational explanations
vibesafe scan ./my-project --education
# Debug mode with detailed output
vibesafe scan ./my-project --verbose
CLI Reference
vibesafe [options] [command]
Global Options:
-v, --version Display the current version
-h, --help Display help
Commands:
scan <path> [options] Scan a directory or file for security vulnerabilities
Scan Command
vibesafe scan <path> [options]
Arguments:
path Path to the file or directory to scan
Options:
-f, --format <format> Output format: terminal (default), markdown, html, json
--severity <level> Minimum severity: LOW, MEDIUM, HIGH, CRITICAL
--ignore <patterns...> Additional glob patterns to ignore
-c, --config <path> Path to config file (default: auto-discover)
-o, --output <file> Write report to file (for markdown, html, json format)
--education Show educational content in output
--explain Show full educational explanations (implies --education)
--no-telemetry Disable anonymous telemetry (persists preference)
--no-update-check Disable update notifications
--api-key <key> VibeSafe Pro API key for unlimited scans
--verbose Show detailed diagnostic output for debugging
-h, --help Display help
Exit Codes
| Code | Meaning |
|---|---|
| 0 | No issues found (or only below minimum severity) |
| 1 | CRITICAL or HIGH severity findings detected, or threshold violations |
| 2 | Error (invalid path, config error, scan failure) |
Detection Rules
VibeSafe ships with 10 security detection rules, each targeting patterns that LLMs frequently produce.
Rule 1: Hardcoded Secrets (hardcoded-secrets)
Severity: HIGH–CRITICAL (named patterns like AWS keys, private keys, and database credentials are CRITICAL; generic entropy-based detections are HIGH)
Detects hardcoded API keys, passwords, and tokens using Shannon entropy analysis and pattern matching. Identifies 24 secret formats including AWS keys, Stripe tokens, GitHub PATs, JWTs, private keys, and database connection strings.
Example — Vulnerable:
// ❌ LLM-generated code with hardcoded secret
const apiKey = "sk-1234567890abcdef1234567890abcdef";
const dbUrl = "postgresql://admin:password123@localhost:5432/mydb";
Example — Safe:
// ✅ Use environment variables
const apiKey = process.env.API_KEY;
const dbUrl = process.env.DATABASE_URL;
Why LLMs fail: LLMs are trained on millions of public repositories containing accidentally committed secrets. They produce realistic-looking credentials in example code — sometimes hallucinated, sometimes memorized from training data.
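The entropy half of this rule can be sketched in a few lines of JavaScript. This is an illustrative approximation, not VibeSafe's actual implementation; the length and entropy thresholds used here are assumptions for demonstration:

```javascript
// Illustrative sketch of entropy-based secret detection.
// NOT VibeSafe's actual implementation; thresholds are assumptions.
function shannonEntropy(s) {
  const counts = new Map();
  for (const ch of s) counts.set(ch, (counts.get(ch) || 0) + 1);
  let entropy = 0;
  for (const count of counts.values()) {
    const p = count / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy; // bits per character
}

// High-entropy string literals of sufficient length are secret candidates;
// ordinary identifiers and English phrases score much lower.
function looksLikeSecret(literal, minLength = 20, threshold = 4.0) {
  return literal.length >= minLength && shannonEntropy(literal) >= threshold;
}

console.log(looksLikeSecret("sk-1234567890abcdef1234567890abcdef")); // true
console.log(looksLikeSecret("configuration value"));                 // false
```

Named patterns (AWS key prefixes, PEM headers, connection-string shapes) complement the entropy check, which is why the rule reports two severity tiers.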
Rule 2: SQL Injection (sql-injection)
Severity: CRITICAL
Detects SQL query construction using string concatenation or template literal interpolation with user-controlled input. Uses taint analysis to track data from req.body, req.params, and req.query into SQL calls.
Example — Vulnerable:
// ❌ String concatenation in SQL query
app.get("/users", (req, res) => {
  const query = "SELECT * FROM users WHERE id = " + req.params.id;
  db.query(query);
});
Example — Safe:
// ✅ Parameterized query
app.get("/users", (req, res) => {
  db.query("SELECT * FROM users WHERE id = ?", [req.params.id]);
});
Why LLMs fail: LLMs generate SQL queries using string concatenation because these are the most common patterns in tutorial code and Stack Overflow answers. AI models don't reason about threat models — they optimize for syntactic correctness.
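The taint analysis behind this rule can be illustrated with a toy model (a sketch of the general technique, not VibeSafe's engine): request-derived values are marked tainted, taint propagates through assignments and concatenation, and a finding is reported when a tainted value reaches a sink such as db.query:

```javascript
// Toy taint-tracking model, for illustration only (not VibeSafe's engine).
const tainted = new Set();

// e.g. const id = req.params.id → "id" becomes tainted
function markSource(name) {
  tainted.add(name);
}

// e.g. const query = "SELECT ..." + id → "query" inherits taint from "id"
function propagate(target, ...operands) {
  if (operands.some((op) => tainted.has(op))) tainted.add(target);
}

// e.g. db.query(query) → finding if "query" carries taint
function checkSink(name) {
  return tainted.has(name)
    ? `FINDING: tainted value "${name}" reaches a SQL sink`
    : "ok";
}

markSource("id");                                             // req.params.id
propagate("query", "'SELECT * FROM users WHERE id='", "id");  // concatenation
console.log(checkSink("query"));        // finding reported
console.log(checkSink("staticQuery"));  // "ok" — never touched user input
```

A real implementation walks the AST rather than tracking names in a set, but the source → propagation → sink structure is the same.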
Rule 3: Command Injection (command-injection)
Severity: CRITICAL
Detects unsafe child_process usage (exec, spawn, execSync) with user input, dangerous eval() calls, and new Function() constructors with tainted data.
Example — Vulnerable:
// ❌ User input in shell command
app.post("/convert", (req, res) => {
  exec("convert " + req.body.filename + " output.png");
});
Example — Safe:
// ✅ Use execFile with argument array
app.post("/convert", (req, res) => {
  execFile("convert", [req.body.filename, "output.png"]);
});
Why LLMs fail: LLMs default to exec() with string concatenation because it's the simplest pattern for running shell commands. Training data is dominated by quick-start tutorials that don't consider untrusted input flowing into shell interpreters.
Rule 4: Missing Authentication (missing-auth)
Severity: HIGH
Detects Express/Fastify route definitions without authentication middleware. Checks for auth middleware in the route handler chain, app-level middleware, and configurable public route whitelists.
Example — Vulnerable:
// ❌ No auth middleware on sensitive route
app.get("/api/users", (req, res) => {
  res.json(users);
});
Example — Safe:
// ✅ Auth middleware before handler
app.get("/api/users", isAuthenticated, (req, res) => {
  res.json(users);
});
Why LLMs fail: LLMs generate route handlers focused on business logic and typically omit authentication middleware because training data is dominated by tutorials that skip security concerns for brevity.
Configuration: You can specify additional public routes and auth middleware names:
{
  "rules": {
    "missing-auth": {
      "enabled": true
    }
  }
}
Default public routes include: /, /health, /login, /signup, /auth/*, /public, /favicon.ico.
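If your VibeSafe version exposes per-rule options for the public routes and middleware names mentioned above, the configuration might look like the following sketch. The publicRoutes and authMiddleware keys are illustrative assumptions; confirm the exact option names in the rule documentation:

```
{
  "rules": {
    "missing-auth": {
      "enabled": true,
      "publicRoutes": ["/status", "/docs/*"],
      "authMiddleware": ["requireUser", "checkJwt"]
    }
  }
}
```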
Rule 5: Insecure Random (insecure-random)
Severity: HIGH (security contexts) / MEDIUM (general)
Detects Math.random() usage in security-critical contexts where crypto.randomBytes() or crypto.randomUUID() should be used instead. Uses two-tier severity based on context analysis.
Example — Vulnerable:
// ❌ Math.random() for session token
const sessionToken = Math.random().toString(36).substring(2);
Example — Safe:
// ✅ Cryptographically secure random
import { randomUUID, randomBytes } from 'crypto';
const sessionToken = randomUUID();
Why LLMs fail: LLMs default to Math.random() because it appears in the vast majority of JavaScript tutorials. Training data rarely distinguishes between casual randomness (shuffling arrays) and security-critical randomness (generating tokens).
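The two-tier severity decision can be illustrated with a simple heuristic (a sketch, not VibeSafe's actual context analysis): inspect the identifier that receives Math.random() and escalate when it matches security-sensitive keywords:

```javascript
// Sketch of context-based severity for Math.random() findings.
// Illustrative only; the keyword list is an assumption, not VibeSafe's.
const SECURITY_KEYWORDS = /token|secret|session|password|nonce|auth|key|otp/i;

function classifySeverity(identifierName) {
  // Security-sensitive names (sessionToken, apiKey, …) escalate to HIGH;
  // casual uses (shuffleSeed, randomColor, …) stay MEDIUM.
  return SECURITY_KEYWORDS.test(identifierName) ? "HIGH" : "MEDIUM";
}

console.log(classifySeverity("sessionToken")); // HIGH
console.log(classifySeverity("shuffleSeed"));  // MEDIUM
```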
Rule 6: Missing Input Validation (missing-validation)
Severity: MEDIUM
Detects route handlers that use req.body, req.params, or req.query without input validation. Recognizes validation libraries like Zod, Joi, Yup, express-validator, class-validator, and 10+ others.
Example — Vulnerable:
// ❌ Direct use of req.body without validation
app.post("/users", (req, res) => {
  const { name, email } = req.body;
  createUser(name, email);
});
Example — Safe:
// ✅ Validate with Zod
import { z } from 'zod';
const UserSchema = z.object({ name: z.string(), email: z.string().email() });
app.post("/users", (req, res) => {
  const data = UserSchema.parse(req.body);
  createUser(data.name, data.email);
});
Why LLMs fail: LLMs generate route handlers that directly destructure request properties because training data is dominated by examples that skip validation for brevity.
Rule 7: Unsafe Deserialization (unsafe-deserialization)
Severity: MEDIUM
Detects JSON.parse(), eval(), and new Function() usage on user-controlled input without validation. Uses taint analysis to track data from req.body, req.params, and req.query into deserialization calls.
Example — Vulnerable:
// ❌ Parsing user input without validation
app.post("/api/data", (req, res) => {
  const config = JSON.parse(req.body.payload);
  applyConfig(config);
});
Example — Safe:
// ✅ Validate with schema before parsing
import { z } from 'zod';
const ConfigSchema = z.object({ theme: z.string(), lang: z.string() });
app.post("/api/data", (req, res) => {
  const config = ConfigSchema.parse(JSON.parse(req.body.payload));
  applyConfig(config);
});
Why LLMs fail: LLMs use JSON.parse() directly on user input because training examples rarely demonstrate schema validation before deserialization. They optimize for functionality over defensive coding.
Rule 8: CORS Misconfiguration (cors-misconfiguration)
Severity: MEDIUM
Detects overly permissive CORS configurations including wildcard origins (*), reflecting req.headers.origin without validation, and missing credentials restrictions.
Example — Vulnerable:
// ❌ Wildcard CORS allows any origin
app.use(cors({ origin: '*', credentials: true }));
Example — Safe:
// ✅ Whitelist specific origins
app.use(cors({
  origin: ['https://myapp.com', 'https://admin.myapp.com'],
  credentials: true
}));
Why LLMs fail: LLMs default to origin: '*' because it's the quickest way to resolve CORS errors during development. Training data is filled with Stack Overflow answers recommending wildcard origins as a "fix."
Rule 9: Error Leakage (error-leakage)
Severity: LOW
Detects error handlers that expose internal details (stack traces, database errors, file paths) to clients. Checks for err.stack, err.message, and raw error forwarding in Express error middleware.
Example — Vulnerable:
// ❌ Stack trace exposed to client
app.use((err, req, res, next) => {
  res.status(500).json({ error: err.stack });
});
Example — Safe:
// ✅ Generic message, log internally
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).json({ error: 'Internal server error' });
});
Why LLMs fail: LLMs generate error handlers that pass the full error object to the response because training data shows debugging-oriented examples. They don't distinguish between development and production error handling.
Rule 10: Missing Rate Limiting (missing-rate-limit)
Severity: MEDIUM
Detects Express/Fastify route definitions for sensitive endpoints (login, register, API routes) without rate limiting middleware. Recognizes express-rate-limit, rate-limiter-flexible, and similar libraries.
Example — Vulnerable:
// ❌ No rate limiting on login endpoint
app.post("/auth/login", (req, res) => {
  authenticate(req.body.email, req.body.password);
});
Example — Safe:
// ✅ Rate limiter middleware applied
import rateLimit from 'express-rate-limit';
const limiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 5 });
app.post("/auth/login", limiter, (req, res) => {
  authenticate(req.body.email, req.body.password);
});
Why LLMs fail: LLMs focus on core business logic when generating API handlers and omit rate limiting because training data overwhelmingly shows functional examples without infrastructure concerns.
Configuration
VibeSafe looks for a .vibesaferc.json configuration file in your project root (auto-discovered via cosmiconfig).
Example Configuration
{
  "rules": {
    "hardcoded-secrets": { "enabled": true, "severity": "CRITICAL" },
    "sql-injection": { "enabled": true },
    "command-injection": { "enabled": true },
    "missing-auth": { "enabled": true },
    "insecure-random": { "enabled": false },
    "missing-validation": { "enabled": true, "severity": "HIGH" }
  },
  "ignore": ["**/generated/**", "**/vendor/**"],
  "output": "terminal",
  "severity": "MEDIUM",
  "thresholds": {
    "critical": 0,
    "high": 5
  },
  "excludeTestFiles": true
}
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| rules | object | All enabled | Per-rule overrides keyed by rule ID |
| rules.<id>.enabled | boolean | true | Enable or disable a specific rule |
| rules.<id>.severity | string | Rule default | Override severity: LOW, MEDIUM, HIGH, CRITICAL |
| ignore | string[] | [] | Additional glob patterns to ignore |
| output | string | "terminal" | Default output format: terminal, markdown, html, json |
| severity | string | "LOW" | Minimum severity to report |
| thresholds | object | {} | Max allowed findings per severity (scan fails if exceeded) |
| thresholds.critical | number | ∞ | Max CRITICAL findings before scan fails |
| thresholds.high | number | ∞ | Max HIGH findings before scan fails |
| thresholds.medium | number | ∞ | Max MEDIUM findings before scan fails |
| thresholds.low | number | ∞ | Max LOW findings before scan fails |
| excludeTestFiles | boolean | true | Skip test files (*.test.ts, *.spec.js, __tests__/) |
CLI Options Override Config
Command-line options take precedence over config file settings:
# Config says severity: LOW, but CLI overrides to HIGH
vibesafe scan ./src --severity HIGH
# Config says output: terminal, but CLI overrides to markdown
vibesafe scan ./src --format markdown
# Add ignore patterns on top of config
vibesafe scan ./src --ignore "**/*.generated.ts" "scripts/**"
CI/CD Integration
GitHub Action (Recommended)
The VibeSafe GitHub Action posts inline review comments on vulnerable lines and a summary comment with severity breakdown — directly on your pull requests.
name: Security Scan
on:
  pull_request:
    branches: [main]
permissions:
  contents: read
  pull-requests: write
jobs:
  vibesafe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aviferdman/ProjectX-Product/action@main
        with:
          scan-path: './src'
          fail-on-severity: 'HIGH'
Features:
- 🔍 Inline review comments on affected lines in the PR diff
- 📊 Summary comment with severity breakdown table
- 🚫 Configurable merge blocking (CRITICAL, HIGH, MEDIUM, LOW, or none)
- 📚 Educational feedback explaining why LLMs introduce each vulnerability
See action/README.md for full documentation, inputs, outputs, and advanced examples.
GitHub Actions (Standalone)
name: Security Scan
on: [push, pull_request]
jobs:
  vibesafe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install VibeSafe
        run: |
          git clone https://github.com/aviferdman/ProjectX-Product.git /tmp/vibesafe
          cd /tmp/vibesafe/product
          npm ci && npm run build && npm link
      - name: Run security scan
        run: vibesafe scan ./src --severity HIGH
      - name: Generate report
        if: always()
        run: vibesafe scan ./src --format markdown -o vibesafe-report.md
      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: vibesafe-report.md
GitLab CI
vibesafe:
  stage: test
  image: node:18
  script:
    - git clone https://github.com/aviferdman/ProjectX-Product.git /tmp/vibesafe
    # Build and link in a subshell so later commands run from the project root
    - (cd /tmp/vibesafe/product && npm ci && npm run build && npm link)
    # Generate the report artifact first (|| true so findings don't skip it)
    - vibesafe scan ./src --format markdown -o vibesafe-report.md || true
    - vibesafe scan ./src --severity HIGH
  artifacts:
    when: always
    paths:
      - vibesafe-report.md
Using Thresholds for Quality Gates
Create a .vibesaferc.json with strict thresholds for CI:
{
  "severity": "MEDIUM",
  "thresholds": {
    "critical": 0,
    "high": 0
  }
}
The scan will exit with code 1 if any CRITICAL or HIGH findings are detected, failing the CI pipeline.
Plugin System
VibeSafe supports custom detection rules via the plugin API.
Creating a Custom Rule
import type { SecurityRule, Finding, RulePlugin } from 'vibesafe';
import { Severity } from 'vibesafe';

class MyCustomRule implements SecurityRule {
  readonly id = 'my-custom-rule';
  readonly name = 'My Custom Rule';
  readonly description = 'Detects a custom vulnerability pattern';
  readonly severity = Severity.MEDIUM;

  check(filePath: string, ast: unknown): Finding[] {
    // Your detection logic here
    return [];
  }
}

// Export as a plugin (preferred convention)
export const plugin: RulePlugin = {
  id: 'my-custom-rule',
  name: 'My Custom Rule',
  version: '1.0.0',
  create: () => new MyCustomRule(),
};
Plugin Export Conventions
VibeSafe supports three export conventions:
1. Named plugin export (preferred): export const plugin: RulePlugin = { id, name, version, create };
2. Named createRule factory: export function createRule(): SecurityRule { return new MyRule(); }
3. Default export: export default { id, name, version, create };
Error Handling
VibeSafe provides user-friendly error messages by default. Use --verbose for detailed diagnostics:
# Normal mode: clean error messages
vibesafe scan ./nonexistent
# Error: ./nonexistent: Path not found
# Verbose mode: includes stack traces and timing
vibesafe scan ./my-project --verbose
# Files discovered: 42
# Files parsed successfully: 41
# Files skipped (parse errors): 1
# Parse error: broken.js — Unexpected token
# Scan completed in 1234ms
Common Error Scenarios
| Scenario | Message | Exit Code |
|---|---|---|
| Path not found | Error: ./path: Path not found | 2 |
| Permission denied | Error: ./path: Permission denied | 2 |
| Invalid config | Config error: <details> | 2 |
| Parse errors | ⚠ N file(s) could not be parsed | 0 (continues) |
| Findings detected | Summary with findings | 1 |
| Threshold exceeded | Threshold violations: ... | 1 |
Development
Setup
npm install # Install dependencies
npm run build # Compile TypeScript
npm link # Link CLI globally
Common Commands
npm run build # Compile TypeScript
npm run build:watch # Watch mode for development
npm test # Run all tests
npm run test:unit # Run unit tests only
npm run test:integration # Run integration tests only
npm run test:coverage # Run tests with coverage report
npm run lint # Check for linting errors
npm run lint:fix # Auto-fix linting errors
npm run format # Format code with Prettier
npm run typecheck # Type-check without emitting
Project Structure
product/
├── src/
│ ├── cli/ # CLI entry point and command handlers
│ ├── config/ # Configuration loading, schema, thresholds
│ ├── education/ # Educational content for rule findings
│ ├── engine/ # Rule engine, plugin loader, taint analysis
│ ├── output/ # Report formatters (terminal, Markdown, HTML, JSON)
│ ├── parser/ # AST parsing via tree-sitter (JS/TS/JSX/TSX)
│ ├── rules/ # Security detection rules (10 built-in, multi-language)
│ ├── scanLimit/ # Scan limit and usage tracking
│ ├── scanner/ # File system scanner with .gitignore support
│ ├── telemetry/ # Anonymous telemetry collection
│ ├── types/ # Shared TypeScript interfaces
│ ├── utils/ # Error classes, logger, test file detection
│ └── index.ts # Package entry point
├── tests/
│ ├── unit/ # Unit tests for all modules
│ ├── integration/ # End-to-end CLI tests
│ └── fixtures/ # Vulnerable and secure code samples
├── action/ # GitHub Action for PR integration
├── benchmarks/ # Detection benchmark suite
├── docs/ # Documentation and guides
├── CHANGELOG.md
├── LICENSE
├── package.json
├── tsconfig.json
├── eslint.config.mjs
├── vitest.config.ts
└── .prettierrc
Architecture
CLI Input → File Scanner → AST Parser → Rule Engine → Output Formatter
                ↓              ↓              ↓               ↓
           .gitignore     tree-sitter   Plugin System   Terminal/Markdown/
           fast-glob      JS/TS/JSX     Taint Analysis  HTML/JSON
- File Scanner — Recursively discovers code files, respects .gitignore and custom ignore patterns
- AST Parser — Parses JS/TS/JSX/TSX files into abstract syntax trees using tree-sitter
- Rule Engine — Executes registered rules against each parsed file, collects findings
- Plugin System — Loads built-in and custom rules via the RulePlugin interface
- Output Formatter — Formats findings for terminal display, Markdown, HTML, or JSON reports
Benchmark Results
VibeSafe is validated against a curated benchmark of 52 real-world vulnerable AI-generated code samples spanning 10 vulnerability categories. These samples represent common patterns produced by LLMs like ChatGPT, Copilot, and Claude.
Overall Detection Rate
| Metric | Value |
|---|---|
| Sample Detection Rate | 98.1% (51/52 samples) |
| Finding Detection Rate | 351.0% (537/153 annotated findings) |
| Total Rules Active | 10 |
Note: The finding rate exceeds 100% because VibeSafe's 10 cross-cutting rules detect additional vulnerabilities beyond the primary category annotated in each sample. For example, a SQL injection sample may also have missing authentication and missing input validation — VibeSafe catches all of them.
Detection by Category
| Category | Sample Detection | Finding Rate | Expected | Actual |
|---|---|---|---|---|
| ✅ Hardcoded Secrets | 6/6 (100%) | 100% | 24 | 24 |
| ✅ SQL Injection | 6/6 (100%) | 385% | 20 | 77 |
| ✅ Command Injection | 5/5 (100%) | 467% | 15 | 70 |
| ✅ CORS Misconfiguration | 5/5 (100%) | 354% | 13 | 46 |
| ✅ Error Leakage | 5/5 (100%) | 407% | 14 | 57 |
| ✅ Insecure Random | 5/5 (100%) | 686% | 14 | 96 |
| ✅ Missing Auth | 5/5 (100%) | 471% | 14 | 66 |
| ✅ Missing Rate Limit | 5/5 (100%) | 408% | 12 | 49 |
| ✅ Missing Validation | 5/5 (100%) | 214% | 14 | 30 |
| 🟡 Unsafe Deserialization | 4/5 (80%) | 169% | 13 | 22 |
False Negatives
1 sample was not detected (unsafe-deserialization/data-migration.js). This CLI-based data migration script uses JSON.parse() and yaml.load() on file-read data without going through web framework request objects. The current unsafe deserialization rule focuses on web-context taint patterns (e.g., req.body → JSON.parse()). A future enhancement will expand taint tracking to cover CLI/file-system input flows.
Running the Benchmark
npm run benchmark
Results are written to:
- benchmarks/BENCHMARK-RESULTS.md — Full Markdown report with per-sample details
- benchmarks/benchmark-results.json — Machine-readable JSON results
FAQ
What languages does VibeSafe support?
Currently JavaScript and TypeScript (including JSX/TSX), Python, and Java. Go rules are also included but Go file scanning is being finalized. Multi-language support continues to expand.
Does VibeSafe require an API key or internet connection?
No. VibeSafe runs all analysis locally on your machine. Anonymous telemetry is collected by default to help improve the tool, but it contains no PII (no file paths, code, or user identifiers). You can opt out at any time with --no-telemetry or by setting "telemetryEnabled": false in ~/.vibesafe/config.json. See Telemetry & Privacy for details.
How is VibeSafe different from ESLint security plugins?
VibeSafe uses AST-based taint analysis to track data flow from user input to dangerous sinks (SQL queries, shell commands). ESLint security plugins use pattern matching which produces more false positives and misses indirect data flows.
How is VibeSafe different from SonarQube?
VibeSafe is specifically designed for patterns that AI code generators produce. It includes educational feedback explaining why LLMs make each mistake, and its detection rules are tuned for the specific anti-patterns that ChatGPT, Copilot, and Claude frequently generate.
Will VibeSafe slow down my CI pipeline?
No. VibeSafe scans 1000+ files in under 5 seconds. AST parsing takes <100ms per file, and rule execution takes <100ms per file per rule.
Can I disable specific rules?
Yes. Use .vibesaferc.json to disable rules:
{
  "rules": {
    "insecure-random": { "enabled": false }
  }
}
How do I reduce false positives?
- Use excludeTestFiles: true (default) to skip test files
- Configure ignore patterns for generated code
- Set severity to HIGH to only see high-confidence findings
- Disable rules that don't apply to your project
Can I add custom rules?
Yes! See the Plugin System section. Create a module that exports a RulePlugin object with your detection logic.
Telemetry & Privacy
VibeSafe collects anonymous usage data to help improve the tool. No personally identifiable information (PII) is ever collected.
What We Collect
| Data | Example | Purpose |
|---|---|---|
| Files scanned count | 42 | Understand typical project sizes |
| Findings count by severity | { critical: 1, high: 2 } | Track detection effectiveness |
| Rules triggered | ["hardcoded-secrets"] | Prioritize rule development |
| Output format | "terminal" | Improve report formats |
| Scan duration | 1250ms | Monitor performance |
| OS platform & arch | "win32", "x64" | Platform compatibility |
| Node.js version | "v18.17.0" | Runtime support planning |
| VibeSafe version | "1.0.0" | Track adoption |
What We NEVER Collect
- ✗ File paths, file names, or code snippets
- ✗ User names, emails, or IP addresses
- ✗ Repository names or project details
- ✗ Git history or commit information
- ✗ Environment variables or secrets
Opting Out
You can disable telemetry at any time using any of these methods:
Method 1: CLI flag (per-scan)
vibesafe scan ./my-project --no-telemetry
Method 2: Persistent opt-out
Using --no-telemetry automatically persists your preference. You can also manually edit ~/.vibesafe/config.json:
{
  "telemetryEnabled": false,
  "privacyNoticeSeen": true
}
First-Run Privacy Notice
On the first scan, VibeSafe displays a privacy notice explaining what data is collected and how to opt out. This notice is shown once and the preference is recorded in ~/.vibesafe/config.json.
GDPR Compliance
- Lawful basis: Legitimate interest (anonymous product analytics)
- No PII: No personal data is collected or processed
- Opt-out: Available at any time via CLI flag or config file
- Data minimization: Only anonymous aggregate metrics are collected
- Transparency: Privacy notice displayed on first run
License
MIT — see LICENSE for details.
Contributing
Contributions are welcome! Please see the rule authoring guide for creating custom detection rules, and review the existing code for style guidelines and testing requirements.