Package Exports
- @szcn/sentinelreview
- @szcn/sentinelreview/out/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@szcn/sentinelreview) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
SentinelReview
AI-powered code review — security analysis, code quality, standards enforcement, and custom rules.
Reviews your code like a senior engineer: OWASP Top 10, clean code, error handling, naming, architecture.
Works with MCP (Cursor, Windsurf, VS Code, Claude Desktop), CLI, and Node.js API.
6 providers: Ollama (local/free) · Gemini · Groq · DeepSeek · OpenAI · Anthropic
npm install -g @szcn/sentinelreview

┌──────────────────────────────────────────────────────────────────────┐
│ │
│ ███████╗███████╗███╗ ██╗████████╗██╗███╗ ██╗███████╗██╗ │
│ ██╔════╝██╔════╝████╗ ██║╚══██╔══╝██║████╗ ██║██╔════╝██║ │
│ ███████╗█████╗ ██╔██╗ ██║ ██║ ██║██╔██╗ ██║█████╗ ██║ │
│ ╚════██║██╔══╝ ██║╚██╗██║ ██║ ██║██║╚██╗██║██╔══╝ ██║ │
│ ███████║███████╗██║ ╚████║ ██║ ██║██║ ╚████║███████╗███████╗ │
│ ╚══════╝╚══════╝╚═╝ ╚═══╝ ╚═╝ ╚═╝╚═╝ ╚═══╝╚══════╝╚══════╝ │
│ R E V I E W │
│ │
│ OWASP Top 10 · Code Quality · Standards · Custom Rules │
│ Cursor · Windsurf · VS Code · Claude Desktop · CLI · Node API │
│ Ollama · Gemini · Groq · DeepSeek · OpenAI · Anthropic │
│ │
└──────────────────────────────────────────────────────────────────────┘

Screenshots
Review Flow in Cursor
┌──────────────────────────────────────────────────────────────┐
│ Cursor AI Chat │
│ │
│ 🧑 You: │
│ review my staged code for security issues │
│ │
│ 🤖 AI: │
│ Running review_staged tool... │
│ [🔧 review_staged] ✓ │
│ │
│ ## 🔒 OWASP Security Analysis │
│ │
│ ### A01 – Broken Access Control │
│ - 🔴 CRITICAL | src/main.go:45 │
│ **Issue:** getUserHandler: no auth check, IDOR vuln │
│ **Fix:** │
│ ```go │
│ claims, err := validateJWT(r.Header.Get("Authorization")) │
│ if claims.Username != username && !claims.IsAdmin { │
│ http.Error(w, "forbidden", 403) │
│ } │
│ ``` │
│ │
│ ### A02 – Cryptographic Failures │
│ - 🔴 CRITICAL | src/main.go:10 │
│ **Issue:** DB password hardcoded │
│ │
│ ### A03 – Injection │
│ - 🔴 CRITICAL | src/main.go:45 │
│ **Issue:** SQL injection — string concatenation │
│ **Fix:** Use prepared statements │
│ ... │
│ │
│ ## 🧹 Code Quality │
│ - 🟡 MEDIUM | src/main.go:33 │
│ **Issue:** Function too long (85 lines), extract helpers │
│ - 🔵 LOW | src/main.go:12 │
│ **Issue:** Magic string "secret-key" — use named constant │
│ │
│ ## 📋 Custom Rule Compliance │
│ - 🟠 VIOLATED | src/main.go:55 │
│ **Rule:** "All endpoints must have rate limiting" │
│ deleteUserHandler has no rate limit middleware │
│ - ✅ PASS | "DB queries must use prepared statements" │
│ │
│ ## 💰 Token & Cost Summary │
│ | Metric | Value | │
│ |-----------------|--------------------------| │
│ | Provider | ollama | │
│ | Model | qwen2.5-coder:14b | │
│ | Total tokens | 4,713 | │
│ | Est. cost | $0.00 (local) | │
│ | Duration | 3.5 min | │
│ │
└──────────────────────────────────────────────────────────────┘

Provider Status (provider_info)
┌──────────────────────────────────────────────────────────────┐
│ 📊 SentinelReview Config │
│ │
│ Active Provider : ollama │
│ Active Model : qwen2.5-coder:14b │
│ Language : both │
│ Severity : strict │
│ Custom rules : 5 │
│ │
│ Provider Status: │
│ Ollama → qwen2.5-coder:14b | Local — no API key │
│ Gemini → gemini-2.0-flash | Key: ✅ set │
│ Groq → llama-3.3-70b | Key: ✅ set │
│ OpenAI → gpt-4o | Key: ❌ missing │
│ DeepSeek → deepseek-chat | Key: ❌ missing │
│ Anthropic → claude-opus-4-6 | Key: ✅ set │
│ │
└──────────────────────────────────────────────────────────────┘

Branch Comparison
┌──────────────────────────────────────────────────────────────┐
│ 🧑 You: │
│ review_branch base=main head=feature/auth │
│ │
│ 🤖 AI: │
│ Running review_branch tool... │
│ [🔧 review_branch] ✓ │
│ │
│ ## 🔒 OWASP Security Analysis │
│ ### A01 – Broken Access Control │
│ ✅ No issues found │
│ ### A02 – Cryptographic Failures │
│ - 🟡 MEDIUM | src/auth/jwt.go:12 │
│ **Issue:** JWT expiry set to 72h — too long │
│ **Fix:** Use 24h or shorter token lifetime │
│ ... │
└──────────────────────────────────────────────────────────────┘

Real Report Examples
Example 1 — Go project (Ollama qwen2.5-coder:14b, local, free)
## 🔒 OWASP Security Analysis
### A01 – Broken Access Control
- 🔴 CRITICAL | src/main.go:45
Issue: getUserHandler — no auth check, IDOR vulnerability
Fix: Add JWT validation and user-scoped access control
- 🔴 CRITICAL | src/main.go:86
Issue: deleteUserHandler — no admin check
Fix: Add isValidAdminToken() guard
### A02 – Cryptographic Failures
- 🔴 CRITICAL | src/main.go:10
Issue: DB password hardcoded
- 🔴 CRITICAL | src/main.go:15
Issue: JWT secret hardcoded
- 🔴 CRITICAL | src/main.go:106
Issue: Using HTTP instead of HTTPS
### A03 – Injection
- 🔴 CRITICAL | src/main.go:45, 57
Issue: SQL injection — string concatenation in query
Fix: Use prepared statements ($1, $2)
## 🧹 Code Quality
- 🟡 MEDIUM | src/main.go:33-85
Issue: getUserHandler is 52 lines long — extract DB query and
response formatting into separate functions
- 🟡 MEDIUM | src/main.go:92
Issue: Error silently ignored (err from db.Close())
Fix: Log or return the error
- 🔵 LOW | src/main.go:12
Issue: Magic string "super-secret-key" — use a named constant
- 🔵 LOW | src/main.go:1-5
Issue: No package-level documentation comment
## 📋 Custom Rule Compliance
- 🟠 VIOLATED | src/main.go:55
Rule: "All API endpoints must include rate limiting"
deleteUserHandler has no rate limit middleware
- 🟠 VIOLATED | src/main.go:33
Rule: "All API endpoints must include authentication"
getUserHandler is publicly accessible without any auth
- ✅ PASS | "Sensitive data must not be written to logs"
- ✅ PASS | "Database queries must use parameterized statements"
(Violation already reported under A03)
💰 Token & Cost
| Provider | ollama |
| Model | qwen2.5-coder:14b |
| Tokens | 4,713 |
| Est. cost | $0.00 (local) |
| Duration  | 3.5 min           |

Example 2 — Go project (Anthropic Claude Opus, cloud)
## 🔒 OWASP Security Analysis
### A01 – Broken Access Control
- 🔴 CRITICAL | main.go:33-52
Issue: getUserHandler — anonymous users can access all personal
data (password_hash, SSN). IDOR + data leak.
Fix: Validate JWT, restrict to own data, remove sensitive fields
```go
claims, err := validateJWT(r.Header.Get("Authorization"))
if claims.Username != username && !claims.IsAdmin {
http.Error(w, "forbidden", 403)
}
query := "SELECT id, username, email FROM users WHERE username = $1"
```

- 🔴 CRITICAL | main.go:55-66
  Issue: deleteUserHandler — admin endpoint accessible without auth
### A03 – Injection
- 🔴 CRITICAL | main.go:37
Issue: SQL injection — query built with fmt.Sprintf
// ❌ Dangerous
query := fmt.Sprintf("SELECT ... WHERE username = '%s'", username)
// ✅ Safe
query := "SELECT ... WHERE username = $1"
rows, err := db.Query(query, username)
### A10 – SSRF
- 🔴 CRITICAL | main.go:81-93
  Issue: proxyHandler — user URL fetched without validation
## 🧹 Code Quality
- 🟠 HIGH | main.go:33-85
  Issue: getUserHandler violates Single Responsibility — handles HTTP parsing, DB query, serialization, and error response in one function.
  Fix: Extract into handler → service → repository layers
- 🟡 MEDIUM | main.go:92
Issue: db.Close() error discarded — can mask resource leaks
if err := db.Close(); err != nil { log.Printf("db close error: %v", err) }
- 🟡 MEDIUM | main.go:55-66
  Issue: deleteUserHandler returns plain-text error messages — inconsistent with JSON responses elsewhere. Standardize the error format.
- 🔵 LOW | main.go:5
  Issue: Import "fmt" used only for Fprintf — prefer json.NewEncoder for structured responses
- 🔵 LOW | main.go:12-15
  Issue: Global variables (db, jwtSecret) — consider dependency injection or a config struct for testability
## 📋 Custom Rule Compliance
- 🟠 VIOLATED | main.go:33, 55, 68, 81
  Rule: "All endpoints must have rate limiting"
  None of the 4 handlers implement rate limiting
- 🟠 VIOLATED | main.go:33, 81
  Rule: "All endpoints must include authentication"
  getUserHandler and proxyHandler are public
- 🟠 VIOLATED | main.go:95-100
  Rule: "All errors must be logged with request context"
  Errors in searchHandler are returned to the client but not logged
- ✅ PASS | "External URLs must be validated against allowlist"
  (Violation already covered under A10 – SSRF)
- ✅ PASS | "Passwords must be hashed with BCrypt, MD5/SHA1 forbidden"
💰 Token & Cost
| Provider  | anthropic       |
| Model     | claude-opus-4-6 |
| Tokens    | 10,971          |
| Est. cost | ~$0.66 *        |
| Duration  | 1.9 min         |

* Estimated based on provider pricing at time of testing.
### Model Comparison (Real Test Results)
**3 Projects × 5 Ollama Models + Claude**

| Model | Findings | Tokens | Est. Cost | Time/proj |
|---|---|---|---|---|
| qwen2.5-coder:14b | 81 | 14,349 | $0.00 | ~3.5 min |
| deepseek-coder:6.7b | 17 | 10,004 | $0.00 | ~1.5 min |
| codellama:13b | 0* | 12,443 | $0.00 | ~1.8 min |
| llama3.1:8b | 11 | 11,987 | $0.00 | ~1.5 min |
| llama3.2 | 0 | 11,535 | $0.00 | ~40s |
| claude-sonnet-4 | — | 4,421 | ~$0.044 | ~30s |
| claude-opus-4-6 | 141 | 31,500 | ~$1.84 | ~1.9 min |

\* codellama produced prose output and did not follow the structured format.

Costs are estimates based on provider pricing at the time of testing. Actual costs may vary — check your provider dashboard for exact billing. Ollama models are always free (local inference).
---
## Why SentinelReview?
| Feature | Details |
|---------|---------|
| **Expert-Level Review** | Not just a scanner — reviews like a senior engineer (security + quality + standards) |
| **OWASP Top 10 Analysis** | Line-level findings + fix suggestions for A01–A10 |
| **Code Quality Analysis** | Architecture, error handling, naming, documentation, maintainability |
| **Custom Rules Engine** | Enforce your team's rules — auth policies, coding standards, naming conventions |
| **6 AI Providers** | Ollama (local, free), Gemini, Groq, DeepSeek, OpenAI, Anthropic |
| **Multi-Tool MCP** | Works in Cursor, Windsurf, VS Code (Copilot), Claude Desktop, and any MCP client |
| **CLI + Node API** | Terminal CLI (`sentinelreview review`) and programmatic Node.js API |
| **Multi-Language Reports** | Turkish, English, or both in a single report |
| **Token & Cost Tracking** | Input/output token count and USD cost in every report |
| **HTML Report Output** | Auto-generates styled HTML reports to `.sentinelreview/reports/` (toggle on/off) |
| **Branch Diff Support** | Review diffs between `main...feature/x` |
| **Local/Internal Models** | Works offline with Ollama on your LAN or internal server |
---
## Table of Contents
1. [Quick Start](#1-quick-start)
2. [What is SentinelReview?](#2-what-is-sentinelreview)
- [How MCP Works — The Full Flow](#how-mcp-works--the-full-flow)
- [Architecture & Token Optimization](#architecture--token-optimization)
3. [Provider Options](#3-provider-options)
4. [Prerequisites](#4-prerequisites)
5. [Node.js Installation](#5-nodejs-installation)
6. [SentinelReview Installation](#6-sentinelreview-installation)
7. [Configuration](#7-configuration)
8. [MCP Integration (Cursor, Windsurf, VS Code, Claude Desktop)](#8-mcp-integration-cursor-windsurf-vs-code-claude-desktop)
9. [First Use](#9-first-use)
10. [Usage Outside MCP](#10-usage-outside-mcp)
11. [MCP Tool Reference](#11-mcp-tool-reference)
12. [Custom Rules](#12-custom-rules)
13. [Config Reference](#13-config-reference)
14. [Project Structure](#14-project-structure)
15. [FAQ](#15-faq)
**Turkish documentation:** [README.tr.md](docs/README.tr.md) · [Cursor Step-by-Step Guide (HTML)](docs/cursor-kullanim-rehberi.html)
---
## 1. Quick Start
```bash
# 1. Install
npm install -g @szcn/sentinelreview
```

Type in Cursor chat:

setup

The wizard walks you through provider selection and API key setup. That's it.
## 2. What is SentinelReview?
SentinelReview reviews your code like a senior engineer — not just security scanning, but a complete code review covering vulnerabilities, code quality, standards, and your team's custom rules. It runs inside Cursor (or any MCP client); when you say "review this code," it takes the git diff, sends it to your chosen AI provider, and produces a structured, actionable report.
What does it find?
| | Category | Example Findings |
|---|---|---|
| | **Security** | |
| 🔑 | Authentication | Hardcoded passwords, weak hashing (MD5), token lifetime |
| 💉 | Injection | SQL injection, OS command injection, XSS |
| 🔒 | Access Control | IDOR, missing authorization, privilege escalation |
| 🌐 | SSRF | User-input-driven URL requests without allowlist |
| ⚙️ | Misconfiguration | Debug mode, verbose errors, CORS wildcard |
| 📦 | Components | Insecure libraries, deprecated APIs |
| 📝 | Logging | Logging passwords, missing audit trail |
| | **Code Quality** | |
| 🏗️ | Architecture | Functions too long, SRP violations, god classes |
| 🧩 | Error Handling | Silently ignored errors, missing error propagation |
| 📛 | Naming & Style | Magic numbers/strings, inconsistent conventions |
| 📖 | Documentation | Missing public API docs, unclear function signatures |
| ♻️ | Maintainability | Code duplication, tight coupling, dead code |
| | **Custom Rules** | |
| 📋 | Your team rules | Rate limiting, auth decorators, BCrypt policy, etc. |
How it works:
[Say "review_staged" in Cursor] → [git diff --staged captured]
→ [Sent to chosen provider] → [Full review report returned]
├── 🔒 OWASP Security Analysis (A01–A10)
├── 🧹 Code Quality (architecture, errors, naming)
├── 📋 Custom Rule Compliance (your rules)
    └── 💰 Token & Cost Summary

### How MCP Works — The Full Flow
SentinelReview is an MCP (Model Context Protocol) server. When you add it to Cursor (or any MCP client), here's what happens step by step:
┌─────────────────────────────────────────────────────────────────────────┐
│ MCP Lifecycle │
│ │
│ 1. CONFIG │
│ You add sentinelreview to ~/.cursor/mcp.json │
│ ┌──────────────────────────────────────────┐ │
│ │ { "mcpServers": { "sentinelreview": { │ │
│ │ "command": "npx", │ │
│ │ "args": ["-y","@szcn/sentinelreview"] │ │
│ │ }}} │ │
│ └──────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ 2. LAUNCH (automatic on Cursor start) │
│ Cursor spawns: npx -y @szcn/sentinelreview │
│ Process starts → StdioServerTransport (stdin/stdout) │
│ 16+ tools registered → Cursor sees them in chat toolbar │
│ │ │
│ ▼ │
│ 3. IDLE — Tools are passively available │
│ Nothing happens until YOU ask the AI something. │
│ No automatic scanning, no background processes. │
│ │ │
│ ▼ │
│ 4. TRIGGER — You type in chat │
│ "review my staged code for security issues" │
│ │ │
│ ▼ │
│ 5. AI DECIDES which tool to call │
│ Cursor AI reads your message → picks review_staged │
│ Sends JSON-RPC call over stdin to the MCP process │
│ │ │
│ ▼ │
│ 6. EXECUTION │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ review_staged handler: │ │
│ │ a) git diff --staged → capture diff │ │
│ │ b) chunkDiff(diff) → split if too large │ │
│ │ c) Build prompt (OWASP + severity + custom rules) │ │
│ │ d) Send to AI provider (Ollama/Gemini/Groq/...) │ │
│ │ e) Receive response → format report │ │
│ │ f) Save HTML to .sentinelreview/reports/ (if on) │ │
│ │ g) Return report + token/cost summary │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ 7. RESPONSE — Report displayed in Cursor chat │
│ OWASP findings + Code Quality + Custom Rules + Cost │
│ │
└─────────────────────────────────────────────────────────────────────────┘

Key points:
- Reviews never run automatically — you must ask the AI in chat.
- The MCP process stays alive as long as Cursor is open. No restart needed between reviews.
- Config changes (`set_provider`, `set_severity`, etc.) take effect immediately — just run the next review.
- The AI decides which tool to call based on your natural language request.
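Step 5 in the lifecycle above is an ordinary JSON-RPC 2.0 exchange over stdio. A representative `tools/call` request, shaped per the MCP specification (the `id` value and empty arguments here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "review_staged",
    "arguments": {}
  }
}
```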
### Architecture & Token Optimization
SentinelReview is designed to minimize token consumption and cost while maximizing review quality. Here's how:
Smart Chunking — Large Diff Handling
Large diffs (hundreds of files, thousands of lines) can easily exceed a model's context window, causing silent truncation or API errors. SentinelReview automatically splits large diffs into manageable chunks:
┌──────────────────────────────────────────────────────────────────┐
│ Diff Chunking Pipeline │
│ │
│ Input: 50,000 char diff (≈12,500 tokens) │
│ │ │
│ ▼ │
│ chunkDiff(diff, maxTokens=6000) │
│ Splits on file boundaries (diff --git lines) │
│ Each chunk ≤ 24,000 chars (≈6,000 tokens) │
│ │ │
│ ┌───────────┼───────────┐ │
│ ▼ ▼ ▼ │
│ Chunk 1 Chunk 2 Chunk 3 │
│ (files A-D) (files E-H) (files I-K) │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ reviewSingle reviewSingle reviewSingle │
│ (LLM call) (LLM call) (LLM call) │
│ │ │ │ │
│ └───────────┼───────────┘ │
│ ▼ │
│ Merged Report: │
│ ## 📦 Part 1/3 — [OWASP + Quality findings] │
│ --- │
│ ## 📦 Part 2/3 — [OWASP + Quality findings] │
│ --- │
│ ## 📦 Part 3/3 — [OWASP + Quality findings] │
│ │
│ Token totals and costs are summed across all chunks. │
└──────────────────────────────────────────────────────────────────┘

Why this matters:
- Without chunking: a 50K-char diff sent to an 8K-token context model → truncated output, missed vulnerabilities
- With chunking: each chunk gets full attention → complete coverage, accurate token/cost tracking
For small diffs (under ~6000 tokens), no splitting occurs — single LLM call, zero overhead.
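The file-boundary splitting step can be sketched as below. This is a hypothetical helper, not the package's actual `chunkDiff` implementation; it assumes the behavior described above (split on `diff --git` headers, pack files under a character budget of roughly 4 characters per token):

```javascript
// Minimal sketch of file-boundary diff chunking (illustrative only).
// Splits on "diff --git" lines so no file is cut mid-hunk, then packs
// whole files into chunks under a character budget.
function chunkDiffSketch(diff, maxTokens = 6000) {
  const maxChars = maxTokens * 4; // rough chars-per-token heuristic
  // Lookahead split keeps each "diff --git" header attached to its file.
  const files = diff.split(/^(?=diff --git )/m);
  const chunks = [];
  let current = '';
  for (const file of files) {
    if (current && current.length + file.length > maxChars) {
      chunks.push(current); // budget exceeded — start a new chunk
      current = '';
    }
    current += file;
  }
  if (current) chunks.push(current);
  return chunks;
}

// Example: three small single-file diffs with a tiny budget.
const diff = [
  'diff --git a/a.js b/a.js\n+console.log("a")\n',
  'diff --git a/b.js b/b.js\n+console.log("b")\n',
  'diff --git a/c.js b/c.js\n+console.log("c")\n',
].join('');
console.log(chunkDiffSketch(diff, 20).length); // 3 — each ~43-char file busts the 80-char budget when paired
console.log(chunkDiffSketch(diff, 6000).length); // 1 — small diff, single chunk
```

Note that chunks always reassemble into the original diff, so nothing is dropped, only regrouped.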
Severity Filtering — Control Token Spend
The severity setting doesn't just filter the report — it's injected directly into the LLM prompt, telling the model what to focus on:
| Severity | Prompt Instruction | Token Impact |
|---|---|---|
| `strict` | Report ALL findings: CRITICAL, HIGH, MEDIUM, LOW. Skip nothing. | Highest — full analysis |
| `normal` | Report only CRITICAL and HIGH. Skip MEDIUM and LOW. | ~40-60% fewer output tokens |
| `light` | Report only CRITICAL. No code quality comments. | ~60-80% fewer output tokens |
Estimated token savings (same 200-line diff, Claude Sonnet):
strict → ~3,200 output tokens → ~$0.048
normal → ~1,800 output tokens → ~$0.027
light → ~800 output tokens → ~$0.012
Estimated daily cost for teams running 50 reviews/day:
strict ≈ $2.40/day vs light ≈ $0.60/day (4× savings)
* All costs are estimates. Actual costs depend on diff size, model,
  and provider pricing, which may change. Check your provider dashboard.

Recommendation: Use strict for pre-release/security audits, normal for daily development, light for quick CI checks.
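The per-review and per-day figures above follow from simple arithmetic. A back-of-envelope check, assuming a rate of $15 per million output tokens (a Claude Sonnet-class figure at the time of testing; verify current rates on your provider dashboard):

```javascript
// Back-of-envelope cost estimate: output tokens × assumed price.
// PRICE is an assumption for illustration, not a quoted rate.
const PRICE_PER_M_OUTPUT_TOKENS = 15; // USD per 1M output tokens (assumed)

function estimate(outputTokens, reviewsPerDay = 50) {
  const perReview = (outputTokens * PRICE_PER_M_OUTPUT_TOKENS) / 1_000_000;
  return { perReview, perDay: perReview * reviewsPerDay };
}

console.log(estimate(3200)); // strict ≈ $0.048/review, ≈ $2.40/day at 50 reviews
console.log(estimate(800));  // light  ≈ $0.012/review, ≈ $0.60/day at 50 reviews
```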
Security Hardening
All git operations use execFileAsync (array arguments) instead of execAsync (shell string interpolation). This prevents command injection through crafted file paths or branch names:
Safe:   execFileAsync('git', ['diff', 'HEAD', '--', filepath])
Unsafe: execAsync(`git diff HEAD -- ${filepath}`)
                                    ↑ injection point

Branch names and file paths are validated with allowlist patterns before execution.
## 3. Provider Options
SentinelReview supports six AI providers. Ollama (free, local) is the default.
Provider Comparison
| Provider | Est. Cost | Quality | Speed | Privacy | Requirement |
|---|---|---|---|---|---|
| Ollama | 🆓 Free | Good* | Medium | ✅ Fully local | Ollama installed + running |
| Gemini | 🆓 Free | Good | Fast | ☁️ Cloud | API key (free) |
| Groq | 🆓 Free | Good | ⚡ Very fast | ☁️ Cloud | API key (free) |
| DeepSeek | 💲 Very cheap | Very Good | Fast | ☁️ Cloud | API key (paid) |
| OpenAI | 💰 Paid | Very Good | Fast | ☁️ Cloud | API key (paid) |
| Anthropic | 💰 Paid | Best | Fast | ☁️ Cloud | API key (paid) |
Cost estimates are based on provider pricing pages as of Feb 2026. Prices may change — always check your provider's dashboard for current rates.
* With Ollama, qwen2.5-coder:14b (default) provides the best local security review quality. Lighter alternatives: llama3.2, deepseek-coder:6.7b.
🆓 Ollama — Free + Fully Local
Your code never leaves your machine — runs entirely locally. Ideal for privacy-sensitive projects.
Setup:
# 1. Install Ollama (macOS)
brew install ollama
# Windows / Linux: https://ollama.com/download
# 2. Pull a model (once)
ollama pull qwen2.5-coder:14b # recommended — best security review quality (~9 GB)
# or lighter:
ollama pull llama3.2 # ~2 GB
ollama pull deepseek-coder:6.7b # code-focused, ~4 GB
# 3. Start Ollama server
ollama serve

Select in SentinelReview:
The default Ollama model is qwen2.5-coder:14b. To change:
set_provider ollama
set_model qwen2.5-coder:14b

Local and Internal (on-premise) Config
Ollama can run on the same machine or on a server within your network. No API key needed.
1. Local — Ollama on same machine (default)
Works without any config file (defaults: http://localhost:11434, qwen2.5-coder:14b). You can explicitly set it:
{
"provider": "ollama",
"ollamaBaseUrl": "http://localhost:11434",
"ollamaModel": "qwen2.5-coder:14b",
"language": "en",
"severity": "strict",
"customRules": [
"All API endpoints must include authentication and authorization where needed",
"Database queries must use parameterized statements, no string concatenation",
"Sensitive data (passwords, tokens) must not be written to logs or error messages",
"External service URLs and API keys must be read from environment variables or secure config",
"User input must be sanitized/escaped in all contexts (SQL, commands, redirects)"
]
}

2. Internal — Ollama on remote/corporate server
If Ollama runs on a different machine (Docker, Kubernetes, internal network), just change ollamaBaseUrl:
{
"provider": "ollama",
"ollamaBaseUrl": "http://ollama.company.internal:11434",
"ollamaModel": "qwen2.5-coder:14b",
"language": "en",
"severity": "strict",
"customRules": [
"All API endpoints must include authentication and authorization where needed",
"Database queries must use parameterized statements, no string concatenation",
"Sensitive data (passwords, tokens) must not be written to logs or error messages",
"External service URLs and API keys must be read from environment variables or secure config",
"User input must be sanitized/escaped in all contexts (SQL, commands, redirects)"
]
}

Example addresses:
- `http://192.168.1.100:11434` — local network IP
- `http://ollama.internal:11434` — corporate DNS
- `http://localhost:11434` — same machine (default)
The model name on that server (check with `ollama list`) must match `ollamaModel`. If Ollama is already your active provider, just save the config and run a review.
🆓 Gemini — Free Cloud
Google's Gemini model. gemini-2.0-flash is currently free and fast.
Get an API key:
- Go to aistudio.google.com (Google account is enough)
- "Get API key" → "Create API key"
- Copy the key starting with `AIza...`
Select in SentinelReview:
set_api_key provider="gemini" key="AIzaSy..."
set_provider gemini

🆓 Groq — Free + Fastest Cloud
The fastest inference thanks to LPU hardware. llama-3.3-70b-versatile delivers quality results.
Get an API key:
- Go to console.groq.com
- "API Keys" → "Create API Key"
- Copy the key starting with `gsk_...`
Select in SentinelReview:
set_api_key provider="groq" key="gsk_..."
set_provider groq

💲 DeepSeek — Very Cheap Cloud
DeepSeek-V3 excels in quality/price ratio. deepseek-chat is powerful; deepseek-reasoner for deep analysis.
Get an API key:
- Go to platform.deepseek.com
- "API Keys" → "Create new API key"
- Copy the key starting with `sk-...`
Select in SentinelReview:
set_api_key provider="deepseek" key="sk-..."
set_provider deepseek

For deep analysis, use the reasoner model:
set_model provider="deepseek" model="deepseek-reasoner"

💰 OpenAI (ChatGPT) — GPT-4o and Above
OpenAI's GPT models. gpt-4o-mini is budget-friendly.
Get an API key:
- Go to platform.openai.com
- "API Keys" → "Create new secret key"
- Copy the key starting with `sk-...`
Select in SentinelReview:
set_api_key provider="openai" key="sk-..."
set_provider openai

For a more economical model:
set_model provider="openai" model="gpt-4o-mini"

💰 Anthropic Claude — Highest Quality
The most comprehensive security analysis results.
Get an API key:
- Go to console.anthropic.com
- "API Keys" → "Create Key"
- Copy the key starting with `sk-ant-...`
Select in SentinelReview:
set_api_key provider="anthropic" key="sk-ant-..."
set_provider anthropic

View active provider
provider_info

Example output:
📊 SentinelReview Config
Active Provider : groq
Active Model : llama-3.3-70b-versatile
Language : en
Severity : strict
Custom rules : 0
Provider Status:
Ollama → qwen2.5-coder:14b | Local — no API key needed
Gemini → gemini-2.0-flash | Key: ✅ set
Groq → llama-3.3-70b-versatile | Key: ✅ set
OpenAI → gpt-4o | Key: ❌ missing
DeepSeek → deepseek-chat | Key: ❌ missing
Anthropic → claude-opus-4-6 | Key: ✅ set

## 4. Prerequisites
Before starting, make sure you have:
| Tool | Minimum Version | Check Command |
|---|---|---|
| Node.js | v20 or later | node --version |
| npm | v8 or later | npm --version |
| Git | any | git --version |
| MCP client | Cursor, Windsurf, VS Code, or Claude Desktop | — |
Note: Any MCP-compatible tool works. Cursor (late 2024+), Windsurf, VS Code with Copilot, and Claude Desktop all support MCP.
## 5. Node.js Installation
Skip this step if Node.js is already installed. Check with `node --version`.
macOS
Option A — Official installer (recommended for beginners):
1. Go to nodejs.org.
2. Click "LTS" (Long Term Support) — the more stable version (20.x LTS rather than 22.x Current).
3. Open the downloaded `.pkg` file and follow the installer.
4. Open Terminal and verify:

node --version   # should output v20.x.x
npm --version    # should output 8.x or later
Option B — Homebrew (if available):
brew install node@20

Windows
1. Download the LTS `.msi` from nodejs.org.
2. Run the installer; leave all options at default.
3. Open Command Prompt or PowerShell as administrator and verify:

node --version
npm --version
Linux (Ubuntu/Debian)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
node --version

## 6. SentinelReview Installation
Global install (recommended)
Single command in Terminal or Command Prompt:
npm install -g @szcn/sentinelreview

Expected terminal output:
───────────────────────────────────────────
added 98 packages in 8s
+ @szcn/sentinelreview@1.0.0
───────────────────────────────────────────

Verify:
sentinelreview --version # should not throw an error
# or
node -e "require('@szcn/sentinelreview')"

From source (developer)
git clone https://github.com/szcn/sentinel-review.git
cd sentinel-review
npm install
npm run build

## 7. Configuration
SentinelReview stores settings in `~/.sentinelreview/config.json`. `~` refers to your home directory:
- macOS / Linux: `/Users/yourname/.sentinelreview/config.json`
- Windows: `C:\Users\yourname\.sentinelreview\config.json`
The config file is optional: without it, defaults are used (Ollama + qwen2.5-coder:14b, language: en, severity: strict). For cloud providers, use the setup wizard in Cursor or the examples below.
Auto-create (Ollama — default, no API key needed)
macOS / Linux:
mkdir -p ~/.sentinelreview
cat > ~/.sentinelreview/config.json << 'EOF'
{
"provider": "ollama",
"ollamaModel": "qwen2.5-coder:14b",
"language": "en",
"severity": "strict",
"customRules": []
}
EOF

Windows (PowerShell):
New-Item -ItemType Directory -Force -Path "$HOME\.sentinelreview"
@'
{"provider":"ollama","ollamaModel":"qwen2.5-coder:14b","language":"en","severity":"strict","customRules":[]}
'@ | Set-Content "$HOME\.sentinelreview\config.json"

Cloud provider config example
For Groq or Anthropic:
{
"provider": "groq",
"groqApiKey": "gsk_...",
"groqModel": "llama-3.3-70b-versatile",
"language": "en",
"severity": "strict",
"customRules": []
}

API keys can also be saved via `set_api_key` in Cursor chat.
Verification
Test that your config is read correctly:
node -e "
const { loadConfig } = require('@szcn/sentinelreview/out/config');
const c = loadConfig();
console.log('Provider:', c.provider);
console.log('Model:', c.ollamaModel || c.anthropicModel || c.groqModel || '-');
console.log('Language:', c.language);
"

Expected output (defaults):
Provider: ollama
Model: qwen2.5-coder:14b
Language: en

## 8. MCP Integration (Cursor, Windsurf, VS Code, Claude Desktop)
SentinelReview works with any MCP-compatible tool. Pick your editor below and add the config.
Quick Reference
| Tool | Config File | Docs |
|---|---|---|
| Cursor | `~/.cursor/mcp.json` | cursor.sh/docs |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` | docs.codeium.com |
| VS Code (Copilot) | `.vscode/mcp.json` (project) | code.visualstudio.com |
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) | modelcontextprotocol.io |
Cursor
Method 1 — Via Settings (easiest):
- Open Cursor → Settings (gear icon, bottom-left).
- Search "MCP" → Click "Open MCP Config".
- Paste the config below and save.
Method 2 — Edit file directly:
File: ~/.cursor/mcp.json (global) or PROJECT/.cursor/mcp.json (project-only)
{
"mcpServers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"]
}
}
}

With API keys (cloud providers):
{
"mcpServers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-...",
"GEMINI_API_KEY": "AIza...",
"GROQ_API_KEY": "gsk_..."
}
}
}
}

- Restart Cursor (fully close and reopen).
- Open a new chat — you should see sentinelreview tools:
┌──────────────────────────────────────────────────────────┐
│ Cursor AI Chat │
│ │
│ Tools: 🔧 sentinelreview (9 tools) ← Should appear │
│ ├── review_staged │
│ ├── review_branch │
│ ├── review_file │
│ └── ... │
└──────────────────────────────────────────────────────────┘

Windsurf (Codeium)
File: ~/.codeium/windsurf/mcp_config.json
{
"mcpServers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"]
}
}
}

With API keys:
{
"mcpServers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-...",
"GROQ_API_KEY": "gsk_..."
}
}
}
}

Restart Windsurf. In Cascade chat, SentinelReview tools will appear.
VS Code (GitHub Copilot Chat)
VS Code supports MCP servers via Copilot Chat (requires Copilot extension).
File: .vscode/mcp.json (project root) — or User Settings for global
{
"servers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"],
"env": {}
}
}
}

With API keys:
{
"servers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-...",
"GEMINI_API_KEY": "AIza..."
}
}
}
}

In Copilot Chat, use @sentinelreview or ask Copilot to use the review tools.
Claude Desktop
File (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json
File (Windows): %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"]
}
}
}

With API keys:
{
"mcpServers": {
"sentinelreview": {
"command": "npx",
"args": ["-y", "@szcn/sentinelreview"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-...",
"GROQ_API_KEY": "gsk_..."
}
}
}
}

Restart Claude Desktop. Ask Claude: "Use review_staged to review my code."
Global Install Alternative
For any tool above, if you installed globally (npm install -g @szcn/sentinelreview), replace the command:
{
"command": "sentinelreview",
"args": []
}

Note: Environment variables in MCP config override keys in `~/.sentinelreview/config.json`. For Ollama (local), no `env` block is needed.
## 9. First Use
Preparation: Open your project
- Open a project folder in Cursor (must be a Git repo).
- Make a change in a file.
- Stage the change:
git add filename.py
# or all changes:
git add .

Start a review
Type in Cursor chat:
use review_staged to review my staged changes

or simply:

review my staged code for security issues

What can I review?
| What to review | What to type in Cursor | Description |
|---|---|---|
| Changes I'm about to commit (staged) | review_staged or "review my staged code" | Diff from git add |
| Diff between two branches | review_branch tool, base=main head=feature/login | Diff between main and feature/login |
| Single file change (vs HEAD) | review_file tool with src/api/users.py | Only that file's working tree + staged diff |
| Diff from another source | review_diff tool with raw diff text | CI output, patch file, etc. |
Example — Between any branches:
To review all changes between main and feature/xyz:
use review_branch: base=main, head=feature/xyz

It fetches origin first; if the branches aren't on the remote, local branches are used. The same settings (language, provider, custom rules) apply.
Manual Triggering
Reviews never run automatically; they only run when you trigger them.
| Where | How to trigger |
|---|---|
| Cursor | Type in chat: "review my staged code" or "review_branch base=main head=feature/xyz" |
| Terminal | After git add ., run sentinelreview review |
| Node / CI | reviewDiff(diff, rules) or sentinelreview review in a script |
Pre-commit hook — to automatically trigger a review before each commit:
# .git/hooks/pre-commit (make executable: chmod +x .git/hooks/pre-commit)
#!/bin/sh
sentinelreview review > review-output.txt 2>&1 || true
# Review output is saved; hook does not block the commit
exit 0

To block commits based on review results: replace exit 0 with sentinelreview review || exit 1 and write a wrapper that sets the exit code based on pass/fail criteria.
┌─────────────────────────────────────────────────────────┐
│ Cursor AI Chat │
│ │
│ You: review my staged code for security issues │
│ │
│ AI: Running review_staged tool... │
│ [🔧 review_staged] ✓ │
│ │
│ ## 🔒 OWASP Security Analysis │
│ │
│ ### A01 – Broken Access Control │
│ ✅ No issues found │
│ │
│ ### A02 – Cryptographic Failures │
│ 🔴 CRITICAL | src/auth.py:23 │
│ **Issue:** Password hashed with MD5... │
│ ... │
└─────────────────────────────────────────────────────────┘

Report Format
Every review produces a structured report with three sections:
## 🔒 OWASP Security Analysis ← Security vulnerabilities (A01–A10)
### A01 – Broken Access Control
✅ No issues found
### A02 – Cryptographic Failures
🔴 CRITICAL | src/auth.py:23
**Issue:** Password hashed with MD5 — cryptographically broken
**Fix:**
```python
from bcrypt import hashpw, gensalt
hashed = hashpw(password.encode(), gensalt(rounds=12))
```

### A03 – Injection
...
🧹 Code Quality ← Architecture, errors, naming, docs
- 🟠 HIGH | src/handlers/user.go:33-85
  **Issue:** Function violates SRP — handles parsing, DB, and response
  **Fix:** Extract into handler → service → repository layers
- 🟡 MEDIUM | src/handlers/user.go:92
  **Issue:** Error from db.Close() silently discarded
- 🔵 LOW | src/config/constants.go:5
  **Issue:** Magic string "secret-key" — use a named constant
📋 Custom Rule Compliance ← Your team's rules checked
- 🟠 VIOLATED | src/handlers/user.go:55
  **Rule:** "All endpoints must have rate limiting"
  deleteUserHandler has no rate limit middleware
- ✅ PASS | "Passwords must be hashed with BCrypt"
💰 Token & Cost Summary
| Provider | ollama |
|---|---|
| Model | qwen2.5-coder:14b |
| Tokens | 4,713 |
| Est. cost | $0.00 (local) |
**Severity levels:**
| Icon | Level | Where | Meaning |
|------|-------|-------|---------|
| 🔴 | CRITICAL | Security | Fix immediately; production risk |
| 🟠 | HIGH / VIOLATED | Security / Quality / Rules | High-priority issue or rule violation |
| 🟡 | MEDIUM | Security / Quality | Medium risk, address soon |
| 🔵 | LOW | Quality | Improvement suggestion |
| ✅ | PASS | All | No issues / rule satisfied |
**Get reports in both languages:** Use `set_language "both"` — the report will contain a Turkish section (## Türkçe Rapor) followed by the same content in English (## English Report).
**HTML Report Output:** Every review automatically generates a styled HTML report saved to `.sentinelreview/reports/` in your project directory. The file path is shown at the end of each review. To disable:
set_html_report enabled=false
Reports are named by source and timestamp: `review-staged-2026-02-26_14-30-45.html`, `review-branch-main-vs-feature_auth-...html`, etc.
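The naming scheme can be reproduced with a small helper (a sketch only; `reportFileName` is hypothetical and not part of the package's code):

```javascript
// Hypothetical helper mirroring the documented pattern:
// review-<source>-YYYY-MM-DD_HH-MM-SS.html, with '/' in branch
// names replaced by '_' so the result is a valid filename.
function reportFileName(source, date = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const stamp =
    `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}` +
    `_${pad(date.getHours())}-${pad(date.getMinutes())}-${pad(date.getSeconds())}`;
  return `review-${source.replace(/\//g, '_')}-${stamp}.html`;
}

// reportFileName('staged', new Date(2026, 1, 26, 14, 30, 45))
// → 'review-staged-2026-02-26_14-30-45.html'
```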
---
## 10. Usage Outside MCP
SentinelReview can be used **from the terminal** or **from your own Node.js scripts** without Cursor or MCP. The same config (`~/.sentinelreview/config.json`) and provider/model settings apply.
### 10.1 Terminal (CLI)
Review staged changes:
```bash
# In a git repo, after staging changes
git add .
sentinelreview review
```

Output goes directly to stdout (OWASP report + token/cost summary). To save to a file:

```bash
sentinelreview review > review.md
# or
sentinelreview review | tee review.md
```

Requirement: Must be installed via npm install -g @szcn/sentinelreview; config and provider (Ollama server or cloud API key) must be ready.
### 10.2 Programmatic Node.js Usage
Call the review in your own scripts (CI, bots, automation):
```javascript
const { reviewDiff } = require('@szcn/sentinelreview/out/reviewer');
const { getStagedDiff, getBranchDiff, getFileDiff } = require('@szcn/sentinelreview/out/git');
const { loadConfig } = require('@szcn/sentinelreview/out/config');

async function main() {
  const config = loadConfig();
  const diff = await getStagedDiff();
  // or: getBranchDiff('main', 'feature'), getFileDiff('src/api.js')
  if (diff.includes('No changes found')) {
    console.log(diff);
    return;
  }
  const result = await reviewDiff(diff, config.customRules);
  console.log(result.report);
  console.log('Tokens:', result.totalTokens, 'Cost:', result.costUsd);
  // result.report → Markdown text
  // result.inputTokens, result.outputTokens, result.costUsd, result.durationMs, result.provider, result.model
}

main().catch(err => { console.error(err); process.exit(1); });
```

Note: The package must be installed globally or locally; out/ must exist (run npm run build if installing from source). In CI, npx @szcn/sentinelreview review is often more practical.
### 10.3 Summary
| Usage | Command / Method |
|---|---|
| Cursor (MCP) | review_staged tool in Cursor chat |
| Terminal | sentinelreview review (staged diff → stdout) |
| Node script | require('@szcn/sentinelreview/out/reviewer').reviewDiff(diff, rules) |
| Save to file | sentinelreview review > review.md |
## 11. MCP Tool Reference
Setup & Info:
| Command | Description |
|---|---|
| setup | First-time setup wizard — provider selection and API key steps |
| provider_info | Show all provider statuses, models, and config summary |
| config_show | Show full config file contents (API keys are masked) |
Provider Configuration:
| Command | Parameters | Description |
|---|---|---|
| set_provider | provider | Active provider: ollama / gemini / groq / deepseek / openai / anthropic |
| set_api_key | provider, key | Save provider API key |
| set_model | provider, model | Change provider model name |
| set_language | language | Report language: tr (Turkish), en (English), both (both languages) |
| set_severity | severity | Sensitivity: strict / normal / light |
| set_html_report | enabled | Enable/disable automatic HTML report generation (default: true) |
Review Tools:
| Command | Parameters | Description |
|---|---|---|
| review_staged | — | Captures git diff --staged and reviews it |
| review_branch | base, head | Reviews diff between two branches |
| review_file | filepath | Reviews a single file's changes vs HEAD |
| review_diff | diff | Reviews raw diff text directly |
Custom Rule Tools:
| Command | Parameters | Description |
|---|---|---|
| add_custom_rule | rule | Add a custom review rule |
| list_rules | — | List all custom rules with their indices |
| remove_custom_rule | index | Remove a custom rule by index |
Usage Examples (in Cursor chat)
# Initial setup
setup
# See all provider status
provider_info
# Switch to free Groq
set_api_key provider="groq" key="gsk_..."
set_provider groq
# Switch to DeepSeek (very cheap)
set_api_key provider="deepseek" key="sk-..."
set_provider deepseek
set_model provider="deepseek" model="deepseek-reasoner"
# OpenAI gpt-4o-mini (budget option)
set_api_key provider="openai" key="sk-..."
set_provider openai
set_model provider="openai" model="gpt-4o-mini"
# Local Ollama
set_provider ollama
set_model provider="ollama" model="codellama:7b"
# Review operations
review my staged code
use review_branch, base=main, head=feature/auth
review src/api/users.py with review_file
# Language and severity (tr | en | both)
set_language "en"
set_language "both" # get report in both Turkish and English
set_severity "strict"
# HTML report toggle
set_html_report enabled=true # auto-save HTML to .sentinelreview/reports/ (default)
set_html_report enabled=false # disable HTML, Markdown only
# Custom rules
add_custom_rule rule="All API endpoints must include rate limiting"
list_rules

## 12. Custom Rules
You can configure SentinelReview with project-specific rules. These are evaluated in addition to the OWASP analysis.
Example rule set — Python/Flask project
All Flask routes must include @login_required decorator
Flask SECRET_KEY must be read from environment variable, not hardcoded
subprocess.run with shell=True is forbidden
External URL requests must have allowlist validation

Example rule set — Java/Spring project
All endpoints must include @PreAuthorize annotation
Password hashing must use BCrypt with minimum 12 rounds
PreparedStatement required, Statement.executeQuery(String) is forbidden
JWT token expiry must not exceed 24 hours

Adding rules
Via Cursor chat:
use add_custom_rule to add:
"All database queries must be parameterized"

Directly in config.json:
{
"provider": "anthropic",
"anthropicApiKey": "sk-ant-...",
"language": "en",
"anthropicModel": "claude-opus-4-6",
"severity": "normal",
"customRules": [
"All API endpoints must include JWT validation",
"User input must always be sanitized",
"Passwords must be hashed with BCrypt, MD5/SHA1 forbidden"
]
}

## 13. Config Reference
Config file: ~/.sentinelreview/config.json
Provider settings:
| Field | Type | Default | Description |
|---|---|---|---|
| provider | "ollama" / "gemini" / "groq" / "openai" / "deepseek" / "anthropic" | "ollama" | Active AI provider |
| anthropicApiKey | string | "" | Anthropic API key. Env: ANTHROPIC_API_KEY |
| geminiApiKey | string | "" | Gemini API key. Env: GEMINI_API_KEY |
| groqApiKey | string | "" | Groq API key. Env: GROQ_API_KEY |
| openaiApiKey | string | "" | OpenAI API key. Env: OPENAI_API_KEY |
| deepseekApiKey | string | "" | DeepSeek API key. Env: DEEPSEEK_API_KEY |
| anthropicModel | string | "claude-opus-4-6" | Anthropic model |
| ollamaModel | string | "qwen2.5-coder:14b" | Ollama model |
| ollamaBaseUrl | string | "http://localhost:11434" | Ollama server URL |
| geminiModel | string | "gemini-2.0-flash" | Gemini model |
| groqModel | string | "llama-3.3-70b-versatile" | Groq model |
| openaiModel | string | "gpt-4o" | OpenAI model |
| deepseekModel | string | "deepseek-chat" | DeepSeek model |
General settings:
| Field | Type | Default | Description |
|---|---|---|---|
| language | "tr" / "en" / "both" | "tr" | Report language. both = Turkish first, then English in the same report |
| severity | "strict" / "normal" / "light" | "strict" | "strict": all minor issues reported. "light": critical only |
| htmlReport | boolean | true | Auto-generate HTML report to .sentinelreview/reports/ after each review |
| customRules | string[] | [] | Project-specific review rules |
Full config example (Groq with all features):
{
"provider": "groq",
"groqApiKey": "gsk_...",
"groqModel": "llama-3.3-70b-versatile",
"language": "en",
"severity": "strict",
"htmlReport": true,
"customRules": [
"All API endpoints must include authentication and authorization where needed",
"Database queries must use parameterized statements",
"Sensitive data must not be written to logs or error messages",
"API keys and secrets must be read from environment variables"
]
}

Local / Internal (Ollama) config examples:
| Use Case | ollamaBaseUrl | Note |
|---|---|---|
| Same machine | http://localhost:11434 | Default; used even without config |
| Corporate server | http://ollama.company.internal:11434 | Internal network Ollama |
| LAN IP | http://192.168.1.100:11434 | Static IP example |
Full internal config (all features):
{
"provider": "ollama",
"ollamaBaseUrl": "http://192.168.1.100:11434",
"ollamaModel": "qwen2.5-coder:14b",
"language": "en",
"severity": "strict",
"customRules": [
"All API endpoints must include authentication and authorization where needed",
"Database queries must use parameterized statements, no string concatenation",
"Sensitive data (passwords, tokens) must not be written to logs or error messages",
"External service URLs and API keys must be read from environment variables or secure config",
"User input must be sanitized/escaped in all contexts (SQL, commands, redirects)"
]
}

Config file: ~/.sentinelreview/config.json. After changes, just open a new chat in Cursor and run a review — no MCP restart needed.
Severity explained
The severity setting is injected directly into the LLM prompt, controlling what the model reports. This affects both review quality and estimated token cost:
| Value | What gets reported | Prompt instruction | Best for |
|---|---|---|---|
| "strict" | CRITICAL + HIGH + MEDIUM + LOW | "Report all findings, skip nothing" | Security audits, pre-release, fintech/healthcare |
| "normal" | CRITICAL + HIGH only | "Skip MEDIUM and LOW" | Daily development, PR reviews |
| "light" | CRITICAL only | "No code quality comments" | Quick CI checks, large diffs, cost-sensitive teams |
Token impact: light typically uses 60-80% fewer output tokens than strict for the same diff. All cost figures shown in reports are estimates based on provider pricing tables embedded in the tool — actual costs may differ. See Architecture & Token Optimization for details.
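As a rough illustration of how such an estimate is computed (the rate values here are placeholders; real provider pricing differs and changes over time):

```javascript
// Sketch: estimated cost from token counts and per-million-token rates.
// The rates passed in below are placeholders, not real provider prices.
function estimateCostUsd(inputTokens, outputTokens, rates) {
  return (inputTokens * rates.inputPerMTok + outputTokens * rates.outputPerMTok) / 1e6;
}

// Local Ollama is free, so its rate table is all zeros:
estimateCostUsd(4000, 713, { inputPerMTok: 0, outputPerMTok: 0 }); // → 0
```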
Environment variable precedence
Environment variables override config file values:
ANTHROPIC_API_KEY > config.anthropicApiKey
GEMINI_API_KEY > config.geminiApiKey
GROQ_API_KEY > config.groqApiKey
OPENAI_API_KEY > config.openaiApiKey
DEEPSEEK_API_KEY > config.deepseekApiKey

## 14. Project Structure
sentinelreview/
├── package.json # Dependencies and npm scripts
├── tsconfig.json # TypeScript configuration
├── .npmignore # Files excluded from npm publish
├── bin/
│ └── sentinelreview.js # Global binary entry point (#!/usr/bin/env node)
├── src/
│ ├── index.ts # MCP server — all tools registered here
│ ├── reviewer.ts # Multi-provider AI review + chunk orchestrator
│ ├── owasp.ts # OWASP Top 10 prompt templates
│ ├── config.ts # ~/.sentinelreview/config.json read/write
│ ├── git.ts # git diff commands (execFileAsync) + chunkDiff
│ └── html.ts # HTML report generator (dynamic provider footer)
├── out/ # Compiled JavaScript (npm run build output)
├── test-projects/ # Example projects with intentional vulnerabilities
│ ├── go-project/ # → Go REST API
│ ├── java-project/ # → Java controller
│ └── python-project/ # → Python Flask app
└── test-results/        # Test review reports (after npm run test:review)

Developer commands
npm run build # Compile TypeScript → out/
npm run dev # Watch mode — auto-compile on changes
npm start # Start MCP server (stdio)
npm run test:review # Run review on 3 test projects
npm publish          # Publish to npm (build runs automatically)

## 15. FAQ
"Cannot find module" error
# Run build first:
npm run build
# Then try again

"API key not found" error
- Check that ~/.sentinelreview/config.json exists.
- Ensure the relevant API key field is not empty.
- Alternatively, use an environment variable:
export ANTHROPIC_API_KEY="sk-ant-..."
SentinelReview tools not visible in Cursor
- Verify ~/.cursor/mcp.json is valid JSON.
- Fully close and reopen Cursor.
- Check server status in Cursor Settings → MCP.

┌──────────────────────────────────────────────┐
│ Settings → MCP                               │
│                                              │
│ sentinelreview  ● Connected  ← Should be     │
│                                 green        │
│                 ○ Error      ← If red,       │
│                                 check        │
│                                 mcp.json     │
└──────────────────────────────────────────────┘
"Not a git repo" error
review_staged and other git tools only work in a Git repository:
git init # create a new repo
git add . # stage files
# then request a review

Diff is too large, review is slow
SentinelReview automatically splits large diffs into ~6000-token chunks and reviews each separately (see Architecture & Token Optimization). However, very large diffs still mean multiple LLM calls. To speed things up:
- For a single file: use the review_file tool instead of reviewing the whole branch.
- Set severity: "light" — fewer findings = fewer output tokens = faster + cheaper.
- Use a faster provider: Groq (LPU hardware) or Gemini Flash for quick turnaround.
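The chunking strategy can be sketched as follows. This is an illustration only — the package's actual chunkDiff in src/git.ts may differ, and the 4-characters-per-token heuristic is an assumption:

```javascript
// Sketch: split a unified diff on file boundaries, then pack files into
// chunks that each stay under a rough token budget.
const TOKEN_BUDGET = 6000;
const roughTokens = (text) => Math.ceil(text.length / 4); // ~4 chars per token

function chunkDiffSketch(diff, budget = TOKEN_BUDGET) {
  // Each file's section of a unified diff starts with "diff --git"
  const files = diff.split(/^(?=diff --git )/m).filter(Boolean);
  const chunks = [];
  let current = '';
  for (const file of files) {
    // Start a new chunk if adding this file would blow the budget
    if (current && roughTokens(current + file) > budget) {
      chunks.push(current);
      current = '';
    }
    current += file;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

A single oversized file still lands in one chunk here; a production splitter would also break within files at hunk boundaries.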
Which model should I use?
| Provider | Recommended Model | Notes |
|---|---|---|
| Ollama | qwen2.5-coder:14b | Default; best local security review quality. Lighter: llama3.2, deepseek-coder:6.7b |
| Gemini | gemini-2.0-flash | Free, fast |
| Groq | llama-3.3-70b-versatile | Free, very fast |
| DeepSeek | deepseek-chat | Cheap; deep analysis: deepseek-reasoner |
| OpenAI | gpt-4o / gpt-4o-mini | Budget option: gpt-4o-mini |
| Anthropic | claude-opus-4-6 | Highest quality; daily use: claude-sonnet-4 or claude-haiku-3-5 |
Change via set_provider and set_model in Cursor, or update the relevant *Model field in config.
License
MIT — Free to use, modify, and distribute.