Privacy-first AI code reviews with multi-LLM support
Open-source · MIT Licensed · No telemetry · No SaaS
- Reads ~/.claude/sessions/, ~/.gemini/google_accounts.json, and ~/.codex/auth.json only to detect login state; never transmits credentials.
- provider.base_url (single-LLM fallback): your account, your key, your quota.
- Surfaces major / critical findings so hooks can block the commit.
- Not /review or /simplify: those are single-prompt commands; this is multi-LLM diff orchestration with merge and storage.
- No SaaS signup, no telemetry, no auto-update calls. Your diff goes only to the LLM(s) you authenticate; point at Ollama for a fully-offline review.
Run reviews with Claude, Gemini, or Codex in parallel. Get consolidated findings with deduplication.
Works with free tiers from Claude and Gemini. No credit card required.
Single binary, no Docker/Node required. Works on macOS, Linux, and Windows.
Switch between LLM providers freely. Works with any OpenAI-compatible API.
Catch issues before they're committed. Perfect for team workflows.
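One way to catch issues before commit is a git pre-commit hook. The sketch below assumes `local-review staged` exits non-zero when it reports blocking findings; check the tool's docs for the actual exit-code contract before relying on it:

```sh
#!/bin/sh
# .git/hooks/pre-commit -- hypothetical sketch, not shipped by local-review.
# Reviews staged changes and aborts the commit if the tool
# signals blocking (major / critical) findings via its exit code.
local-review staged || {
  echo "local-review found blocking issues; commit aborted." >&2
  exit 1
}
```

Make the hook executable (`chmod +x .git/hooks/pre-commit`) for git to run it.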
Install
brew install mshykov/tap/local-review
Set up — picks a provider, writes .local-review.yml
local-review init
Review staged changes (init tells you which env var to export first)
local-review staged
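The config that `init` writes might look like the following sketch. The key names here are illustrative assumptions, not the tool's documented schema; run `local-review init` to get the real file:

```yaml
# .local-review.yml -- hypothetical example; actual keys may differ.
providers:
  claude:
    enabled: true
  gemini:
    enabled: true
  codex:
    enabled: false
# Single-LLM fallback against any OpenAI-compatible endpoint,
# e.g. a local Ollama server for a fully-offline review:
provider:
  base_url: http://localhost:11434/v1
```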
- Claude: free tier via the Claude CLI · ✓ FREE · Default enabled
- Gemini: free API key from Google · ✓ FREE · Default enabled
- Codex: ChatGPT Plus or OpenAI API key · $ OpenAI · Enabled when authenticated. You pay OpenAI; local-review is 100% free.
Works on any language the LLM understands. Specialized prompt packs add language-specific idiom checks, security patterns, and pitfalls.
- default pack: universal review rules
- .rs: specialized pack
- .go: specialized pack
- .ts / .tsx: specialized pack
- .py: specialized pack

More language packs on the way; add yours.
Every rule local-review applies is published as a human-readable checklist — OWASP 2025-aligned, with severity tiers, concrete measurables, and the specialist-review prompts you don't get from generic checklists.
Use it for manual reviews, paste it into your team wiki, or run local-review review to get the same rules executed by an LLM in seconds.