local-review

Privacy-first AI code reviews with multi-LLM support

Open-source · MIT Licensed · No telemetry · No SaaS

What it is, what it isn't

What it is

  • A local CLI that reviews a git diff using LLMs you've already authenticated
  • An orchestrator that runs the Claude / Gemini / Codex CLIs in parallel and merges findings into one report
  • BYOK — your API key; requests go direct to the vendor (no middleman server)
  • Reads your auth state from local files only — env vars and the parsed contents of ~/.claude/sessions/, ~/.gemini/google_accounts.json, and ~/.codex/auth.json (to detect login state); never transmits credentials
  • Each LLM call goes to the endpoint you configured — a vendor CLI subprocess (multi-LLM mode) or HTTP to your provider.base_url (single-LLM fallback); your account, your key, your quota
  • A pre-commit gate — exits non-zero on major / critical findings so hooks can block the commit
  • A single Go binary — no Node, no Python, no Docker, no telemetry

What it isn't

  • A keychain scraper or credential-exfil tool — auth files are read locally to determine readiness, never sent anywhere
  • A proxy, mirror, or reseller running between you and the LLM — no local-review-operated relay, no shared capacity
  • A replacement for Claude's /review or /simplify — those are single-prompt commands; this is multi-LLM diff orchestration with merge and storage
  • "Code never leaves your machine" — the diff still goes to whichever LLM you authenticate (run Ollama for true offline)
  • A SaaS — no hosted dashboard, no account, no team collaboration features
  • A linter or static analyzer — it's LLM-based, with the heuristic tradeoffs that implies
  • A chat interface — reads a diff, prints findings, exits

Why local-review?

Privacy First

No SaaS signup, no telemetry, no auto-update calls. Your diff goes only to the LLM(s) you authenticate — point at Ollama for a fully-offline review.

Multi-LLM

Run reviews with Claude, Gemini, or Codex in parallel. Get consolidated findings with deduplication.

Free Options

Works with free tiers from Claude and Gemini. No credit card required.

Fast Setup

Single binary, no Docker/Node required. Works on macOS, Linux, and Windows.

No Lock-In

Switch between LLM providers freely. Works with any OpenAI-compatible API.

Pre-Commit Hooks

Catch issues before they're committed. Perfect for team workflows.

Quick Start

1. Install:

   brew install mshykov/tap/local-review

2. Set up — picks a provider, writes .local-review.yml:

   local-review init

3. Review staged changes (init tells you which env var to export first):

   local-review staged
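For the single-LLM fallback, local-review init writes .local-review.yml with your provider endpoint. Only the provider.base_url key is confirmed above; the other fields in this sketch are assumptions, shown pointed at a local Ollama server for a fully-offline review:

```yaml
# .local-review.yml — illustrative sketch, not the authoritative schema.
provider:
  base_url: http://localhost:11434/v1   # any OpenAI-compatible endpoint; here, local Ollama
  model: qwen2.5-coder                  # assumed field: whichever model your endpoint serves
```

Swap base_url for your vendor's API URL to route reviews there instead; either way the diff goes only to the endpoint you chose.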

Supported LLMs

  • Claude — free tier via the Claude CLI (enabled by default)
  • Gemini — free API key from Google (enabled by default)

Supported Languages

Works on any language the LLM understands. Specialized prompt packs add language-specific idiom checks, security patterns, and pitfalls.

  • Any language — default pack — universal review rules
  • Rust (.rs) — specialized pack
  • Go (.go) — specialized pack
  • TypeScript (.ts / .tsx) — specialized pack
  • Python (.py) — specialized pack

More language packs are on the way — add yours.

Want the checklist behind the tool?

Every rule local-review applies is published as a human-readable checklist — OWASP 2025-aligned, with severity tiers, concrete measurables, and the specialist-review prompts you don't get from generic checklists.

Use it for manual reviews, paste it into your team wiki, or run local-review review to get the same rules executed by an LLM in seconds.

Installation

📦 Download Binary

Download from GitHub Releases — available for macOS, Linux, and Windows; no dependencies required

🔧 Go Install

go install github.com/mshykov/local-review/cmd/local-review@latest