# Diff Hound

Diff Hound is an automated, AI-powered code review tool that posts intelligent, contextual comments directly on pull requests across supported platforms.
It supports GitHub today; GitLab and Bitbucket support is planned.

## Features
- 🧠 Automated code review using OpenAI or Ollama (upcoming: Claude, DeepSeek, Gemini)
- 💬 Posts inline or summary comments on pull requests
- 🔌 Plug-and-play architecture for models and platforms
- ⚙️ Configurable with JSON/YAML config files and CLI overrides
- 🛠️ Designed for CI/CD pipelines and local runs
- 🧠 Tracks the last reviewed commit to avoid duplicate reviews
- 🖥️ Local diff mode: review local changes without a remote PR
## Installation

Install globally from npm:

```bash
npm install -g diff-hound
```

Or build from source:

```bash
git clone https://github.com/runtimebug/diff-hound.git
cd diff-hound
npm install
npm run build
npm link
```

## Environment Setup

Copy the provided `.env.example` to `.env`:

```bash
cp .env.example .env
```

Then fill in your keys / tokens:
```bash
# Platform tokens
GITHUB_TOKEN=your_github_token # Requires 'repo' scope

# AI Model API keys (set one depending on your provider)
OPENAI_API_KEY=your_openai_key
```

🔑 `GITHUB_TOKEN` is used to fetch PRs and post comments. `OPENAI_API_KEY` is used to generate code reviews via GPT.

💡 Using Ollama? No API key needed: just have Ollama running locally. See Ollama (Local Models) below.
## Configuration

You can define your config in `.aicodeconfig.json` or `.aicode.yml`:

```jsonc
{
  "provider": "openai",
  "model": "gpt-4o", // Or any other OpenAI model
  "endpoint": "", // Optional: custom endpoint
  "gitProvider": "github",
  "repo": "your-username/your-repo",
  "dryRun": false,
  "verbose": false,
  "rules": [
    "Prefer const over let when variables are not reassigned",
    "Avoid reassigning const variables",
    "Add descriptive comments for complex logic",
    "Remove unnecessary comments",
    "Follow the DRY (Don't Repeat Yourself) principle",
    "Use descriptive variable and function names",
    "Handle errors appropriately",
    "Add type annotations where necessary"
  ],
  "ignoreFiles": ["*.md", "package-lock.json", "yarn.lock", "LICENSE", "*.log"],
  "commentStyle": "inline",
  "severity": "suggestion"
}
```

Or the YAML equivalent:

```yaml
provider: openai
model: gpt-4o # Or any other OpenAI model
endpoint: "" # Optional: custom endpoint
gitProvider: github
repo: your-username/your-repo
dryRun: false
verbose: false
commentStyle: inline
severity: suggestion
ignoreFiles:
  - "*.md"
  - package-lock.json
  - yarn.lock
  - LICENSE
  - "*.log"
rules:
  - Prefer const over let when variables are not reassigned
  - Avoid reassigning const variables
  - Add descriptive comments for complex logic
  - Remove unnecessary comments
  - Follow the DRY (Don't Repeat Yourself) principle
  - Use descriptive variable and function names
  - Handle errors appropriately
  - Add type annotations where necessary
```

Then run:

```bash
diff-hound
```

Or override config values via CLI:

```bash
diff-hound --repo=owner/repo --provider=openai --model=gpt-4o --dry-run
```

Add `--dry-run` to print comments to the console instead of posting them.
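Since Diff Hound is designed for CI/CD pipelines, a GitHub Actions job could run it on every pull request. The sketch below is a hypothetical example, not a shipped workflow: the file name, job layout, and the `OPENAI_API_KEY` secret name are assumptions you would adapt to your repository (`secrets.GITHUB_TOKEN` is provided automatically by Actions).

```yaml
# .github/workflows/review.yml (hypothetical example)
name: AI Code Review
on: pull_request

permissions:
  contents: read
  pull-requests: write # needed so the token can post PR comments

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g diff-hound
      - run: diff-hound --repo=${{ github.repository }} --provider=openai --model=gpt-4o
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```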
## Local Diff Mode

Review local git changes without a remote PR or GitHub token. Only an LLM API key is needed.

```bash
# Review changes between current branch and main
diff-hound --local --base main

# Review the last commit
diff-hound --local --base HEAD~1

# Review changes between two specific refs
diff-hound --local --base main --head feature-branch

# Review a patch file directly
diff-hound --patch changes.patch
```

Local mode always runs in dry-run: output goes to your terminal. If `--base` is omitted, it defaults to the upstream tracking branch or `HEAD~1`.
## Ollama (Local Models)

Run fully offline code reviews using Ollama: no API key, no cloud, zero cost.

Prerequisites: install and start Ollama, then pull a model:

```bash
# Install Ollama (see https://ollama.com/download)
ollama serve       # Start the Ollama server
ollama pull llama3 # Pull a model (one-time)
```

Run a review with Ollama:

```bash
# Review local changes using Ollama
diff-hound --provider ollama --model llama3 --local --base main

# Use a code-specialized model
diff-hound --provider ollama --model codellama --local --base main

# Point to a remote Ollama instance
diff-hound --provider ollama --model llama3 --model-endpoint http://my-server:11434 --local --base main

# Increase the timeout for large diffs on slower models (default: 120000 ms)
diff-hound --provider ollama --model llama3 --request-timeout 300000 --local --base main
```

Or set it in your config file (`.aicodeconfig.json`):

```json
{
  "provider": "ollama",
  "model": "llama3",
  "endpoint": "http://localhost:11434"
}
```

💡 Ollama's default endpoint is `http://localhost:11434`. You only need `--model-endpoint` / `endpoint` if Ollama runs on a different host or port.
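If you keep your config in YAML instead, the same Ollama settings in `.aicode.yml` would look like this (assuming the same keys as the JSON config shown earlier):

```yaml
provider: ollama
model: llama3
endpoint: http://localhost:11434
```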
## Example Output

```
== Comments for PR #42: Fix input validation ==

src/index.ts:17 →
Prefer `const` over `let` since `userId` is not reassigned.

src/utils/parse.ts:45 →
Consider refactoring to reduce nesting.
```

## CLI Flags

| Flag | Short | Description |
| --- | --- | --- |
| `--provider` | `-p` | AI model provider (`openai`, `ollama`) |
| `--model` | `-m` | AI model (e.g. `gpt-4o`, `llama3`) |
| `--model-endpoint` | `-e` | Custom API endpoint for the model |
| `--git-provider` | `-g` | Repo platform (default: `github`) |
| `--repo` | `-r` | GitHub repo in `owner/repo` format |
| `--comment-style` | `-s` | `inline` or `summary` |
| `--dry-run` | `-d` | Don't post comments, only print |
| `--verbose` | `-v` | Enable debug logs |
| `--config-path` | `-c` | Custom config file path |
| `--local` | `-l` | Review local git diff (always dry-run) |
| `--base` | | Base ref for local diff (branch/commit) |
| `--head` | | Head ref for local diff (default: `HEAD`) |
| `--patch` | | Path to a patch file (implies `--local`) |
| `--request-timeout` | | Request timeout in ms (default: `120000`) |
## Project Structure

```
diff-hound/
├── bin/            # CLI entrypoint
├── src/
│   ├── cli/        # CLI argument parsing
│   ├── config/     # JSON/YAML config handling
│   ├── core/       # Diff parsing, formatting
│   ├── models/     # AI model adapters (OpenAI, Ollama)
│   ├── platforms/  # GitHub, local git, etc.
│   ├── schemas/    # Structured output types and validation
│   └── types/      # TypeScript types
├── .env
└── README.md
```
## Extending

To add a new AI model adapter, create a class in `src/models/` that implements the `CodeReviewModel` interface.
To add a new platform adapter, create a class in `src/platforms/` that implements the `CodeReviewPlatform` interface.
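As an illustration, a minimal model adapter might look like the following. This is a hypothetical sketch only: the real `CodeReviewModel` interface lives in `src/types/` and its exact shape may differ, so the field and method names below are assumptions.

```typescript
// Hypothetical sketch: names and shapes are assumptions, not the real interface.
interface ReviewComment {
  file: string; // path of the reviewed file
  line: number; // line number the comment applies to
  body: string; // the review comment text
}

interface CodeReviewModel {
  name: string;
  review(diff: string, rules: string[]): Promise<ReviewComment[]>;
}

// A toy adapter that flags `var` declarations on added diff lines,
// just to show where a real model/API call would go.
class KeywordModel implements CodeReviewModel {
  name = "keyword";

  async review(diff: string, rules: string[]): Promise<ReviewComment[]> {
    const comments: ReviewComment[] = [];
    diff.split("\n").forEach((text, i) => {
      if (text.startsWith("+") && text.includes("var ")) {
        comments.push({
          file: "unknown", // a real adapter would track the current file header
          line: i + 1,     // 1-based position within the diff, for illustration
          body: "Prefer `const`/`let` over `var`.",
        });
      }
    });
    return comments;
  }
}
```

A real adapter would replace the keyword check with a call to the model provider and map its response back onto diff positions.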
## Roadmap

- Structured logging (pino)
- GitLab and Bitbucket platform adapters
- Anthropic and Gemini model adapters
- Webhook server mode and GitHub Action
- 📦 Docker image for self-hosting
- 🧩 Plugin system with pipeline hooks
- Repo indexing and context-aware reviews
## Contributing

We welcome contributions! See CONTRIBUTING.md for:
- Branching and commit conventions (Angular style)
- PR workflow (squash-merge)
- How to add new platform and model adapters
## License

MIT. Use freely, contribute openly.