How to Set Up OpenClaw for Code Review Automation
6 min read
Manual code reviews are a bottleneck. Reviewers get fatigued, style issues slip through, and security vulnerabilities hide in large diffs. OpenClaw skills can automate the repetitive parts of code review so your human reviewers focus on architecture, logic, and design instead of formatting and common mistakes.
This guide walks through setting up a complete automated code review pipeline using OpenClaw skills. You will configure PR analysis, style enforcement, security scanning, and automated comment generation — all driven by skills from the OpenClaw Bazaar skills directory.
Why Automate Code Reviews
Every pull request needs a baseline level of scrutiny. Are the imports organized? Did someone leave a console.log in production code? Is there a SQL injection vulnerability in that new query? These checks are important, but they are also mechanical. A human reviewer should not spend 20 minutes checking indentation when an AI agent can do it in seconds.
Automated code review does not replace human review. It augments it. The agent handles the checklist items while your senior engineers focus on whether the approach makes sense, whether the abstraction is right, and whether the code will be maintainable six months from now.
Setting Up the Base Configuration
Start by installing the core code review skill:
openclaw skill install code-review-agent
This skill teaches your agent to analyze pull request diffs, identify common issues, and generate structured review feedback. It handles the fundamentals: unused variables, missing error handling, inconsistent naming, and dead code detection.
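As an illustration of those fundamentals, here is a hypothetical snippet with the kinds of issues the skill would report (the function and its flaws are constructed for this example, not output from the skill):

```typescript
import fs from "node:fs";

// Each line below demonstrates one category of finding.
export function load_config(configPath: string): string {
  const retryCount = 3; // unused variable: assigned but never read
  // missing error handling: readFileSync throws on a bad path and
  // nothing here catches it
  return fs.readFileSync(configPath, "utf8");
  // naming: 'load_config' breaks the usual camelCase convention
}
```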
Next, create a review configuration file in your project root:
# .openclaw/review-config.yaml
review:
  auto_trigger: true
  trigger_on:
    - pull_request.opened
    - pull_request.synchronize
  severity_levels:
    - critical   # Security vulnerabilities, data loss risks
    - warning    # Performance issues, anti-patterns
    - suggestion # Style improvements, readability
  ignore_paths:
    - "*.lock"
    - "dist/**"
    - "node_modules/**"
    - "*.generated.ts"
This configuration tells OpenClaw when to run reviews and what to skip. Lock files and generated code create noise in review output, so excluding them keeps feedback focused.
Automated Style Checking
Style debates waste review cycles. Tabs versus spaces, trailing commas, import ordering — these should be settled once and enforced automatically. Install the style enforcement skill:
openclaw skill install style-enforcer
Configure it to match your project standards:
# .openclaw/style-rules.yaml
style:
  language: typescript
  rules:
    naming:
      variables: camelCase
      constants: UPPER_SNAKE_CASE
      types: PascalCase
      files: kebab-case
    imports:
      order:
        - builtin
        - external
        - internal
        - relative
      enforce_type_imports: true
    formatting:
      max_line_length: 100
      trailing_commas: all
      semicolons: always
When the agent reviews a PR, it checks every changed file against these rules. If someone introduces a variable named UserData in a context where camelCase is expected, the agent flags it with a specific comment on the offending line.
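As a quick sketch of those naming rules in practice, here is what would pass and what would be flagged (identifiers invented for this example):

```typescript
// Flagged: 'UserData' is a plain variable, so camelCase is expected;
// PascalCase here reads like a type name
const UserData = { id: 1 };

// Passes: camelCase variable, UPPER_SNAKE_CASE constant, PascalCase type
const userData = { id: 1 };
const MAX_RETRIES = 3;
interface UserRecord {
  id: number;
}
```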
The style-enforcer skill integrates with existing tools too. If your project uses ESLint or Prettier, the skill reads those configs and aligns its feedback accordingly, avoiding duplicate or contradictory suggestions.
Security Scanning
Security issues are the highest-value catches in automated review. Install the security scanning skill:
openclaw skill install security-review
This skill teaches your agent to scan for common vulnerability patterns:
// The agent flags patterns like these:
// SQL injection risk — string concatenation in queries
const query = `SELECT * FROM users WHERE id = ${userId}`;
// XSS vulnerability — unsanitized user input in HTML
element.innerHTML = userInput;
// Hardcoded secrets
const apiKey = "sk-live-abc123def456";
// Insecure randomness for security-sensitive operations
const token = Math.random().toString(36);
// Path traversal risk
const filePath = path.join(uploadDir, req.params.filename);
For each finding, the agent generates a review comment that explains the vulnerability, rates its severity, and suggests a fix. A SQL injection finding might produce:
**[CRITICAL] SQL Injection Risk**
This query uses string interpolation to include user input directly
in the SQL statement. An attacker could manipulate `userId` to execute
arbitrary SQL commands.
**Fix:** Use parameterized queries instead:
```typescript
const result = await db.query(
  "SELECT * FROM users WHERE id = $1",
  [userId]
);
```
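The other flagged patterns have similarly mechanical fixes. A minimal sketch, assuming Node's built-in crypto and path modules (the function names here are mine, not the skill's):

```typescript
import crypto from "node:crypto";
import path from "node:path";

// Insecure randomness fix: crypto.randomBytes instead of Math.random
export function makeToken(): string {
  return crypto.randomBytes(32).toString("hex");
}

// Path traversal fix: resolve the requested name and confirm it stays
// inside the upload directory before touching the filesystem
export function safeJoin(uploadDir: string, filename: string): string {
  const base = path.resolve(uploadDir);
  const resolved = path.resolve(base, filename);
  if (!resolved.startsWith(base + path.sep)) {
    throw new Error("path escapes upload directory");
  }
  return resolved;
}

// XSS fix: assign plain text instead of parsing HTML
export function renderUserInput(el: { textContent: string }, input: string): void {
  el.textContent = input;
}
```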
The security-review skill covers the OWASP Top 10 categories and includes language-specific checks for JavaScript, TypeScript, Python, Go, Java, and Ruby. Browse additional security skills in the OpenClaw Bazaar skills directory for framework-specific coverage.
Generating Review Comments
Raw findings are useful, but structured review comments are better. Install the review comments skill:
openclaw skill install review-comments
This skill formats the agent's findings into GitHub-compatible review comments. It supports inline comments on specific lines, summary comments on the PR itself, and threaded discussions.
Configure the comment format:
# .openclaw/comment-config.yaml
comments:
  format: github
  inline: true
  summary: true
  summary_template: |
    ## Automated Review Summary
    **Files reviewed:** {{files_count}}
    **Issues found:** {{issues_count}}
    ### Critical ({{critical_count}})
    {{#critical_issues}}
    - [ ] {{file}}:{{line}} — {{message}}
    {{/critical_issues}}
    ### Warnings ({{warning_count}})
    {{#warning_issues}}
    - {{file}}:{{line}} — {{message}}
    {{/warning_issues}}
    ### Suggestions ({{suggestion_count}})
    {{#suggestion_issues}}
    - {{file}}:{{line}} — {{message}}
    {{/suggestion_issues}}
  batch_comments: true
  max_comments_per_review: 25
The batch_comments option groups all findings into a single review submission rather than posting them one at a time, which keeps your PR timeline clean. The max_comments_per_review cap prevents overwhelming authors on large PRs — the agent prioritizes by severity and groups lower-severity items into the summary.
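Rendered against that template, a summary comment might look like this (file names, line numbers, and counts are invented for illustration):

```markdown
## Automated Review Summary
**Files reviewed:** 4
**Issues found:** 3
### Critical (1)
- [ ] src/db/users.ts:42 — SQL injection risk: string interpolation in query
### Warnings (1)
- src/api/upload.ts:17 — Unvalidated filename passed to path.join
### Suggestions (1)
- src/utils/format.ts:8 — Variable 'UserData' should be camelCase
```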
CI/CD Integration
To run automated reviews on every pull request, add OpenClaw to your CI pipeline:
# .github/workflows/openclaw-review.yaml
name: OpenClaw Code Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: read
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install OpenClaw
        run: curl -fsSL https://openclaw.dev/install.sh | bash
      - name: Run automated review
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENCLAW_API_KEY: ${{ secrets.OPENCLAW_API_KEY }}
        run: |
          openclaw review \
            --pr ${{ github.event.pull_request.number }} \
            --repo ${{ github.repository }} \
            --post-comments
This workflow triggers on every new PR and every push to an existing PR. The agent checks out the code, analyzes the diff, and posts review comments directly on the pull request.
Customizing Review Priorities
Different teams care about different things. A fintech team might prioritize security findings above all else. A design system team might weight style consistency higher. Customize your review priorities:
# .openclaw/review-priorities.yaml
priorities:
  - category: security
    weight: 10
    block_merge: true
    severity_threshold: critical
  - category: performance
    weight: 7
    block_merge: false
  - category: style
    weight: 3
    block_merge: false
  - category: documentation
    weight: 2
    block_merge: false
With block_merge: true on security issues, the agent sets a failing check status on PRs with critical security findings. This prevents merging until a human reviews and resolves the finding.
Testing Your Review Setup
Before going live, test the setup on a sample PR. Create a branch with intentional issues:
git checkout -b test/review-setup
Add a file with known problems — a hardcoded secret, an unused import, a style violation, and a SQL injection pattern. Push the branch, open a PR, and verify the agent catches each issue and posts appropriate comments.
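A seed file for that branch might look like the following. The db parameter shape is a stand-in so the snippet stays self-contained, and the secret is the fake value from the earlier example:

```typescript
// sample-issues.ts — intentionally flawed code for testing the pipeline
import fs from "node:fs"; // unused import: should be flagged

const API_Key = "sk-live-abc123def456"; // hardcoded secret + naming violation

// SQL injection pattern: string interpolation in a query
export function getUser(
  db: { query: (sql: string) => unknown },
  userId: string
) {
  return db.query(`SELECT * FROM users WHERE id = ${userId}`);
}
```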
Review the agent's output for false positives. If it flags something incorrectly, adjust your configuration or add an ignore rule. A good automated review should have a low false-positive rate — developers stop reading comments from a bot that cries wolf.
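For example, if the agent keeps flagging intentionally bad code in checked-in test fixtures, extend ignore_paths in the review config from earlier (the fixtures path is hypothetical; use whatever directory your project actually has):

```yaml
# .openclaw/review-config.yaml
review:
  ignore_paths:
    - "*.lock"
    - "dist/**"
    - "node_modules/**"
    - "*.generated.ts"
    - "test/fixtures/**"   # known-bad fixture files trip the scanners
```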
Combining Review Skills for Full Coverage
The skills covered here work independently, but they are most effective as a set. A complete review stack might look like:
openclaw skill install code-review-agent
openclaw skill install style-enforcer
openclaw skill install security-review
openclaw skill install review-comments
openclaw skill install performance-analyzer
This gives your agent the ability to catch style issues, security vulnerabilities, performance anti-patterns, and general code quality problems — then format everything into clean, actionable review comments.
The result is faster review cycles, fewer bugs reaching production, and happier reviewers who spend their time on the parts of code review that actually require human judgment.
Browse the Skills Directory
Find the right skill for your workflow. The OpenClaw Bazaar skills directory has over 2,300 community-rated skills — searchable, sortable, and free to install.
Try a Pre-Built Persona
Don't want to configure everything from scratch? OpenClaw personas come pre-loaded with skills, memory templates, and workflows designed for specific roles. Compare personas →