
# I built an AI PR reviewer and it already caught bugs I missed
I've been doing code review the same way for years: read the diff, run it locally if something looks tricky, leave a comment or two, merge. That works fine until you're tired, distracted, or too close to the code to see the obvious problem.

So I built claude-pr-reviewer. It's a GitHub Action that feeds your PR diff to Claude and posts structured feedback as a comment. Not a linter. Not a style checker. Actual reasoning about what the code does, what it gets wrong, and whether it should merge.

The setup is minimal:

```yaml
name: Claude PR Review
on:
  pull_request:
    types: [opened, synchronize]
permissions:
  pull-requests: write
  contents: read
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: indoor47/claude-pr-reviewer@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

One secret. One workflow file. Every PR gets reviewed automatically.

## What the output actually looks like

The review posts as a PR comment, structured into sections. Here's a real example from a rate-limiting
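For the curious, the core loop of a reviewer like this is small: read the PR diff, wrap it in a prompt asking for sectioned feedback, send it to Claude, and post the response back through the GitHub REST API. Here's a minimal sketch in Python of the first and last steps, not the action's actual source; the prompt wording and the `post_pr_comment` helper are my own illustration (the Claude call in between would use the Anthropic Messages API):

```python
import json
import urllib.request


def build_review_prompt(diff: str) -> str:
    """Wrap a unified diff in instructions asking for sectioned review output."""
    return (
        "Review this pull request diff. Respond with sections for "
        "bugs, risks, and a merge recommendation.\n\n"
        "Diff:\n" + diff
    )


def post_pr_comment(repo: str, pr_number: int, body: str, token: str) -> None:
    """Post a comment on a PR via the GitHub REST API.

    PR conversation comments go through the issues endpoint, which is why
    the URL says /issues/ even though this targets a pull request.
    """
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        data=json.dumps({"body": body}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```

In the workflow above, the `GITHUB_TOKEN` that Actions injects is what makes the `pull-requests: write` permission matter: without it, that comment POST would be rejected.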
