The Review Gap

via Dev.to

The bottleneck in software development has shifted from writing code to reading it. AI-generated pull requests wait 4.6 times longer for human code review than pull requests written by colleagues. The data comes from LinearB's analysis of 8.1 million pull requests across 4,800 engineering teams. Teams with high AI adoption complete 21 percent more tasks and merge 98 percent more pull requests, but review time increases 91 percent.

The Numbers

The acceptance rate for AI-generated code is 32.7 percent; for human-written code, it is 84.4 percent. AI-generated pull requests average 10.83 issues per review, while human-written code averages 6.45, about 1.7 times fewer problems per submission. Logic errors in AI code are up 75 percent. Security vulnerabilities are 1.5 to
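The headline ratios above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch using the figures quoted in the article (the variable names are illustrative, not from LinearB's report):

```python
# Figures quoted in the article; names are my own, not LinearB's.
issues_per_review_ai = 10.83
issues_per_review_human = 6.45

# AI PRs carry roughly 1.7x the issues per review of human PRs.
issue_ratio = issues_per_review_ai / issues_per_review_human
print(f"Issues per review, AI vs human: {issue_ratio:.1f}x")

# Acceptance-rate gap between human- and AI-written code, in points.
acceptance_ai = 32.7
acceptance_human = 84.4
print(f"Acceptance gap: {acceptance_human - acceptance_ai:.1f} points")
```

The 1.7x figure in the text is simply 10.83 divided by 6.45, rounded to one decimal place.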

Continue reading on Dev.to
