The peer-review crisis: how to fix an overloaded system
The peer-review system, long considered the cornerstone of scientific integrity, is under growing strain. With an explosion in research output, especially since the COVID-19 pandemic, journals and funding agencies are struggling to find enough qualified reviewers. The result is a crisis of capacity, quality and fairness, prompting a wave of innovation and debate over how to fix the system.

One of the most visible signs of the strain comes from major research facilities such as the European Southern Observatory (ESO), which operates the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope in Chile. For its next observing cycle, more than 3,000 hours of telescope time were requested, far exceeding what the 379 available nights could accommodate. To manage the load, ESO shifted from relying solely on expert panels to a distributed peer-review model, in which applicants must also evaluate competing proposals. This approach, now used by other funders such as the Volkswagen Foundation, aims to reduce the burden on individual scientists while speeding up decision-making.

The broader peer-review system faces similar challenges. Data from Publons and surveys by IOP Publishing show a steady rise in the number of review requests, with many researchers reporting heavier workloads. Turnaround times for manuscript reviews have lengthened, averaging 149 days in 2024, up from 140 in 2014. A 2024 survey found that half of researchers had received more review invitations over the previous three years; fewer than one in six felt overwhelmed, suggesting that fatigue is rising but not yet widespread.

To combat this, journals and funders are experimenting with incentives. Some now publicly track and publish review times, which has led to modest improvements, especially among senior researchers. Others offer awards or recognition for active reviewers. But evidence suggests these rewards may backfire: some reviewers complete fewer reviews after being recognized, possibly feeling they have fulfilled their duty.

The most controversial idea is paying reviewers. A 2021 study estimated that peer reviewers contributed more than 100 million hours of work in 2020, worth billions in economic value. While some argue this is fair compensation, others warn it could introduce bias or conflicts of interest. Still, pilot programs are showing promise. Critical Care Medicine offered $250 per review and saw a small rise in the share of review invitations accepted, along with faster turnaround times, without sacrificing quality. The Company of Biologists, a non-profit publisher, paid £220 per review and cut its average decision time from 38 days to just 4.6, while maintaining review quality.

Another key solution is expanding the reviewer pool. Currently, a small group of senior academics in Western countries handles the vast majority of reviews: a 2016 study found that 20% of reviewers did 69% to 94% of the work. To diversify the pool, publishers are using AI-powered tools to search databases such as Scopus for subject-matched reviewers, including early-career researchers and scientists from underrepresented regions.

Structured peer review, in which reviewers answer a set of specific, predefined questions, has also proven effective. In a pilot by Elsevier, reviewers agreed more often on key issues such as data validity and experimental design, with agreement rising from 31% to 41%. This format not only improves consistency but also helps identify gaps in reviewers' expertise, prompting them to suggest additional checks.
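Three of the mechanisms described above lend themselves to short, concrete sketches. First, distributed review of the kind ESO has adopted: each applicant is assigned a handful of competing proposals to assess, never their own. The Python sketch below is a minimal illustration of that assignment step, not ESO's actual algorithm; the proposal IDs and applicant names are invented, and real schemes add further steps such as calibrating and aggregating the grades applicants return.

```python
import random

def assign_distributed_reviews(proposals, k=3, seed=0):
    """Give each applicant k competing proposals to review.

    `proposals` maps a proposal ID to its applicant (assumed one
    proposal per applicant here). Nobody is assigned their own
    proposal, and review load is spread evenly across proposals.
    """
    rng = random.Random(seed)
    ids = list(proposals)
    load = dict.fromkeys(ids, 0)  # reviews assigned to each proposal so far
    assignments = {}
    for pid, applicant in proposals.items():
        # Eligible targets: every proposal by someone else,
        # least-reviewed first, with random tie-breaking.
        eligible = sorted(
            (q for q in ids if proposals[q] != applicant),
            key=lambda q: (load[q], rng.random()),
        )
        assignments[applicant] = eligible[:k]
        for q in eligible[:k]:
            load[q] += 1
    return assignments

proposals = {"P1": "Ada", "P2": "Ben", "P3": "Chen", "P4": "Dana", "P5": "Eve"}
for applicant, batch in assign_distributed_reviews(proposals, k=2).items():
    print(f"{applicant} reviews {batch}")
```

In deployed systems, most of the methodological subtlety lies in what happens after assignment, when the grades returned by many non-expert applicants are combined into a single ranking.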
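Second, reviewer matching. Publishers' tools query bibliographic databases such as Scopus for subject-matched candidates; one common ingredient of such tools is text similarity between a manuscript and each candidate's publication record. The sketch below uses TF-IDF and cosine similarity from scikit-learn on an invented candidate pool, standing in for a real database query.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented candidate pool: in practice these records would come from a
# bibliographic database such as Scopus.
candidates = {
    "early-career researcher A": "single-cell RNA sequencing of tumour microenvironments",
    "researcher B": "galaxy kinematics from integral-field spectroscopy",
    "researcher C": "Bayesian methods for clinical trial design",
}
manuscript = "integral-field spectroscopic survey of galaxy rotation curves"

# Rank candidates by cosine similarity between TF-IDF vectors of the
# manuscript abstract and each candidate's publication text.
names = list(candidates)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([manuscript] + [candidates[n] for n in names])
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for name, score in sorted(zip(names, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {name}")
```

A ranking like this is only a shortlist generator; editors still screen for conflicts of interest and actual availability.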
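Finally, structured review itself is easy to represent in code, which is partly why it lends itself to measurement: with a fixed question set, agreement between reviewers becomes a simple count. The questions below are illustrative, not Elsevier's actual form.

```python
from dataclasses import dataclass

# Illustrative questions; the wording used in real structured-review
# pilots differs.
QUESTIONS = [
    "Are the data sufficient to support the conclusions?",
    "Is the experimental design appropriate for the research question?",
    "Are the statistical analyses correctly applied and reported?",
]

@dataclass
class StructuredReview:
    reviewer: str
    answers: list[str]  # one of "yes" / "no" / "cannot judge" per question

def agreement(a: StructuredReview, b: StructuredReview) -> float:
    """Fraction of questions on which two reviewers gave the same answer."""
    same = sum(x == y for x, y in zip(a.answers, b.answers))
    return same / len(QUESTIONS)

# A "cannot judge" answer doubles as a flag for a gap in the
# reviewer's expertise, as the article notes.
r1 = StructuredReview("reviewer 1", ["yes", "no", "cannot judge"])
r2 = StructuredReview("reviewer 2", ["yes", "no", "yes"])
print(f"agreement: {agreement(r1, r2):.0%}")  # agreement: 67%
```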
Transparency is another growing trend. Some journals, including Nature, are planning to publish peer-review reports alongside articles, with reviewers named. Advocates argue this boosts accountability, improves the quality of reviews and may encourage more participation by elevating the status of reviewing work.

Ultimately, fixing peer review will require systemic change. Demand management, such as limiting submissions per institution, can ease pressure, but it merely shifts the burden. The most sustainable path lies in scaling up the reviewer pool, improving efficiency through structure and technology, and revaluing the work reviewers do. As Stephen Pinfield notes, the real challenge is not just reviewing more papers; it is rethinking how we distribute the labor and reward the contributions that keep science trustworthy and advancing.