Scholarly peer review has been subject to several criticisms, and various proposals for reforming the system have been suggested over the years. Many studies have emphasized the problems inherent to the process of peer review. Moreover, Ragone et al. have shown that there is a low correlation between peer review outcomes and future impact as measured by citations. Various biomedical editors in particular have expressed criticism of peer review. A Cochrane review found little empirical evidence that peer review ensures quality in biomedical research, while a second systematic review and
meta-analysis found a need for evidence-based peer review in biomedicine given the paucity of assessment of the interventions designed to improve the process. To an outsider, the anonymous, pre-publication peer review process is opaque. Certain journals are accused of not carrying out stringent peer review in order to more easily expand their customer base, particularly in journals where authors pay a fee before publication. Richard Smith, MD, former editor of the
British Medical Journal, has claimed that peer review is "ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant". Several studies have shown that peer review is biased against the provincial and against researchers from low- and middle-income countries. Many journals take months or even years to publish, and the process wastes researchers' time. As for the cost, the Research Information Network estimated the global cost of peer review at £1.9 billion in 2008.
Brezis and Birukou have further argued that the process is weakened by the fact that reviewers do not invest the same amount of time in analyzing proposals. This heterogeneity among referees, the two argue, seriously affects the whole peer-review process and makes its outcomes largely arbitrary. A 2024 review focused on economics identified several recurring concerns in the field's peer-review process, including referee overreach, strategic refereeing and conflicts of interest, prestige bias, and noisy review outcomes. Peer review publication is a common requirement for
academic tenure. This requirement has been criticised on cultural grounds. In 2011,
University of British Columbia assistant law professor Lorna McCue argued that the emphasis on peer-reviewed publication was culturally inappropriate, as it did not recognize the importance of Indigenous oral traditions. In 2018, the
British Columbia Human Rights Tribunal found that this complaint was not justified. There is an ongoing discussion about a peer-review crisis. In 2022,
Inside Higher Ed reported a serious shortage of scholars to review submitted articles and bigger structural problems amplified by the COVID-19 pandemic.
==Tendency to discourage innovative projects==
Brezis and Birukou have argued that a major issue in the peer review process is that referees display
homophily in their taste and perception of innovative ideas. This means that reviewers who develop conventional ideas tend to give low grades to more innovative projects, while reviewers who develop innovative ideas tend, by homophily, to give higher grades to innovative projects. "A common informal view is that it is easier to obtain funds for conventional projects. Those who are eager to get funding are not likely to propose radical or unorthodox projects. Since you don't know who the referees are going to be, it is best to assume that they are middle-of-the-road. Therefore, the middle-of-the-road application is safer."

Even though experts often criticize peer review for a number of reasons, the process is still often considered the "gold standard" of science. Occasionally, however, peer review approves studies that are later found to be wrong, and only rarely are deceptive or fraudulent results discovered prior to publication. Thus, there seems to be an element of discord between the ideology of peer review and its practice. By failing to communicate that peer review is imperfect, the message conveyed to the wider public is that studies published in peer-reviewed journals are "true" and that peer review protects the literature from flawed science. A number of well-established criticisms exist of many elements of peer review. The following cases illustrate the wider impact that inappropriate peer review can have on public understanding of the scientific literature. Multiple examples across several areas of science show that scientists elevated the importance of peer review for research that was questionable or corrupted. For example,
climate change deniers have published studies in the
Energy and Environment journal, attempting to undermine the body of research that shows how human activity impacts the Earth's climate. Politicians in the United States who reject the established science of climate change have then cited this journal on several occasions in speeches and reports. At times, peer review has been exposed as a process that was orchestrated for a preconceived outcome.
The New York Times gained access to confidential peer review documents for studies sponsored by the
National Football League (NFL) that were cited as scientific evidence that brain injuries do not cause long-term harm to its players. During the peer review process, the authors of the study stated that all NFL players were part of a study, a claim that the reporters found to be false by examining the database used for the research. Furthermore,
The Times noted that the NFL sought to legitimize the studies' methods and conclusions by citing a "rigorous, confidential peer-review process" despite evidence that some peer reviewers seemed "desperate" to stop their publication. Recent research has also demonstrated that widespread industry funding for published medical research often goes undeclared and that such conflicts of interest are not appropriately addressed by peer review. A conflict of interest is also less likely to be picked up in a double-blinded review, since the reviewer does not know the identity of the authors. Another problem that peer review fails to catch is
ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with little or no change. These studies can then be used for political, regulatory, and marketing purposes. In 2010, the US Senate Finance Committee released a report which found that this practice was widespread, that it corrupted the scientific literature, and that it increased prescription rates. Ghostwritten articles have appeared in dozens of journals, involving professors at several universities.

Just as experts in a particular field have a better understanding of the value of papers published in their area, scientists are considered to have a better grasp of the value of published papers than the general public, and to see peer review as a human process, with human failings: "despite its limitations, we need it. It is all we have, and it is hard to imagine how we would get along without it". But these subtleties are lost on the general public, who are often misled into thinking that publication in a peer-reviewed journal is the "gold standard" and may erroneously equate published research with the truth. A clearer public understanding of these limitations will be needed as the scholarly publishing system confronts wider issues such as retractions and replication or reproducibility "crises".
==Views of peer review==
Peer review is often considered integral to
scientific discourse in one form or another. Its gatekeeping role is supposed to be necessary to maintain the quality of the scientific literature and to avoid unreliable results, an inability to separate signal from noise, and slow scientific progress. Shortcomings of peer review have been met with calls for even stronger filtering and more gatekeeping. A common argument in favor of such initiatives is the belief that this filter is needed to maintain the integrity of the scientific literature. Yet calls for more oversight sit uneasily with the spirit of true scholarship, which demands what Feynman describes as the "extra type of integrity that is beyond not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist." If anything, the current peer review process and academic system could penalize, or at least fail to incentivize, such integrity. Instead, the credibility conferred by the "peer-reviewed" label could diminish what Feynman calls the
culture of doubt necessary for science to operate as a self-correcting, truth-seeking process. The effects of this can be seen in the ongoing
replication crisis, hoaxes, and widespread outrage over the inefficacy of the current system. Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry. Quality research, including some of our most fundamental scientific discoveries, dates back centuries, long before peer review took its current form, and critics argue that modern technology makes gatekeeping less necessary. Such modern technology includes posting results to
preprint servers,
preregistration of studies,
open peer review, and other open science practices. In all these initiatives, the role of gatekeeping remains prominent, as if it were a necessary feature of all scholarly communication, though critics question whether it must be. In addition to concerns about the quality of work produced by well-meaning researchers, there are concerns that a truly open system would allow the literature to be populated with junk and propaganda by those with a vested interest in certain issues. A counterargument is that the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature. Some
sociologists of science argue that peer review makes the ability to publish susceptible to control by
elites and to personal jealousy. The peer review process may sometimes impede progress and may be biased against novelty. A linguistic analysis of review reports suggests that reviewers focus on rejecting the applications by searching for weak points, and not on finding the high-risk/high-gain groundbreaking ideas that may be in the proposal. Reviewers tend to be especially critical of conclusions that contradict their own
views, and lenient towards those that match them. At the same time, established scientists are more likely than others to be sought out as referees, particularly by high-prestige journals/publishers. As a result, ideas that harmonize with the established experts' are more likely to see print and to appear in premier journals than are iconoclastic or revolutionary ones. This accords with
Thomas Kuhn's well-known observations regarding
scientific revolutions. A theoretical model has been established whose simulations imply that peer review and over-competitive research funding drive mainstream opinion toward monopoly. Criticisms of traditional anonymous peer review allege that it lacks accountability, can lead to abuse by reviewers, and may be biased and inconsistent. There have also been suggestions of
gender bias in peer review, with male authors likely to receive more favorable treatment. However, a 2021 study found bias in favor of female authors in some respects and no evidence of bias in favor of male authors.
Political bias can be found in reviewer evaluations.
==Exploitation of free work==
Most academic publishers do not financially compensate reviewers for their participation in the peer-review process, which has been criticized by the academic community. Some publishers have contended that paying reviewers would not be economically feasible.
==Open access journals and peer review==
Some critics of open access (OA) journals have argued that, compared to traditional subscription journals, open access journals might utilize substandard or less formal peer review practices and that, as a consequence, the quality of scientific work in such journals will suffer. A study published in 2012 tested this hypothesis by evaluating the relative "impact" (using citation counts) of articles published in open access and subscription journals, on the grounds that members of the scientific community would presumably be less likely to cite substandard work, and that citation counts could therefore act as one indicator of whether the journal format indeed affected peer review and the quality of published scholarship. The study concluded that "OA journals indexed in Web of Science and/or Scopus are approaching the same scientific impact and quality as subscription journals, particularly in biomedicine and for journals funded by article processing charges," and the authors consequently argue that "there is no reason for authors not to choose to publish in OA journals just because of the 'OA' label".
==Failures==
Peer review fails when a peer-reviewed article contains fundamental errors that undermine at least one of its main conclusions and that could have been identified by more careful reviewers. Many journals have no procedure for dealing with peer review failures beyond publishing letters to the editor. Peer review in scientific journals assumes that the article reviewed has been honestly prepared. The process occasionally detects fraud, but it is not designed to do so. When peer review fails and a paper is published with fraudulent or otherwise irreproducible data, the paper may be
retracted. A 1998 experiment on peer review with a fictitious manuscript found that peer reviewers failed to detect some manuscript errors, and that the majority of reviewers may not notice when the conclusions of a paper are unsupported by its results.
==Fake peer review==
There have been instances where peer review was claimed to have been performed but in fact was not; this has been documented in some
predatory open access journals (e.g., the "
Who's Afraid of Peer Review?" affair) or in the case of
sponsored Elsevier journals. In November 2014, an article in
Nature exposed that some academics were submitting fake contact details for recommended reviewers to journals, so that when the publisher contacted a recommended reviewer, it was actually the original authors reviewing their own work under a false name. The
Committee on Publication Ethics issued a statement warning of the fraudulent practice. In March 2015,
BioMed Central retracted 43 articles, and in August 2015 Springer retracted 64 papers in 10 journals. The journal
Tumor Biology is another example of peer-review fraud.
==Plagiarism==
Reviewers generally lack access to raw data, but they do see the full text of the manuscript and are typically familiar with recent publications in the area. Thus, they are in a better position to detect
plagiarism of prose than fraudulent data. A few cases of such textual plagiarism by historians, for instance, have been widely publicized. On the scientific side, a poll of 3,247 scientists funded by the U.S.
National Institutes of Health found 0.3% admitted faking data and 1.4% admitted plagiarism. Additionally, 4.7% of the same poll admitted to
self-plagiarism or autoplagiarism, in which an author republishes the same material, data, or text without citing their earlier work.
• The
Soon and Baliunas controversy involved the publication in 2003 of a review study written by aerospace engineer
Willie Soon and astronomer
Sallie Baliunas in the journal
Climate Research, which was quickly taken up by the
G.W. Bush administration as a basis for amending the first
Environmental Protection Agency Report on the Environment. The paper was strongly criticized by numerous scientists for its methodology and for its misuse of data from previously published studies, prompting concerns about the peer review process of the paper. The controversy resulted in the resignation of several editors of the journal and the admission by its publisher
Otto Kinne that the paper should not have been published as it was.
• The
trapezoidal rule, in which the method of
Riemann sums for numerical integration was republished as "
Tai's model" in the diabetes research journal
Diabetes Care. The method is almost always taught in high-school calculus, and this was thus considered an example of an extremely well-known idea being re-branded as a new discovery.
• A conference organized by the
Wessex Institute of Technology was the target of an exposé by three researchers who wrote nonsensical papers (including one composed of random phrases). They reported that the papers were "reviewed and provisionally accepted" and concluded that the conference was an attempt to "sell" publication opportunities to less experienced or naive researchers. This may, however, be better described as an absence of any actual peer review rather than a failure of peer review.
• In the humanities, one of the most infamous cases of plagiarism undetected by peer review involved
Martin Stone, formerly professor of medieval and Renaissance philosophy at the
Hoger Instituut voor Wijsbegeerte of the
KU Leuven. Martin Stone managed to publish at least forty articles and book chapters that were almost entirely stolen from the work of others. Most of these publications appeared in highly rated peer-reviewed journals and book series.
• The controversial
Younger Dryas impact hypothesis, which evolved directly from pseudoscience and now forms the basis for the pseudoarchaeology of
Graham Hancock's
Ancient Apocalypse, was first published in the peer-reviewed journal
PNAS using a nonstandard review system, according to a comprehensive refutation by Holliday et al. (2023). According to this 2023 review, "Claiming evidence where none exists and providing misleading citations may be accidental, but when conducted repeatedly, it becomes negligent and undermines scientific advancement as well as the credibility of science itself. Also culpable is the failure of the peer review process to prevent such errors of fact from entering the literature. The Proceedings of the National Academy of Sciences 'contributed review' system for National Academy members...is at least partially responsible. The 'pal reviews' (as some refer to them) were significantly curtailed in 2010, in part due to the YDIH controversy."
==Proposed alternatives==
Other attempts to reform the peer review process originate, among others, from the fields of
metascience and
journalology. Reformers seek to increase the reliability and efficiency of the peer review process and to provide it with a scientific foundation. Alternatives to common peer review practices have been put to the test, in particular
open peer review, where the comments are visible to readers, generally with the identities of the peer reviewers disclosed as well, e.g.,
F1000,
eLife,
BMJ, and
BioMed Central. In the case of eLife, peer review is used not for deciding whether to publish an article, but for assessing its importance and reliability.
==In popular culture==