A particular concern in peer review is "role duality": participants act simultaneously as evaluators and as the evaluated. Research illustrates that holding both roles at once biases people as evaluators, since they engage in strategic behaviour to increase their own chances of being evaluated positively. Journals such as College Composition and Communication tend to experience problems in peer review because of the diversity of their contributors and varying degrees of reviewer bias, which can lead to conflicts between reviewers. Teachers have also expressed disdain for peer review, with many claiming that it wastes class time and is unimportant if students already know what they will receive for an assignment. Such critiques can lead students to believe that peer review is pointless. This is particularly evident in university classrooms, where teachers are the most common source of writing feedback during students' years of study and their comments are often highly valued. Because of the teacher's position of authority, students may be influenced to produce work that aligns with the professor's viewpoints; the perceived effectiveness of feedback largely stems from that authority. Benjamin Keating, in his article "A Good Development Thing: A Longitudinal Analysis of Peer Review and Authority in Undergraduate Writing," conducted a longitudinal study comparing two groups of students (one majoring in writing, one not) to explore students' perceptions of authority. Based on extensive analysis of student texts, the study concludes that students in non-writing majors tend to undervalue mandatory in-class peer review, while writing majors value classmates' comments more. This suggests that effective peer review requires a certain level of expertise: for non-professional writers, peer feedback may be overlooked, reducing its effectiveness. Further critiques of peer review have highlighted the vulnerability of editorial structures in public knowledge platforms such as Wikipedia.
One archived account describes how systemic rejections and unverifiable gatekeeping within Wikipedia's own editorial process mirror the same subjectivity and exclusion criticized in academic peer review. Elizabeth Ellis Miller, Cameron Mozafari, Justin Lohr, and Jessica Enoch state, "While peer review is an integral part of writing classrooms, students often struggle to effectively engage in it." Based on research conducted during peer review sessions in university classrooms, the authors identify several reasons for the inefficiency of peer review:
* Lack of training: Students, and even some faculty members, may not have received sufficient training to provide constructive feedback. Without guidance on what to look for and how to offer helpful comments, peer reviewers may find it challenging to provide meaningful insights.
* Limited engagement: Students may treat peer review sessions as obligatory tasks rather than valuable learning opportunities, participating with minimal enthusiasm or involvement. This lack of investment can result in superficial feedback that fails to address underlying issues in the writing.
* Time constraints: Instructors often allocate limited class time for peer review activities, which may not be adequate for thorough reviews of peers' work. Feedback may therefore be rushed or superficial, lacking the depth required for meaningful improvement.
This research demonstrates that, beyond issues of expertise, numerous practical factors contribute to students' poor performance in peer review sessions, so that feedback from peer reviewers may not effectively assist authors. The study also highlights the influence of emotions, suggesting that neither reviewers nor authors can completely eliminate emotion when giving and receiving feedback.
This can lead reviewers and authors to approach a text with either a positive or a negative attitude, producing selective or biased feedback and further impairing their ability to evaluate the work objectively. Subjective emotion, in other words, may also affect the effectiveness of peer review feedback. Pamela Bedore and Brian O'Sullivan likewise hold a skeptical view of peer review in most writing contexts. Comparing different forms of peer review after systematic training at two universities, they conclude that "the crux is that peer review is not just about improving writing but about helping authors achieve their writing vision." Feedback from non-professional writers during peer review sessions often remains superficial, such as simple grammar corrections and questions, reflecting a focus only on improving writing skills. Meaningful peer review, by contrast, involves understanding the author's writing intent, posing valuable questions and perspectives, and guiding the author toward achieving their writing goals. The (possibly not declared) use of
artificial intelligence to assist with or perform peer review has been confirmed by interviews in a survey by Nature. There are a few documented cases of scholars who inserted human-invisible prompts in their preprints in order to favour a positive review in the event of an automated refereeing process.

== Alternatives ==