Peer Review: Is interactive collaboration the future?
By Anna Badner
Peer review is at the heart of scientific publishing. It is the primary mechanism for evaluating articles prior to publication and serves as a quality control system for completed research. In its current form, peer review involves the assessment of submitted manuscripts by independent experts in the field. The process is either double-blind, where both the reviewer and the author are anonymous, or single-blind, where only the reviewer is anonymous. Although peer review is widely accepted as an effective means of research validation, it is not without disadvantages and has been widely scrutinized within the scientific community. For these reasons, the IMS Magazine decided to review the history of peer review, comment on its current limitations, and describe the innovative steps being taken to improve the process.
The history of peer review
Philosophical Transactions, one of the first scientific journals, was founded by Henry Oldenburg in 1665. Although it was an official journal of the Royal Society of London, article selection and review were largely in the hands of the editor. It was not until 1752, when the Society took over editorial responsibility, that manuscripts were assessed by a selected group of members familiar with the subject matter. By 1854, the Society had established a monthly periodical entitled Proceedings of the Royal Society, which included the publication of these peer review reports. It is interesting to note that, to this day, the Proceedings of the Royal Society continues to publish in the physical (Proceedings A) and biological (Proceedings B) sciences.
Following this public exposure, driven mainly by the Royal Society, the practice of peer review was gradually adopted by other academic communities and later by independent journals. Yet it was not until after World War II, with increased subject specialization and technological advances, that peer review became recognized as an important standard in scientific publishing. While some journals were quick to adopt the practice, others, such as the Lancet, were resistant and did not implement peer review until 1976. Moreover, without established evidence-based guidelines and regulation, peer review quality was, and continues to be, prone to bias, inconsistency, and, consequently, ineffectiveness.
The good, the bad, and the ugly
In general, given present-day competition and the pressure to publish, peer review functions as a critical filter for manuscript publication. Although important, the process also has several limitations, with referee bias as a leading example. A recent Journal of the American Medical Association (JAMA) study comparing recommendations in single-blind versus double-blind review demonstrated that both author and institutional prestige influenced manuscript acceptance. A similar bias exists against studies with negative results, as journals are unwilling to publish manuscripts that fail to reproduce, or that refute, the work of others. This is especially problematic, as it leads to exaggerated effects in later meta-analyses.
Related to referee bias is the issue of review inconsistency. Many studies of peer review have reported low inter-rater reliability, highlighting the poor level of agreement between reviewers evaluating the same manuscript. In one study of review reproducibility, the authors determined that agreement between reviewers was only slightly greater than would be expected by chance alone, comparable to a coin toss. While diversity of opinion may be beneficial, considering that referees have unique expertise and specializations, this inconsistency presents a major barrier to measuring the quality of science. Moreover, poorly defined evaluation criteria involving the assessment of novelty, soundness, and significance (which are largely open to interpretation) may also decrease inter-rater reliability. The implementation of standardized assessment forms is just one of many suggestions to increase referee agreement and improve overall review consistency.
Beyond these limitations, peer review is also slow, costly, and without incentives. Increasingly complex data, in a competitive environment, require considerable time to review. This is time that could otherwise be spent securing grants and completing research, presenting a significant cost to reviewers. With little incentive, referees are therefore increasingly likely to decline participation in favor of spending their time more productively, and it is not surprising that there is a growing demand for qualified and committed reviewers. With these issues in mind, metrics of academic recognition, such as the R-index, have been proposed to encourage referee participation. Yet it is unclear whether such measures can be adopted by the academic community as a whole.
Innovation in peer review: interactive collaboration
Beyond standardized assessment forms and academic recognition, journals are looking to replace the traditional system of peer review with a more modern approach. Interactive collaboration during peer review aims to provide timely, consolidated, constructive, and fair feedback. One of the most successful examples of this practice comes from the online open-access journal eLife. Funded by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust, eLife was established in 2012 with the goal of revolutionizing scientific publishing. Uniquely, eLife assigns a reviewing editor to serve as one of the referees, who initiates an online discussion once the independent reviews have been received. In this process, the referees and their comments are made known to one another. Following discussion, the reviewers generate a unified decision letter with directive comments. This consolidated letter is meant to provide clear guidelines for manuscript improvement that, if followed, will ensure publication. The success of this system was evaluated in a recent F1000Research study, in which the authors found that having editors assist with peer review significantly shortened decision times without affecting rates of acceptance or rejection.

Other publishers, such as Elsevier, have also been experimenting with alternative forms of review. In one trial, three Elsevier journals, Cell, Neuron, and Molecular Cell, used the Mendeley reference manager for anonymous referee collaboration. While there were concerns about an increased workload for referees and editors, the overall responses were positive and the interactions were thought to improve overall review quality. These results are promising, and such approaches will hopefully gain momentum in the coming years.
Since its conception in the 18th century, peer review has remained largely unchanged. Now, with significant technological advances and the ease of modern communication, it is time to fix this broken system. Through interactive collaboration, peer review can become more consistent and less biased; through open discourse with colleagues, referees can be recognized for their efforts. Most importantly, these innovations can produce better revisions and yield better publications.