“Unfortunately, the ideas that underpin open science meet the most resistance within universities at the level of individual researchers. This is because cultural shifts in non-commercial environments take time to accomplish, and academia is notorious for its inertia and lack of agility….
Some journals have now adopted a model of open review in which the authors and the reviewers are made known to each other from the start. This is proposed to encourage a civil debate about the work and improve its quality, as well as to enhance reviewer performance. However, there is a risk that a relatively junior reviewer may feel too intimidated to openly criticise the work of a senior researcher in the field (and with whom they may want to work in future), and there are concerns that reviewers may not wish to review on those terms, making it difficult for editors to secure the necessary level of scrutiny for papers. Transparent peer review removes some of this concern. With this approach, anonymity can be preserved during the review process but, after the paper is accepted, the reviews and author responses are published along with the paper, for open scrutiny. The identity of the reviewer can remain concealed during the review process but, in a fully transparent review, their identity would be made public after paper acceptance….
The Journal of Human Nutrition and Dietetics has operated with double-blind peer review for many years. Recently, the journal has joined the Wiley Transparent Peer Review pilot scheme. This brings together the publisher with Publons and ScholarOne (part of Clarivate Web of Science) and enables the entire peer review process associated with a paper to be published alongside the accepted paper. Our papers now have an Open Research section, which provides a link to the digital object identifier and allows readers to see the peer review content. The peer review and author responses are in themselves citable materials. Our transparent peer review is a voluntary process for both authors and reviewers. Authors can opt to keep the peer review comments unpublished, and reviewers can remain anonymous but still have their comments published….
Despite our push for openness through the transparent peer review scheme, there seems to be a reluctance to participate….
I would like to finish this editorial with an exhortation to take part in the revolution. Let us make research in the area of nutrition and dietetics more open! The advantages are clear. Open science is more interesting science, more collaborative science and kinder science. Transparent peer review is not something to be feared and should instead prompt constructive dialogues between authors, editors and peer reviewers. If there are some dinosaurs out there who still want to use peer review as a platform for bullying their junior colleagues, they will be in for a shock as the growth of healthier research environments and communities leaves them behind. Transparent peer review is certainly not a panacea, but it is a great step toward putting right some of the historical problems in the peer review system.”
“Preprint reviews hold the potential to build trust in preprints and drive innovation in peer review. However, the variety of platforms available to contribute comments and reviews on preprints means that it can be difficult for readers to gain a clear picture of the process that led to the reviews linked to a particular preprint.
To address this, ASAPbio organized a working group to develop a set of features that could describe preprint review processes in a way that is simple to implement. We are proud to share Preprint Review Features (PReF) in an OSF Preprint. PReF consists of 8 key-value pairs, describing the key elements of preprint review. The white paper includes detailed definitions for each feature, an implementation guide, and an overview of how the characteristics of active preprint review projects map to PReF. We also developed a set of graphic icons (below) that we encourage the preprint review community to reuse alongside PReF.
While the Peer Review Terminology developed by the STM working group and the Open Peer Review taxonomy provided useful background for our discussions, they were designed with a focus on journal-based peer review, and do not capture all the possible elements that can be part of preprint review. We acknowledge that there are nuances and different views as to what constitutes “peer review,” “feedback,” and “commenting;” rather than create strict definitions, our aim was to parse out important aspects of the process involved in any form of review on preprints, and to do so in a format that could be used by platforms that host, coordinate, or aggregate such activities. Therefore, we are glad to see that PReF is already implemented on ReimagineReview and on review aggregators like Early Evidence Base and Sciety. We hope that our efforts in the development and adoption of PReF will promote better visibility and discoverability of preprint review….”
We are delighted to announce that COAR has been awarded a US$4 million grant from Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin. The four-year grant will go towards the COAR Notify Project, which is developing and implementing a standard protocol for connecting the content in the distributed repository network with peer reviews and assessments in external services, using linked data notifications.
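The linked data notifications mentioned here refer to the W3C Linked Data Notifications model, in which a sender POSTs a JSON-LD activity to a target's inbox. As a rough illustration only, the sketch below constructs such a payload in Python; all URLs, identifiers, and services in it are hypothetical, and the exact message shapes are defined by the COAR Notify protocol itself.

```python
import json

# A minimal, hypothetical COAR Notify-style payload announcing a review
# of a repository item. It follows the Linked Data Notifications model:
# a JSON-LD activity that a sender would POST to the target's LDN inbox.
# Every URL and identifier below is illustrative, not a real service.
notification = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        "https://purl.org/coar/notify",
    ],
    "type": ["Announce", "coar-notify:ReviewAction"],
    "actor": {"id": "https://review-service.example/", "type": "Service"},
    "object": {
        # The review being announced
        "id": "https://review-service.example/reviews/42",
        "type": "Document",
    },
    "context": {
        # The repository item the review is about
        "id": "https://repository.example/records/123",
        "type": "Document",
    },
    "target": {
        # The repository receiving the notification
        "id": "https://repository.example/",
        "inbox": "https://repository.example/inbox/",
    },
}

body = json.dumps(notification, indent=2)
print(body)
# In practice, the sender would POST `body` to the target's inbox URL
# with Content-Type: application/ld+json (e.g. via urllib.request).
```

The design point is that repositories and review services stay loosely coupled: each side only needs to expose an inbox and understand a small shared vocabulary of activity types.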
“Refocusing journal peer review on the study design phase drives more and greater downstream changes. Peer review that focuses on evaluating the significance of the research question, the methods and the analytical approach before work begins has the power to shape stronger, more rigorous and more creative research. Making an editorial decision while results are still unknown minimizes the potential impacts of confirmation bias and impact bias, taking science communication back to its roots, with an emphasis on quality, rigor, and pure intellectual curiosity. As Kiermer explains, “Preregistration and peer review of the study protocol with a journal is a way to tackle publication bias. As long as the protocol is followed, or any deviations explained, it’s a guarantee for the author that the results will be published, even if they don’t confirm their hypothesis.”
In combination, all of these factors contribute to a more complete and efficient scientific record, replete with studies exploring important hypotheses, performed to the very highest technical standards, and free from the distorting influence of impact-chasing, ego, and bias. A scientific record that is both demonstrably trustworthy, and widely trusted. And with that, there is no telling where science might go, or how quickly….”
The LIBER Citizen Science Working Group is embarking on the design of an open peer review process for the guidebook series being published on the topic of citizen science for research libraries. The LIBER working group in collaboration with COPIM is looking for input and feedback on the design of the open peer review workflow. COPIM is supporting the working group by contributing its experience and knowledge of open access book publishing, with respect to collaborative post-publication input, community peer review processes, and reuse. The first section of the guide Citizen Science Skilling for Library Staff, Researchers, and the Public has already been published with three more sections to follow.
Abstract: Previous studies of the use of peer review in the allocation of competitive funding have concentrated on questions of efficiency and how to make the ‘best’ decision, by ensuring that successful applicants are also the most productive or visible in the long term. This paper examines the function of peer review as a participatory research governance tool, focusing on the role feedback plays in the development of ECR applicants. Using a combination of surveys, interviews and linguistic coding of reviewer reports, this study explores how reviewer reports provided to unsuccessful applicants, as artefacts of the peer-review decision-making process, can serve as a method of feedback. Specifically, it examines which components of this feedback underpinned applicants’ decisions to re-submit their grant applications following first failure, change their research topics, or withdraw from academia entirely. Peer review feedback, we argue, sends signals to applicants that encourage them to persist (continue) or switch (not continue) even when the initial application has failed. The results lead to the identification of standards of feedback for funding agencies and peer reviewers to promote when providing reviewer feedback to applicants as part of their peer review process. The results also highlight a function of peer review overlooked by current research: one not concentrated solely on producing an outcome, but one that can be used effectively to support the development of individuals and their future research plans.
Abstract: The moral panic over the impact of so-called predatory publishers continues unabated. It is important, however, to resist the urge to simply join in this crusade without pausing to examine the assumptions upon which such concerns are based. It is often assumed that established journals are almost sacrosanct, and that their quality, secured by peer review, is established. It is also routinely presumed that such journals are immune to the lure of easy money in return for publication. Rather than looking at the deficits that may be apparent in the practices and products of predatory publishers, this commentary invites you to explore the weaknesses that have been exposed in traditional academic journals but are seldom discussed in the context of predatory publishing. The inherent message for health and medical services staff, researchers, academics, and students is, as always, to critically evaluate all sources of information, whatever their provenance.
“Most will also agree that editorial work should also be done in the service of up-and-coming OA journals rather than to prop up the reputations of those that remain paywalled. But withholding peer review from non-open journals is more controversial. Even OA campaigners sometimes raise objections. These I now propose to rebut….”
“Following our trial last year, ASAPbio is running further preprint crowd review activities in 2022. Our goal is to provide an engaging environment for researchers to participate in providing feedback on preprints and support public reviews for preprints.
In 2022, we will be coordinating public reviews for different disciplines. We are pleased to say that we are collaborating with SciELO Preprints to also coordinate the review of preprints in Portuguese. This year we will cover the following disciplines:
– Cell biology preprints from bioRxiv (English)
– Biochemistry preprints from bioRxiv (English)
– Infectious diseases preprints from SciELO Preprints (Portuguese)
This form is for reviewers who will participate in the review of preprints from bioRxiv; to sign up for the review of SciELO Preprints in Portuguese, please complete this form: https://docs.google.com/forms/d/e/1FAIpQLSd0wrAa7FLrw8I1j5p9mysWrstehPqDqsn9UPjUbqrwRnQU-A/viewform
We invite researchers in the disciplines above to join our crowd preprint review activities, and particularly encourage early career researchers to participate. The activities will run for three months, from mid-May to August 2022….”
“In a major step toward promoting preprint peer review as a means of increasing transparency and efficiency in scientific publishing, Review Commons is updating its policy: as of 1 June 2022, peer reviews and the authors’ response will be posted by Review Commons to bioRxiv or medRxiv when authors transfer their refereed preprint to the first affiliate journal….”
Peer review is central to the evaluation of research, but surprisingly little is known about its inner workings. This column presents the results of a survey of over 1,400 economists asking about their experiences with the system. The findings suggest that there are opportunities for improvement in the allocation and production of referee reports, as well as in the length of the process. The authors consider an assortment of proposals to address these issues, some of which command broad support from their respondents.
Over the last month, we have added two new groups, GigaScience and GigaByte, from the journals of the same name, increasing the number of specialist teams displaying their evaluations on Sciety.
GigaScience and GigaByte are part of GigaScience Press. With a decade-long history of open-science publishing, they aim to revolutionise publishing by promoting reproducibility of analyses and data dissemination, organisation, understanding, and use. As open-access and open-data journals, they publish all research objects (data, software, and workflows) from ‘big data’ studies across the life and biomedical sciences. These resources are managed using the FAIR Principles for scientific data management and stewardship, which state that research data should be Findable, Accessible, Interoperable and Reusable. They also follow the practices of transparency and openness in science publishing, and as such, they embrace open peer review (which is mandated for both journals) and preprints (which are strongly encouraged in GigaScience and mandated for GigaByte). The opportunities for combining both are covered by GigaScience in its video on open science and preprint peer review for Peer Review Week.