The potential butterfly effect of preregistered peer-reviewed research – The Official PLOS Blog

“Refocusing journal peer review on the study design phase exerts more and greater downstream changes. Peer review that focuses on evaluating the significance of the research question, the methods, and the analytical approach before work begins has the power to shape stronger, more rigorous and more creative research. Making an editorial decision while results are still unknown minimizes the potential impacts of confirmation bias and impact bias, taking science communication back to its roots, with an emphasis on quality, rigor, and pure intellectual curiosity. As Kiermer explains, “Preregistration and peer review of the study protocol with a journal is a way to tackle publication bias. As long as the protocol is followed, or any deviations explained, it’s a guarantee for the author that the results will be published, even if they don’t confirm their hypothesis.”

In combination, all of these factors contribute to a more complete and efficient scientific record, replete with studies exploring important hypotheses, performed to the very highest technical standards, and free from the distorting influence of impact-chasing, ego, and bias. A scientific record that is both demonstrably trustworthy and widely trusted. And with that, there is no telling where science might go, or how quickly….”

Designing an Open Peer Review Process for Open Access Guides | Community-led Open Publication Infrastructures for Monographs (COPIM)

by Simon Worthington

The LIBER Citizen Science Working Group is embarking on the design of an open peer review process for the guidebook series being published on the topic of citizen science for research libraries. The LIBER working group, in collaboration with COPIM, is looking for input and feedback on the design of the open peer review workflow. COPIM is supporting the working group by contributing its experience and knowledge of open access book publishing, with respect to collaborative post-publication input, community peer review processes, and reuse. The first section of the guide, Citizen Science Skilling for Library Staff, Researchers, and the Public, has already been published, with three more sections to follow.


SocArXiv Papers | Targeted, actionable and fair: reviewer reports as feedback and its effect on ECR career choices

Abstract:  Previous studies of the use of peer review in the allocation of competitive funding have concentrated on questions of efficiency and how to make the ‘best’ decision, by ensuring that successful applicants are also the most productive or visible in the long term. This paper examines the function of peer review as a participatory research governance tool, focusing on the role feedback plays in assisting the development of ECR applicants. Using a combination of surveys, interviews, and linguistic-based coding of reviewer reports, this study explores how reviewer reports provided to unsuccessful applicants, as an artefact of the peer-review decision-making process, can be considered a method of feedback. Specifically, it examines which components of this feedback underpinned their decisions to re-submit their grant applications following first failure, change their research topics, or withdraw from academia entirely. Peer review feedback, we argue, sends signals to applicants that encourage them to persist (continue) or switch (not continue) even when the initial application has failed. The results lead to the identification of standards of feedback for funding agencies and peer reviewers to promote when providing reviewer feedback to applicants as part of their peer review process. The results also highlight a function of peer review overlooked by current research: one that is not concentrated solely on producing an outcome, but that can be used effectively to support the development of individuals and their future research plans.


SurveyMonkey Powered Online Survey

“Thank you in advance for taking the time to respond to this survey about eLife. It should take no more than 10 minutes to complete. 


We seek to transform research communication and we’d love to hear your thoughts related to initiatives we’ve got underway.

All questions are optional. Your feedback is anonymous and it will help us better understand the expectations of the community and drive change and innovation in scientific and medical publishing….”

Keep calm and carry on: moral panic, predatory publishers, peer review, and the emperor’s new clothes | Journal of the Medical Library Association

Abstract:  The moral panic over the impact of so-called predatory publishers continues unabated. It is important, however, to resist the urge to simply join in this crusade without pausing to examine the assumptions upon which such concerns are based. It is often assumed that established journals are almost sacrosanct, and that their quality, secured by peer review, is established. It is also routinely presumed that such journals are immune to the lure of easy money in return for publication. Rather than looking at the deficits that may be apparent in the practices and products of predatory publishers, this commentary invites you to explore the weaknesses that have been exposed in traditional academic journals but are seldom discussed in the context of predatory publishing. The inherent message for health and medical services staff, researchers, academics, and students is, as always, to critically evaluate all sources of information, whatever their provenance.


I Don’t Peer-Review for Non-Open Journals, and Neither Should You

“Most will agree that editorial work should also be done in the service of up-and-coming OA journals rather than to prop up the reputations of those that remain paywalled. But withholding peer review from non-open journals is more controversial. Even OA campaigners sometimes raise objections. These I now propose to rebut….”

ASAPbio Crowd preprint review 2022 sign-up form

“Following our trial last year, ASAPbio is running further preprint crowd review activities in 2022. Our goal is to provide an engaging environment in which researchers can participate in giving feedback on preprints, and to support public reviews for preprints.

In 2022, we will be coordinating public reviews for different disciplines. We are pleased to say that we are collaborating with SciELO Preprints to also coordinate the review of preprints in Portuguese. This year we will cover the following disciplines:

– Cell biology preprints from bioRxiv (English)
– Biochemistry preprints from bioRxiv (English)
– Infectious diseases preprints from SciELO Preprints (Portuguese)

This form is for reviewers who will participate in the review of preprints from bioRxiv; to sign up for the review of SciELO Preprints in Portuguese, please complete this form:

We invite researchers in the disciplines above to join our crowd preprint review activities, and particularly encourage early career researchers to participate. The activities will run for three months, from mid-May to August 2022….”

New policy: Review Commons makes preprint review fully transparent – ASAPbio

“In a major step toward promoting preprint peer review as a means of increasing transparency and efficiency in scientific publishing, Review Commons is updating its policy: as of 1 June 2022, peer reviews and the authors’ response will be posted by Review Commons to bioRxiv or medRxiv when authors transfer their refereed preprint to the first affiliate journal….”

Economists want to see changes to their peer review system. Let’s do something about it. | VOX, CEPR Policy Portal

Peer review is central to the evaluation of research, but surprisingly little is known about its inner workings. This column presents the results of a survey of over 1,400 economists asking about their experiences with the system. The findings suggest that there are opportunities for improvement in the allocation and production of referee reports, as well as in the length of the process. The authors consider an assortment of proposals to address these issues, some of which command broad support from the respondents.

GigaScience and GigaByte Groups Join Sciety

Over the last month, we have added two new groups, GigaScience and GigaByte, from the journals of the same name, increasing the number of specialist teams displaying their evaluations on Sciety.

GigaScience and GigaByte are part of GigaScience Press. With a decade-long history of open-science publishing, they aim to revolutionise publishing by promoting reproducibility of analyses and data dissemination, organisation, understanding, and use. As open-access and open-data journals, they publish all research objects (data, software, and workflows) from ‘big data’ studies across the life and biomedical sciences. These resources are managed using the FAIR Principles for scientific data management and stewardship, which state that research data should be Findable, Accessible, Interoperable and Reusable. They also follow the practices of transparency and openness in science publishing, and as such, they embrace open peer review (which is mandated for both journals) and preprints (which are strongly encouraged in GigaScience and mandated for GigaByte). The opportunities for combining both are covered by GigaScience in its video on open science and preprint peer review for Peer Review Week.


Frontiers | Key Factors for Improving Rigor and Reproducibility: Guidelines, Peer Reviews, and Journal Technical Reviews | Cardiovascular Medicine

Abstract:  To respond to the NIH’s policy for rigor and reproducibility in preclinical research, many journals have implemented guidelines and checklists to guide authors in improving the rigor and reproducibility of their research. Transparency in developing detailed prospective experimental designs and providing raw data are essential premises of rigor and reproducibility. Standard peer reviews and journal-specific technical and statistical reviews are critical factors for enhancing rigor and reproducibility. This brief review also shares some experience from Arteriosclerosis, Thrombosis, and Vascular Biology, an American Heart Association journal, that has implemented several mechanisms to enhance rigor and reproducibility for preclinical research….

Cite-seeing and Reviewing: A Study on Citation Bias in Peer Review

Citations play an important role in researchers’ careers as a key factor in the evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers in an attempt to obtain a more positive evaluation for their submission. In this work, we investigate whether such a citation bias actually exists: does the citation of a reviewer’s own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we execute an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns regarding model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of the effect size, by citing a reviewer’s work, a submission has a non-trivial chance of getting a higher score from the reviewer: the expected increase in the score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase of a score by a single reviewer improves the position of a submission by 11% on average.

To ArXiv or not to ArXiv: A Study Quantifying Pros and Cons of Posting Preprints Online

Double-blind conferences have engaged in debates over whether to allow authors to post their papers online on arXiv or elsewhere during the review process. Independently, some authors of research papers face the dilemma of whether to put their papers on arXiv, given its pros and cons. We conducted a study to substantiate this debate and dilemma via quantitative measurements. Specifically, we surveyed reviewers in two top-tier double-blind computer science conferences — ICML 2021 (5361 submissions and 4699 reviewers) and EC 2021 (498 submissions and 190 reviewers). Our two main findings are as follows. First, more than a third of the reviewers self-report searching online for a paper they are assigned to review. Second, outside the review process, we find that preprints from better-ranked affiliations see weakly higher visibility, with a correlation of 0.06 in ICML and 0.05 in EC. In particular, papers associated with the top-10-ranked affiliations had a visibility of approximately 11% in ICML and 22% in EC, whereas the remaining papers had a visibility of 7% and 18% respectively.