Amending the literature through version control | Biology Letters

Abstract:  The ideal of self-correction in science is not well served by the current culture and system surrounding amendments to published literature. Here we describe our view of how amendments could and should work by drawing on the idea of an author-led version control system. We report a survey (n = 132) that highlights academics’ dissatisfaction with the status quo and their support for such an alternative approach. Authors would include a link in their published manuscripts to an updatable website (e.g. a GitHub repository) that could be disseminated in the event of any amendment. Such a system is already in place for computer code and requires nothing but buy-in from the scientific community—a community that is already evolving towards open science frameworks. This would remove a number of frictions that discourage amendments, leading to an improved scientific literature and a healthier academic climate.
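The mechanics the authors point to already exist in plain git: an amendment is a commit, and each disseminated version is a tag in the linked repository. A minimal sketch, assuming git is installed; the repository layout, file name, and tag/commit messages here are hypothetical illustrations, not commands prescribed by the paper:

```python
# Sketch of an author-led amendment workflow using plain git, in the
# spirit of the proposal above. File name and messages are invented.
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run git with a throwaway identity so the sketch is self-contained."""
    cmd = ["git", "-c", "user.name=Author", "-c", "user.email=author@example.org"]
    result = subprocess.run(cmd + list(args), cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)

# The version of record is just the first tagged commit.
with open(os.path.join(repo, "manuscript.md"), "w") as f:
    f.write("Results: survey of n = 132 academics.\n")
git("add", "manuscript.md", cwd=repo)
git("commit", "-m", "Version of record", cwd=repo)
git("tag", "-a", "v1.0", "-m", "As published", cwd=repo)

# An amendment is an ordinary commit published under a new tag; readers
# who follow the link in the paper see both versions and the full diff.
with open(os.path.join(repo, "manuscript.md"), "w") as f:
    f.write("Results: survey of n = 132 academics (Table 2 corrected).\n")
git("commit", "-am", "Corrigendum: correct Table 2", cwd=repo)
git("tag", "-a", "v1.1", "-m", "Corrigendum", cwd=repo)

print(git("tag", cwd=repo))  # both v1.0 and v1.1 remain visible
```

On a hosting service such as GitHub, the same tags surface as a release history, so the link printed in the published article never changes while the amendment trail stays public.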

Ouvrir la Science – Open Science library

“The guide explains the rights retention strategy, its benefits for the researcher and the operational details of its application. It also provides an FAQ that addresses the main questions about choosing licenses, the options available at the various stages of publication, and how to manage relationships with publishers….”

Checklist for Artificial Intelligence in Medical Imaging Reporting Adherence in Peer-Reviewed and Preprint Manuscripts With the Highest Altmetric Attention Scores: A Meta-Research Study – Umaseh Sivanesan, Kay Wu, Matthew D. F. McInnes, Kiret Dhindsa, Fateme Salehi, Christian B. van der Pol, 2022

Abstract:  Purpose: To establish reporting adherence to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) in diagnostic accuracy AI studies with the highest Altmetric Attention Scores (AAS), and to compare completeness of reporting between peer-reviewed manuscripts and preprints. Methods: MEDLINE, EMBASE, arXiv, bioRxiv, and medRxiv were retrospectively searched for 100 diagnostic accuracy medical imaging AI studies in peer-reviewed journals and preprint platforms with the highest AAS from the release of CLAIM to June 24, 2021. Studies were evaluated for adherence to the 42-item CLAIM checklist, with comparison between peer-reviewed manuscripts and preprints. The impact of additional factors was explored, including body region, models for COVID-19 diagnosis, and journal impact factor. Results: Median CLAIM adherence was 48% (20/42). The median CLAIM score of manuscripts published in peer-reviewed journals was higher than that of preprints: 57% (24/42) vs 40% (16/42), P < .0001. Chest radiology was the body region with the least complete reporting (P = .0352), and manuscripts on COVID-19 were less complete than others (43% vs 54%, P = .0002). For studies published in peer-reviewed journals with an impact factor, the CLAIM score correlated with impact factor, rho = 0.43, P = .0040. Completeness of reporting based on CLAIM score had a positive correlation with a study’s AAS, rho = 0.68, P < .0001. Conclusions: Overall reporting adherence to CLAIM is low in imaging diagnostic accuracy AI studies with the highest AAS, with preprints reporting fewer study details than peer-reviewed manuscripts. Improved CLAIM adherence could promote adoption of AI into clinical practice and facilitate investigators building upon prior work.

Preliminary investigation: Supporting open infrastructure for preprints | Invest in Open Infrastructure, 3 October 2022

“…We are concerned that the preprints ecosystem is not yet financially sustainable, with services dependent on substantial voluntary and in-kind contributions that aren’t fully accounted for in financial plans and are not reliable for long-term strategic planning. The majority of preprints are not shared using open infrastructure. Overall, we find the potential of preprints in open scholarly communication is not yet fully realized and is at risk from competition with for-profit, commercial, and other proprietary solutions. While developments in the existing journal publishing ecosystem make it possible to more rapidly share work, we risk losing the opportunity for this activity to be done on community-governed infrastructure built on open source tools that is transparent and accountable to its stakeholders. To address these challenges and concerns, we recommend work to:

Raise awareness of the potential benefits and drawbacks of using existing open services for preprints as shared infrastructure.
Support research and development (and testing) of business models that could work at a larger scale.
Advocate for increased investment in projects and initiatives that support preprints to enable more inclusive and equitable participation in science and scholarship.”

The Case For Supporting Open Infrastructure for Preprints: A Preliminary Investigation | Naomi Penfold | Invest in Open Infrastructure, 3 October 2022

Summary: “Preprints are being used across multiple scholarly disciplines – at varying levels of adoption. In this research, we asked: what is the current situation with preprints and open infrastructure for them, and how could IOI pursue work to further investment and sustain activities in this space?”

Open Science

“For a growing number of scientists, though, the process looks like this:

The data that the scientist collects is stored in an open access repository like figshare or Zenodo, possibly as soon as it’s collected, and given its own Digital Object Identifier (DOI). Or the data was already published and is stored in Dryad.
The scientist creates a new repository on GitHub to hold her work.
As she does her analysis, she pushes changes to her scripts (and possibly some output files) to that repository. She also uses the repository for her paper; that repository is then the hub for collaboration with her colleagues.
When she’s happy with the state of her paper, she posts a version to arXiv or some other preprint server to invite feedback from peers.
Based on that feedback, she may post several revisions before finally submitting her paper to a journal.
The published paper includes links to her preprint and to her code and data repositories, which makes it much easier for other scientists to use her work as a starting point for their own research.

This open model accelerates discovery: the more open work is, the more widely it is cited and re-used. However, people who want to work this way need to make some decisions about what exactly “open” means and how to do it. You can find more on the different aspects of Open Science in this book.

This is one of the (many) reasons we teach version control. …”

Reliability of citations of medRxiv preprints in articles published on COVID-19 in the world leading medical journals | PLOS ONE

Abstract:  Introduction

Preprints have been widely cited during the COVID-19 pandemic, even in the major medical journals. However, since the subsequent publication of a preprint is not always mentioned in preprint repositories, some may be inappropriately cited or quoted. Our objectives were to assess the reliability of preprint citations in articles on COVID-19, to assess the rate of publication of preprints cited in these articles and to compare, if relevant, the content of the preprints with their published version.

Methods

Articles on COVID-19 published in 2020 in the BMJ, The Lancet, JAMA and the NEJM were manually screened to identify all articles citing at least one preprint from medRxiv. We searched PubMed, Google and Google Scholar to assess whether, and when, each preprint had been published in a peer-reviewed journal. Published articles were screened to assess whether the title, data or conclusions were identical to the preprint version.

Results

Among the 205 research articles on COVID-19 published by the four major medical journals in 2020, 60 (29.3%) cited at least one medRxiv preprint. Among the 182 preprints cited, 124 were published in a peer-reviewed journal: 51 (41.1%) before the citing article was published online and 73 (58.9%) after. For nearly half of these, the title, data or conclusions differed between the cited preprint and the published version. medRxiv did not mention the publication for 53 (42.7%) of the published preprints.

Conclusions

More than a quarter of preprint citations were inappropriate, since the preprints had in fact already been published by the time the citing article appeared, often with different content. Authors and editors should check the accuracy of citations and quotations of preprints before publishing manuscripts that cite them.
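The percentages in the Results are easy to misread because they use two different denominators. A quick arithmetic check making those denominators explicit:

```python
# Recomputing the percentages reported above: 29.3% is out of the 205
# citing-journal articles, while 41.1%, 58.9% and 42.7% are fractions
# of the 124 preprints that reached journal publication, not of all
# 182 cited preprints.
def pct(part: int, whole: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(60, 205))  # articles citing at least one preprint -> 29.3
print(pct(51, 124))  # published before the citing article   -> 41.1
print(pct(73, 124))  # published after                       -> 58.9
print(pct(53, 124))  # publication not flagged on medRxiv    -> 42.7
```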

Open Research in the Humanities | Unlocking Research

“The Working Group on Open Research in the Humanities was chaired by Prof. Emma Gilby (MMLL) with Dr. Rachel Leow (History), Dr. Amelie Roper (UL), Dr. Matthias Ammon (MMLL and OSC), Dr. Sam Moore (UL), Prof. Alexander Bird (Philosophy), and Prof. Ingo Gildenhard (Classics). We met for four meetings in July, September, October and December 2021, with a view to steering and developing services in support of Open Research in the Humanities. We aimed notably to offer input on how to define Open Research in the Humanities, how to communicate effectively with colleagues in the Arts and Humanities (A&H), and how to reinforce the prestige around Open Research. We hope to add our perspective to the debate on Open Science by providing a view ‘from the ground’ and from the perspective of a select group of humanities researchers. These disciplinary considerations inevitably overlap, in some measure, with the social sciences and indeed some aspects of STEM, and we hope that they will therefore have a broad audience and applicability.

Academics in A&H are, in the main, deeply committed to sharing their research. They consider their main professional contribution to be the instigation and furthering of diverse cultural conversations. They also consider open public access to their work to be a valuable goal, alongside other equally prominent ambitions: aiming at research quality and diversity, and offering support to early career scholars in a challenging and often precarious employment landscape.  

Although A&H cover a diverse range of disciplines, it is possible to discern certain common elements which guide their profile and impact. These common elements also guide the discussion that follows….”

OPEN SCIENCE INITIATIVES: THE POSTPRINT PLEDGE

“• Is it legal to post “postprints” online? Depends on each publisher’s policies.
• We compiled a list of 60 Applied Linguistics journals (from Web of Science) and examined their copyright policies via Sherpa Romeo (https://v2.sherpa.ac.uk/romeo/).
• Publishers that permit postprints: Cambridge, Elsevier, John Benjamins, SAGE, Emerald, De Gruyter, Akadémiai Kiadó.
• Publishers that permit postprints on personal websites only (embargo on repositories): Springer, Oxford University Press, Taylor & Francis.
• Publishers that do NOT permit postprints before an embargo period: Wiley (usually 24-month embargo)…

What this Pledge is NOT asking you to do:
• Does not ask you to break any laws. Sharing postprints is within your rights (see table on the next slide).
• Does not ask you to share “preprints” but to share “postprints”.
• Does not limit you to publishing in these journals.
• Does not require you to do anything else (such as boycotting certain publishers or not reviewing for them)….”

A Synthesis of the Formats for Correcting Erroneous and Fraudulent Academic Literature, and Associated Challenges | SpringerLink

Abstract:  Academic publishing is undergoing a highly transformative process, and many established rules and value systems that are in place, such as traditional peer review (TPR) and preprints, are facing unprecedented challenges, including as a result of post-publication peer review. The integrity and validity of the academic literature continue to rely naively on blind trust, while TPR and preprints continue to fail to effectively screen out errors, fraud, and misconduct. Imperfect TPR invariably results in imperfect papers that have passed through varying levels of rigor of screening and validation. If errors or misconduct were not detected during TPR’s editorial screening, but are detected at the post-publication stage, an opportunity is created to correct the academic record. Currently, the most common forms of correcting the academic literature are errata, corrigenda, expressions of concern, and retractions or withdrawals. Some additional measures to correct the literature have emerged, including manuscript versioning, amendments, partial retractions and retract and replace. Preprints can also be corrected if their version is updated. This paper discusses the risks, benefits and limitations of these forms of correcting the academic literature.


A Light in the Dark: Open Access to Medical Literature and the COVID-19 Pandemic

Introduction. This study was designed to evaluate the accessibility of peer-reviewed literature regarding COVID-19 and the ten diseases with the highest death toll worldwide.
Method. We conducted extensive searches of studies concerning COVID-19 and other diseases using the Web of Science, and the Google and Google Scholar search engines.
Analysis. Open access rates were obtained from the Web of Science database, taking into account different types of publications and research areas. Quantitative analyses based on random sampling were used to estimate the potential increase in open access rates achievable with open archiving of post-prints.
Results. The open access rate of COVID-19 papers (89.5%) far exceeded that of the ten most deadly human diseases (48.8% on average). We estimated that most of the gap (70%) could be bridged by making post-print manuscripts available online.
Conclusions. The pandemic represents a real breakthrough in scientific publishing towards the goal of health information for all, demonstrating that much greater access to medical literature is possible. The green road may be the best way to bring the open access rate of peer-reviewed literature on other major diseases closer to that of COVID-19. However, it needs to be implemented more effectively, combining bottom-up and top-down actions and making the open science culture more widespread.
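Read one way (an assumption on our part: taking “the gap” to mean the difference in percentage points, which the abstract does not state explicitly), the estimate implies that green archiving could lift the open access rate for the other diseases to roughly 77%:

```python
# Hedged arithmetic on the estimate above, assuming "the gap" is the
# difference in percentage points between COVID-19 and the ten diseases.
covid_oa = 89.5   # open access rate of COVID-19 papers (%)
other_oa = 48.8   # average rate for the ten most deadly diseases (%)

gap = covid_oa - other_oa          # size of the gap in percentage points
bridged = other_oa + 0.70 * gap    # rate if 70% of the gap were closed

print(round(gap, 1))      # -> 40.7
print(round(bridged, 1))  # -> 77.3
```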


eLife and PREreview extend partnership to boost community engagement in open peer review | For the press | eLife

eLife and PREreview are pleased to announce their continued partnership to engage more diverse communities of researchers in peer review.

eLife and PREreview formally teamed up last year following their collaborations on a number of initiatives. Now, as eLife moves towards a new ‘publish, review, curate’ model that puts preprints first, the organisations will increase their efforts to involve more early-career researchers, and researchers from communities that are traditionally marginalised within the peer-review process, in the public review of preprints. Their work will involve further integrating PREreview into Sciety – an application developed by a team within eLife to bring open evaluation and curation together in one place – and opening up new opportunities for more researchers to participate in public review.