An examination of highly visible COVID-19 research articles reveals that 55% could be considered at risk of bias.
Only 11% of the evaluated early studies on COVID-19 adhered to good standards of reporting such as PRISMA or CONSORT.
There was no correlation between quality of reporting and either the journal Impact Factor or the article Altmetric Attention Score in early studies on COVID-19.
Most highly visible early articles on COVID-19 were published in The Lancet and the Journal of the American Medical Association.
“In the virtual 15th Conference of the European Association of Science Editors (EASE), a debate was held on the motion: Preprints are going to replace journals. I was asked to oppose the motion, and this article is based on my arguments….
Regarding being disruptive, as Rob Johnson and Andrea Chiarelli showed, preprint servers are not threatening journals’ revenue. Although big publishers have been collaborating with preprint servers (e.g., Springer Nature with Research Square, and PLOS with Cold Spring Harbor Laboratory) and acquiring them (e.g., Elsevier acquiring SSRN), and learned societies are building preprint communities, the overall investment in preprints remains limited.
Are preprints destructive to publishers’ business? No way! Preprint servers’ current not-for-profit business model is not sustainable. Although 37 preprint servers were established between 2016 and September 2019, one preprint leader in the biological sciences, PeerJ Preprints, stopped posting preprints around the time COVID-19 hit the world, after a reality check on the costs required to do so. Since then, OSF Preprints has begun charging for previously free preprint platform services, leading to the shuttering of some preprint servers. Concerns over preprints as a source of misuse and misinterpretation of scientific information were raised before and during the pandemic. Due to significant health risks, some manuscripts are being identified as ‘better not disseminated as preprints’. Acceptance of preprints, especially by academic recruitment and promotion committees, is still far from invading the space that has long been occupied by journal articles….”
“Outside of eLife and, to an extent, PLoS, no one of scale and weight in the commercial publishing sector has really climbed aboard the Open Science movement with a recognition of the sort of data and communication control that Open Science will require.
So what is that requirement? In two words – Replicability and Retraction. …”
“The COVID-19 pandemic has reshaped the research landscape. The online Jisc and CNI leaders conference will focus on the pivotal role the library will play in enabling universities to equip themselves to respond. We’ll also explore the potential disruption it could cause.
Through the conference theme – at the frontier of research practice: the university library as a catalyst – we’ll cover topics such as: monographs and long-form scholarly works, transitioning to open; open and faster scholarly communication in a post-COVID-19 world; how research collections are evolving; and researcher environments of tomorrow….”
“The pandemic presented an urgency for effective science to inform decision-making and has shown just how fast and open scholarly communication can be. Researchers shared their preliminary results on preprint servers and institutional repositories at unprecedented rates, inspiring various preprint peer-review initiatives. Journal publishers processed manuscripts from submission to publication in record time. And much of what we know about COVID-19 has been learned through data sharing and cooperation at the international level, with the use of critical data-sharing infrastructure.
While the research community has responded with an extraordinary level of openness, speed, and collaboration, it has also brought to the fore some of the key challenges we still face in the transition to open research – and the opportunities they represent….”
“And so, in early April, we decided to start Fast Grants, which we hoped could be one of the faster sources of emergency science funding during the pandemic. We had modest hopes given our inexperience and lack of preparation, but we felt that the opportunity to provide even small accelerations would be worthwhile given the scale of the disaster.
The original vision was simple: an application form that would take scientists less than 30 minutes to complete and that would deliver funding decisions within 48 hours, with money following a few days later….
The first round of grants was given out within 48 hours. Later rounds of grants, which often required additional scrutiny of earlier results, were given out within two weeks. These timelines were much shorter than those of the alternative funding sources available to most scientists. Grant recipients were required to do little more than publish open access preprints and provide monthly one-paragraph updates….”
Pandemic events often trigger a surge of clinical trial activity aimed at rapidly evaluating therapeutic or preventative interventions. Ensuring rapid public access to the complete and unbiased trial record is particularly critical for pandemic research given the urgent associated public health needs. The World Health Organization (WHO) established standards requiring posting of results to a registry within 12 months of trial completion and publication in a peer-reviewed journal within 24 months of completion, though compliance with these requirements among pandemic trials is unknown.
This cross-sectional analysis characterizes the availability of results in trial registries and publications among registered trials performed during the 2009 H1N1 influenza, 2014 Ebola, and 2016 Zika pandemics. We searched trial registries to identify clinical trials testing interventions related to these pandemics, and determined the time elapsed between trial completion and availability of results in the registry. We also performed a comprehensive search of MEDLINE via PubMed, Google Scholar, and EMBASE to identify corresponding peer-reviewed publications. The primary outcome was compliance with either of the WHO’s established standards for sharing clinical trial results. Secondary outcomes included compliance with both standards and the time elapsed between trial completion and public availability of results.
Three hundred thirty-three trials met eligibility criteria, including 261 H1N1 influenza trials, 60 Ebola trials, and 12 Zika trials. Of these, 139 (42%) either had results available in the trial registry within 12 months of study completion or had results available in a peer-reviewed publication within 24 months. Five trials (2%) met both standards. No results were available in either a registry or publication for 59 trials (18%). Among trials with registered results, a median of 42 months (IQR 16–76 months) elapsed between trial completion and results posting. For published trials, the median elapsed time between completion and publication was 21 months (IQR 9–34 months). Results were available within 24 months of study completion in either the trial registry or a peer-reviewed publication for 166 trials (50%).
Very few trials performed during prior pandemic events met established standards for the timely public dissemination of trial results.
“Flowcite – a German-based service providing an all-in-one platform for academic research, writing, editing, and publishing – partners with Brooklyn-based scite.ai to offer quick source evaluation for its users to ensure quality, improve the relevance of results, and thus save time on research….”
eLife is excited to announce a new approach to peer review and publishing in medicine, including public health and health policy.
One of the most notable impacts of the COVID-19 pandemic has been the desire to share important results and discoveries quickly, widely and openly, leading to rapid growth of the preprint server medRxiv. Despite the benefits of rapid, author-driven publication in accelerating research and democratising access to results, the growing number of clinical preprints means that individuals and institutions may act quickly on new information before it is adequately scrutinised.
To address this challenge, eLife is bringing its system of editorial oversight by practicing clinicians and clinician-investigators, and rigorous, consultative peer review, to preprints. The journal’s goal is to produce ‘refereed preprints’ on medRxiv that provide readers and potential users with a detailed assessment of the research, comments on its potential impact, and perspectives on its use. By providing this rich and rapid evaluation of new results, eLife hopes that peer-reviewed preprints, rather than journal impact factor, will become a reliable indicator of quality in medical research.
“GigaByte (ISSN: 2709-4715) aims to promote the most rapid exchange of scientific information in a formal peer-reviewed publishing platform. Modern research is data-driven, iterative, and aims to be FAIR: Findable, Accessible, Interoperable and Reusable. It is also fast moving, with available data and computational tools changing constantly and swiftly evolving fields continuously being tested, updated and modified by the community. Given that, GigaByte is focused on publishing short, focused, data-driven articles using a publishing platform that allows nearly immediate online publication on acceptance as well as the ability to update published articles. This drastically reduces writing and reviewing time. With that, GigaByte provides scientists a venue to rapidly and easily share and build upon each other’s research outputs.
Currently we publish two types of articles: Data Release articles highlight and contextualize exceptional, openly available datasets, while Technical Release articles present an open-source software tool or an experimental or computational method for the analysis or handling of research data.
GigaByte is an open access and open science journal. As with our sister journal GigaScience, we publish ALL reusable and shareable research objects, such as data, software tools and workflows, from data-driven research. …”
Abstract: The impact of COVID-19 has underlined the need for reliable information to guide clinical practice and policy. This urgency has to be balanced against disruption to journal handling capacity and the continued need to ensure scientific rigour. We examined the reporting quality of highly disseminated COVID-19 research papers using a bibliometric analysis examining reporting quality and risk of bias (RoB) amongst the 250 top-scoring Altmetric Attention Score (AAS) COVID-19 research papers between January and April 2020. Method-specific RoB tools were used to assess quality. After exclusions, 84 studies from 44 journals were included. Forty-three (51%) were case series/studies, and only one was a randomized controlled trial. Most authors were from institutions based in China (n = 44, 52%). The median AAS and impact factor were 2,015 (interquartile range [IQR] 1,105–4,051.5) and 12.8 (IQR 5–44.2), respectively. Nine studies (11%) utilized a formal reporting framework, 62 (74%) included a funding statement, and 41 (49%) were at high RoB. This review of the most widely disseminated COVID-19 studies highlights a preponderance of low-quality case series, with few research papers adhering to good standards of reporting. It emphasizes the need for cautious interpretation of research and the increasingly vital responsibility that journals have in ensuring high-quality publications.
Abstract: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic will be remembered as one of the defining events of the 21st century. The rapid global outbreak has had significant impacts on human society and is already responsible for millions of deaths. Understanding and tackling the impact of the virus has required a worldwide mobilisation and coordination of scientific research. The COVID-19 Data Portal (https://www.covid19dataportal.org/) was first released, as part of the European COVID-19 Data Platform, on 20 April 2020 to facilitate rapid and open data sharing and analysis and to accelerate global SARS-CoV-2 and COVID-19 research. The COVID-19 Data Portal has fortnightly feature releases that continue to add new data types, search options, visualisations and improvements based on user feedback and research. The open datasets and intuitive suite of search, identification and download services represent a truly FAIR (Findable, Accessible, Interoperable and Reusable) resource that enables researchers to easily identify and quickly obtain the key datasets needed for their COVID-19 research.
[Spanish-language article with an English-language abstract.]
Abstract: The impact and universality of the SARS-CoV-2 pandemic have created a need for fast, accessible information to support decision-making among healthcare professionals. In 10 months, scientific production on this new coronavirus has exceeded 66,000 articles, according to the LitCovid database created by the National Library of Medicine, doubling and tripling every few weeks. This same urgency has shaped the main features of this voluminous production, beyond its continuous and exponential growth: greater dissemination in open access and preprint repositories, a certain acceleration of the manuscript review process by publishers, and an abundance of opinion articles, recommendations and comments compared with a smaller number of original articles presenting clinical data from large groups of patients.
Abstract: Preprints are an increasingly important component of the scholarly record, and preprint platforms have correspondingly grown in number. Academic communities value preprints for the opportunity to share early findings with peers and receive immediate feedback on not-yet-reviewed works. With the COVID pandemic, a broader audience is turning to preprints, as political leaders, journalists, and the public seek new information about the virus. Complications arise, however, when the unvetted nature of these works is not clearly signaled alongside discussions of their findings. In late 2020, Rick Anderson captured these concerns, highlighting cases where discredited preprints remained available to read, presenting a potential for misinformation. Anderson posited that preprint platform providers, not just editors, should ensure adequate preprint vetting and be willing to retract them. With the availability of two new open-source preprint platforms – PKP’s Open Preprint Systems (OPS) and Birkbeck’s Janeway preprint server – library publishers now have familiar, robust infrastructure for entering this space and are a logical home for such services, especially given a strong commitment to a specific research community. But what additional responsibilities must we accept – if any – as publishers of this genre? Should we establish terms for vetting of submissions? Without adequate domain knowledge, how would we enforce, or even audit, such terms? How do we indicate that a specific preprint’s findings have not yet been formally accepted? What about obligations regarding debunked publications? What are the responsibilities of platform providers, publishers, and editors? Should library publishers, as a community of practice, expand on the proposed best practices related to preprint metadata to ensure we are responsible actors in providing access to early research?
Panelists will explore these questions during the session’s first half, and invite attendee participation for the second. Registered attendees will receive an advance survey regarding current/planned preprint publishing, in order to identify additional discussion topics.
“A treatment for shortening the painful episodes of sickle cell disease (SCD) is not effective, results published in JAMA indicate. But the effort it took to publish the findings is an important part of the story and reveals problems with data ownership, company motivations, and public resources that go well beyond a single clinical trial or experimental agent….”