Abstract: The impact of COVID-19 has underlined the need for reliable information to guide clinical practice and policy. This urgency has to be balanced against disruption to journal handling capacity and the continued need to ensure scientific rigour. We examined the reporting quality of highly disseminated COVID-19 research papers using a bibliometric analysis examining reporting quality and risk of bias (RoB) amongst the 250 top-scoring Altmetric Attention Score (AAS) COVID-19 research papers published between January and April 2020. Method-specific RoB tools were used to assess quality. After exclusions, 84 studies from 44 journals were included. Forty-three (51%) were case series/studies, and only one was a randomized controlled trial. Most authors were from institutions based in China (n = 44, 52%). The median AAS and impact factor were 2015 (interquartile range [IQR] 1,105–4,051.5) and 12.8 (IQR 5–44.2) respectively. Nine studies (11%) utilized a formal reporting framework, 62 (74%) included a funding statement, and 41 (49%) were at high RoB. This review of the most widely disseminated COVID-19 studies highlights a preponderance of low-quality case series, with few research papers adhering to good standards of reporting. It emphasizes the need for cautious interpretation of research and the increasingly vital responsibility that journals have in ensuring high-quality publications.
Abstract: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic will be remembered as one of the defining events of the 21st century. The rapid global outbreak has had significant impacts on human society and is already responsible for millions of deaths. Understanding and tackling the impact of the virus has required worldwide mobilisation and coordination of scientific research. The COVID-19 Data Portal (https://www.covid19dataportal.org/) was first released as part of the European COVID-19 Data Platform on April 20th 2020 to facilitate rapid and open data sharing and analysis, and to accelerate global SARS-CoV-2 and COVID-19 research. The COVID-19 Data Portal has fortnightly feature releases that continue to add new data types, search options, visualisations and improvements based on user feedback and research. The open datasets and intuitive suite of search, identification and download services represent a truly FAIR (Findable, Accessible, Interoperable and Reusable) resource that enables researchers to easily identify and quickly obtain the key datasets needed for their COVID-19 research.
[Spanish-language article with an English-language abstract.]
Abstract: The impact and universality of the SARS-CoV-2 pandemic has created the need for rapid, accessible information to support decision-making among healthcare professionals. In 10 months, scientific production on this new coronavirus has exceeded 66 thousand articles, according to the LitCovid database created by the National Library of Medicine, doubling and tripling every few weeks. This same urgency has shaped some of the main features of this voluminous output, in addition to its continuous and exponential growth: greater dissemination via open access and preprint repositories, a certain acceleration of the manuscript review process by publishers, and an abundance of opinion articles, recommendations or comments compared with a smaller number of original articles reporting clinical data from large groups of patients.
Abstract: Preprints are an increasingly important component of the scholarly record and preprint platforms have correspondingly grown in number. Academic communities value preprints for the opportunity to share early findings with peers and receive immediate feedback on not-yet-reviewed works. With the COVID pandemic, a broader audience is turning to preprints, as political leaders, journalists, and the public seek new information about the virus. Complications arise, however, when the unvetted nature of these works is not clearly signaled alongside discussions of their findings. In late 2020, Rick Anderson captured these concerns, highlighting cases where discredited preprints remained available to read, presenting a potential for misinformation. Anderson posited that preprint platform providers, not just editors, should ensure adequate preprint vetting and be willing to retract them. With the availability of two new open-source preprint platforms, PKP’s Open Preprint Systems (OPS) and Birkbeck’s Janeway preprint server, library publishers now have familiar, robust infrastructure for entering this space and are a logical home for such services, especially given a strong commitment to a specific research community. But what additional responsibilities, if any, must we accept as publishers of this genre? Should we establish terms for vetting of submissions? Without adequate domain knowledge, how would we enforce, or even audit, such terms? How do we indicate that a specific preprint’s findings have not yet been formally accepted? What about obligations regarding debunked publications? What are the responsibilities of platform providers, publishers, and editors? Should library publishers, as a community of practice, expand on the proposed best practices related to preprint metadata to ensure we are responsible actors in providing access to early research?
Panelists will explore these questions during the session’s first half and invite attendee participation during the second. Registered attendees will receive an advance survey regarding current and planned preprint publishing, in order to identify additional discussion topics.
“A treatment for shortening the painful episodes of sickle cell disease (SCD) is not effective, results published in JAMA indicate. But the effort it took to publish the findings is an important part of the story and reveals problems with data ownership, company motivations, and public resources that go well beyond a single clinical trial or experimental agent….”
“ICMRA1 and WHO call on the pharmaceutical industry to provide wide access to clinical data for all new medicines and vaccines (whether full or conditional approval, under emergency use, or rejected). Clinical trial reports should be published without redaction of confidential information for reasons of overriding public health interest….
Regulators continue to spend considerable resources negotiating transparency with sponsors. Both positive and negative clinically relevant data should be made available, while only personal data and individual patient data should be redacted. In any case, aggregated data are unlikely to lead to re-identification of personal data and techniques of anonymisation can be used….
Providing systematic public access to data supporting approvals and rejections of medicines reviewed by regulators is long overdue, despite existing initiatives such as those from the European Medicines Agency and Health Canada. The COVID-19 pandemic has revealed how essential access to data is to public trust. ICMRA and WHO call on the pharmaceutical industry to commit, within short timelines and without waiting for legal changes, to provide voluntary unrestricted access to trial results data for the benefit of public health.”
“medRxiv has been a terrific help to the scientific community during the pandemic. It has sped the communication of science and fostered interactions among scientists around the world. It is an open and rapid way to share pre-peer reviewed studies. For the most part, people seemed to have quickly realized that this is science in progress, and not to take it as truth — but as work open for comment. It has embedded the preprint culture in a way that I hope will be sustained and spread.
I am not aware of any harm that has accrued, and I am aware that many good interactions have resulted from the sharing of the information. And it is certainly better than science by press release alone. Also, importantly, our screening process is intended to protect the public’s interest — safeguarding privacy, promoting registration, requiring ethics approval, and ensuring that dangerous claims are avoided….”
“On behalf of the Association of American Universities (AAU) and the Association of Public and Land-grant Universities (APLU), we are pleased to present this Guide to Accelerate Public Access to Research Data. The Guide is intended to serve as a resource to help university administrators develop robust support systems to accelerate sharing of research data. It provides advice to universities concerning actions they can take, as well as the infrastructure and support that may be required to improve access to research data on their respective campuses. It also offers examples of how institutions are approaching specific challenges to providing public access to research data and results. Advancing public access to research data is important to improving transparency and reproducibility of scientific results, increasing scientific rigor and public trust in science, and — most importantly — accelerating the pace of discovery and innovation through the open sharing of research results. Additionally, it is vital that institutions develop and implement policies now to ensure consistency of data management plans across their campuses to guarantee full compliance with federal research agency data sharing requirements. Beyond the establishment of policies, universities must invest in the infrastructure and support necessary to achieve the desired aspirations and aims of the policies. The open sharing of the results of scientific research is a value our two associations have long fought to protect and preserve. It is also a value we must continue to uphold at all levels within our universities. This will mean overcoming the various institutional and cultural impediments which have, at times, hampered the open sharing of research data….”
“The Association of American Universities (AAU) and the Association of Public and Land-grant Universities (APLU) have released their Guide to Accelerate Public Access to Research Data, the result of two years of work and national summits as part of the Accelerating Public Access to Research Data (APARD) program.
As a tool and framework for university administrators—specifically provosts, senior research officers, and IT leaders—the four-part guide is meant to “facilitate adoption of new institutional policies, procedures, and approaches that actively support and promote research data sharing, while at the same time ensuring rigor in the research process and the veracity of its intellectual outputs.” Included throughout the guide are recommendations, actions, and institutional examples and case studies for public access to research data….
Possible actions ARL member representatives can take with the release of the Guide to Accelerate Public Access to Research Data include:
Establish public access to research data as a library organization priority through incorporation into strategic plans, statements of principles, mission, and value statements.
Articulate the libraries’ role in accelerating public access to data with a mindset of culture change. How is your library working from the bottom up (with faculty and graduate students), the middle out (with department chairs and center directors), and the top down (with provosts, presidents, vice presidents for research, and others) to engage and influence public access to data?
Partner with campus stakeholders identified in the guide to begin mapping campus research data resources….”
“This is our proposal for how we might create a radically new scholarly publishing system with the potential to disrupt the scholarly publishing industry. The proposed model is: (a) open, (b) objective, (c) crowdsourced and community-controlled, (d) decentralised, and (e) capable of generating prestige. Submitted articles are openly rated by researchers on multiple dimensions of interest (e.g., novelty, reliability, transparency), and ‘impact prediction algorithms’ are trained on these data to classify articles into journal ‘tiers’.
In time, with growing adoption, the highest impact tiers within such a system could develop sufficient prestige to rival even the most established of legacy journals (e.g., Nature). In return for their support, researchers would be rewarded with prestige, nuanced metrics, reduced fees, faster publication rates, and increased control over their outputs….”