Why did clinical trial registries fail to prevent Covid research chaos?

“There is a long-standing global ethical obligation to register all trials before they start, shored up by regulatory requirements in some jurisdictions. Data from 18 registries worldwide feed into the WHO-managed International Clinical Trials Registry Platform (ICTRP), providing a continuously updated overview of who is researching what, when, where and how – at least in theory.

If the registry infrastructure had worked and been used as intended, much of the COVID-19 research chaos would have been avoided.

For example, researchers considering launching a hydroxychloroquine trial could have searched ICTRP and discovered that the drug was already being investigated by numerous other trials. Those researchers could accordingly have focused on investigating other treatment options instead, or aligned their outcome measures with existing trials. …

The global registry infrastructure has long been inadequately supported by legislators and regulators, and is woefully underfunded.

This persistent neglect of the world’s only comprehensive directory of medical research led to costly research waste on an incredible scale during the pandemic.

The WHO recommends that member states require by law that every interventional trial be registered and reported. In addition, the WHO recommends that all trial results be made public specifically on a registry within 12 months, and that registry data be kept up to date.

By enforcing these three simple rules, regulators would ensure that there is a comprehensive, up-to-date global database of all trials and their results.
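The three rules translate directly into checks a regulator (or registry) could run against each trial record. A minimal sketch, assuming a hypothetical record layout with illustrative field names (not the actual ICTRP schema):

```python
from datetime import date

def check_trial(record: dict, today: date) -> list:
    """Return a list of WHO-rule violations for one (hypothetical) trial record."""
    problems = []
    # Rule 1: prospective registration -- registered before the trial starts.
    if record["registered_on"] > record["start_date"]:
        problems.append("registered after trial start")
    # Rule 2: results posted on the registry within 12 months of completion.
    completed = record.get("completed_on")
    if completed and record.get("results_posted_on") is None:
        if (today - completed).days > 365:
            problems.append("results overdue (>12 months)")
    # Rule 3: registry entry kept up to date (here: touched within the last year).
    if (today - record["last_updated"]).days > 365:
        problems.append("registry entry stale")
    return problems
```

A record that breaks all three rules would yield all three violation strings; an empty list means the trial is compliant under this toy model.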

 

In reality, existing laws in the EU and the US only cover a small minority of trials and are not being effectively enforced, while many other jurisdictions have no relevant laws at all. …”

Recommendations for repositories and scientific gateways from a neuroscience perspective | Scientific Data

“Digital services such as repositories and science gateways have become key resources for the neuroscience community, but users often have a hard time orienting themselves in the service landscape to find the best fit for their particular needs. INCF has developed a set of recommendations and associated criteria for choosing or setting up and running a repository or scientific gateway, intended for the neuroscience community, with a FAIR neuroscience perspective….”

Supporting knowledge creation and sharing by building a standardised interconnected repository of biodiversity data | Zenodo

“This EOSC in practice story was developed within the Cos4cloud project and targets a very wide user base as it is addressed to any researchers, teachers, students, companies, institutions and, more generally, anyone interested in knowing, studying or analysing biodiversity information.

The story presents Cos4Bio, a co-designed, interoperable and open-source service that integrates biodiversity observations from multiple citizen observatories in one place, allowing experts to save time in the species identification process and get access to an enormous number of biodiversity observations. This resource is available on the EOSC Portal Catalogue and Marketplace …”

Recommendations for Discipline-Specific FAIRness Evaluation Derived from Applying an Ensemble of Evaluation Tools

Abstract:  From a research data repositories’ perspective, offering research data management services in line with the FAIR principles is becoming increasingly important. However, there exists no globally established and trusted approach to evaluate FAIRness to date. Here, we apply five different available FAIRness evaluation approaches to selected data archived in the World Data Center for Climate (WDCC). Two approaches are purely automatic, two approaches are purely manual and one approach applies a hybrid method (manual and automatic combined).

The results of our evaluation show an overall mean FAIR score of WDCC-archived (meta)data of 0.67 of 1, with a range of 0.5 to 0.88. Manual approaches show higher scores than automated ones and the hybrid approach shows the highest score. Computed statistics indicate that the test approaches show an overall good agreement at the data collection level.

We find that while none of the five evaluation approaches is fully fit-for-purpose to evaluate (discipline-specific) FAIRness, all have their individual strengths. Specifically, manual approaches capture contextual aspects of FAIRness relevant for reuse, whereas automated approaches focus on the strictly standardised aspects of machine actionability. Correspondingly, the hybrid method combines the advantages and eliminates the deficiencies of manual and automatic evaluation approaches.

Based on our results, we recommend future FAIRness evaluation tools to be based on a mature hybrid approach. Especially the design and adoption of the discipline-specific aspects of FAIRness will have to be conducted in concerted community efforts.
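The hybrid idea described in the abstract can be illustrated with a small sketch: combine per-principle scores from an automated tool and a manual review, letting each source fill the other's gaps. The scoring scheme below is purely illustrative, not the method of any published evaluation tool:

```python
# Hypothetical hybrid FAIRness score: per-principle scores in [0, 1] from an
# automated checker and a manual review are averaged; a principle that only
# one source can assess is scored from that source alone.
def hybrid_fair_score(automated: dict, manual: dict) -> float:
    """Return the mean F/A/I/R score across both evaluation sources."""
    principles = ["F", "A", "I", "R"]
    per_principle = []
    for p in principles:
        scores = [s for s in (automated.get(p), manual.get(p)) if s is not None]
        per_principle.append(sum(scores) / len(scores))
    return round(sum(per_principle) / len(principles), 2)
```

For instance, an automated tool might score machine-actionable aspects (F, A, I) while leaving reuse context (R) to the manual reviewer, and the hybrid score averages whatever each source could assess.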

Diversity matters in digital scholarly technology – A conversation with Mark Hahnel

“Mark Hahnel is the CEO and founder of Figshare, which he created whilst completing his PhD in stem cell biology at Imperial College London. Figshare currently provides research data infrastructure for institutions, publishers and funders globally. He is passionate about open science and the potential it has to revolutionize the research community. For the last eight years, Mark has been leading the development of research data infrastructure, with the core aim of reusable and interoperable academic data. Mark sits on the board of DataCite and the advisory board for the Directory of Open Access Journals (DOAJ). He was on the judging panel for the National Institutes of Health (NIH), Wellcome Trust Open Science prize and acted as an advisor for the Springer Nature master classes….”

Empowering Data Sharing and Analytics through the Open Data Commons for Traumatic Brain Injury Research | Neurotrauma Reports

Abstract:  Traumatic brain injury (TBI) is a major public health problem. Despite considerable research deciphering injury pathophysiology, precision therapies remain elusive. Here, we present large-scale data sharing and machine intelligence approaches to leverage TBI complexity. The Open Data Commons for TBI (ODC-TBI) is a community-centered repository emphasizing Findable, Accessible, Interoperable, and Reusable data sharing and publication with persistent identifiers. Importantly, the ODC-TBI implements data sharing of individual subject data, enabling pooling for high-sample-size, feature-rich data sets for machine learning analytics. We demonstrate pooled ODC-TBI data analyses, starting with descriptive analytics of subject-level data from 11 previously published articles (N = 1250 subjects) representing six distinct pre-clinical TBI models. Second, we perform unsupervised machine learning on multi-cohort data to identify persistent inflammatory patterns across different studies, improving experimental sensitivity for pro- versus anti-inflammation effects. As funders and journals increasingly mandate open data practices, ODC-TBI will create new scientific opportunities for researchers and facilitate multi-data-set, multi-dimensional analytics toward effective translation.

Digital Commons Data: for your institution’s RDM journey

“Digital Commons Data gives your institution the tools you need to successfully drive forward your Research Data Management program, with powerful features for researchers, administrators and data curators to store, manage, curate, share and preserve data. Part of the Digital Commons institutional repository system, Digital Commons Data is a turn-key, cloud-hosted, and fully supported module that delivers all the functionality to achieve an institutional research data management program without additional technical investment….”

Spend less time looking for articles with accessible datasets – The Official PLOS Blog

“We’re testing a new experimental open science feature intended to promote data sharing and reuse across the PLOS journal portfolio. A subset of PLOS articles that link to shared research data in a repository will display a prominent visual cue designed to help researchers find accessible data, and encourage best practice in data sharing….”

Generalist Repository Comparison Chart

“This chart is designed to assist researchers in finding a generalist repository should no domain repository be available to preserve their research data. Generalist repositories accept data regardless of data type, format, content, or disciplinary focus. For this chart, we included a repository available to all researchers specific to clinical trials (Vivli) to bring awareness to those in this field.” Undated.

Data and Software for Authors | AGU

“AGU requires that the underlying data needed to understand, evaluate, and build upon the reported research be available at the time of peer review and publication. Additionally, authors should make available software that has a significant impact on the research. This entails:

Depositing the data and software in a community accepted, trusted repository, as appropriate, and preferably with a DOI
Including an Availability Statement as a separate paragraph in the Open Research section explaining to the reader where and how to access the data and software
And including citation(s) to the deposited data and software, in the Reference Section….”

Data sharing practices across knowledge domains: a dynamic examination of data availability statements in PLOS ONE publications

Abstract:  As the importance of research data gradually grows in the sciences, data sharing has come to be encouraged and even mandated by journals and funders in recent years. Following this trend, the data availability statement has been increasingly embraced by academic communities as a means of sharing research data as part of research articles. This paper presents a quantitative study of which mechanisms and repositories are used to share research data in PLOS ONE articles. We offer a dynamic examination of this topic from the disciplinary and temporal perspectives based on all statements in English-language research articles published between 2014 and 2020 in the journal. We find a slow yet steady growth in the use of data repositories to share data over time, as opposed to sharing data in the paper or supplementary materials; this indicates improved compliance with the journal’s data sharing policies. We also find that multidisciplinary data repositories have been increasingly used over time, whereas some disciplinary repositories show a decreasing trend. Our findings can help academic publishers and funders to improve their data sharing policies and serve as an important baseline dataset for future studies on data sharing activities.
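A study like this one has to sort free-text availability statements into sharing mechanisms. A toy sketch of such a classifier, with categories and keyword patterns that are illustrative only (not the coding scheme used in the PLOS ONE study):

```python
import re

# Hypothetical repository names and phrase patterns for illustration;
# a real study would use a validated, much larger coding scheme.
def classify_das(statement: str) -> str:
    """Assign a data availability statement to a coarse sharing-mechanism category."""
    s = statement.lower()
    # Named repositories or DOIs suggest deposit in a data repository.
    if re.search(r"\b(zenodo|dryad|figshare|osf|dataverse|genbank)\b", s) or "doi.org" in s:
        return "repository"
    if "supporting information" in s or "supplementary" in s:
        return "supplementary materials"
    if "within the paper" in s or "within the manuscript" in s:
        return "in paper"
    if "upon request" in s or "on request" in s:
        return "upon request"
    return "other"
```

Running such a classifier over statements grouped by year and discipline is what makes the temporal and disciplinary trends in the abstract countable.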