A Survey of Researchers’ Needs and Priorities for Data Sharing

Abstract:  One of the ways in which the publisher PLOS supports open science is via a stringent data availability policy, established in 2014. Despite this policy, and the data sharing policies introduced by other organizations, best practices for data sharing are adopted by only a minority of researchers in their publications. Problems with effective research data sharing persist, and previous research has attributed these problems to a lack of time, resources, incentives, and/or skills to share data.

In this study we built on this research by investigating the importance of tasks associated with data sharing, and researchers’ satisfaction with their ability to complete these tasks. By investigating these factors we aimed to better understand opportunities for new or improved solutions for sharing data.

In May-June 2020 we surveyed researchers from Europe and North America, asking them to rate tasks associated with data sharing on (i) their importance and (ii) their satisfaction with their ability to complete them. We received 617 completed responses. We calculated mean importance and satisfaction scores to highlight potential opportunities for new solutions and to compare different cohorts.

Tasks relating to research impact, funder compliance, and credit had the highest importance scores. 52% of respondents reuse research data, but the average satisfaction score for obtaining data for reuse was relatively low. Tasks associated with sharing data were rated somewhat important, and respondents were reasonably satisfied with their ability to accomplish them. Notably, this included tasks associated with data sharing best practice, such as the use of data repositories. However, the most common method for sharing data was in fact via supplemental files with articles, which is not considered best practice.

We presume that researchers are unlikely to seek new solutions to a problem or task that they are satisfied with their ability to accomplish, even if many never attempt it. This implies there are few opportunities for new solutions or tools to meet these researcher needs. Publishers can likely meet these needs for data sharing by working to seamlessly integrate existing solutions that reduce the effort or behaviour change involved in some tasks, and by focusing on advocacy and education around the benefits of sharing data.

There may, however, be opportunities (unmet researcher needs) in relation to better supporting data reuse. These could be met in part by strengthening the data sharing policies of journals and publishers, and by improving the discoverability of data associated with published articles.
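A minimal sketch of the kind of scoring the abstract describes: mean importance and satisfaction per task, with the gap between them read as a rough signal of unmet need. The 1-5 scale, the task names and the response data below are invented for illustration; the survey's actual instrument and analysis may differ.

```python
# Hypothetical illustration of importance/satisfaction scoring (not the
# survey's actual analysis). Ratings are assumed to be on a 1-5 scale.
from statistics import mean

# Invented toy responses: task -> list of (importance, satisfaction) ratings.
responses = {
    "share data in a repository": [(4, 4), (3, 5), (4, 4)],
    "obtain data for reuse":      [(4, 2), (5, 2), (4, 3)],
    "comply with funder policy":  [(5, 4), (5, 4), (4, 4)],
}

for task, ratings in responses.items():
    importance = mean(r[0] for r in ratings)
    satisfaction = mean(r[1] for r in ratings)
    # A large positive gap suggests an unmet need, i.e. an opportunity for
    # new or improved solutions; a small or negative gap suggests researchers
    # are already satisfied with what is available.
    gap = importance - satisfaction
    print(f"{task}: importance={importance:.2f} "
          f"satisfaction={satisfaction:.2f} gap={gap:+.2f}")
```

On this toy data, "obtain data for reuse" would surface as the clearest opportunity, mirroring the abstract's conclusion that unmet needs cluster around data reuse rather than data sharing.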

Journal Checker Tool update: we listen and learn from you | Plan S

“We value feedback from researchers, institutions, funders, and publishers and based on it we always seek to improve the Journal Checker Tool (JCT) to ensure that all users get access to clear advice for Plan S compliance. The latest changes (dated 13th October 2021) include visual modifications, language simplification in the description of results and a new feature to share the results….”

Data sharing policies: share well and you shall be rewarded | Synthetic Biology | Oxford Academic

Abstract:  Sharing research data is an integral part of the scientific publishing process. By sharing data, authors enable their readers to use their results in a way that the textual description of the results does not allow by itself. To achieve this objective, data should be shared in a way that makes it as easy as possible for readers to import them into computer software where they can be viewed, manipulated and analyzed. Many authors and reviewers seem to misunderstand the purpose of the data sharing policies developed by journals. Rather than being an administrative burden that authors must comply with to get published, the objective of these policies is to help authors maximize the impact of their work by allowing other members of the scientific community to build upon it. Authors and reviewers need to understand the purpose of data sharing policies to assist editors and publishers in their efforts to ensure that every article published complies with them.
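As a small illustration of the abstract's point about machine-readability, the sketch below writes results to a tidy CSV file that any spreadsheet or analysis package can import directly, in contrast to numbers locked in prose or a PDF table. The file name, column names and values are all invented.

```python
# Invented example data: sharing results as a tidy, machine-readable CSV
# lets readers reload and reanalyse them directly, which a purely textual
# description of the same results does not.
import csv

measurements = [
    {"strain": "WT",     "replicate": 1, "fluorescence_au": 1023.5},
    {"strain": "WT",     "replicate": 2, "fluorescence_au": 998.1},
    {"strain": "mutant", "replicate": 1, "fluorescence_au": 411.7},
]

with open("fluorescence.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["strain", "replicate", "fluorescence_au"])
    writer.writeheader()
    writer.writerows(measurements)
```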

OAreport: Put OA policies into practice in minutes, not months.

“We discover papers and data using open scholarly metadata, targeted text and data mining, and an institution’s internal data sources….

We transparently analyse those papers against all the terms of the institution’s current policy, or custom criteria, to provide detailed statistics and key insights….

We help libraries and funders unlock individual papers as they’re published by making outreach a one-click process, and help build evidence for systemic changes….”
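The workflow OAreport describes, discovering papers from open scholarly metadata and testing them against policy terms, can be gestured at with a short sketch. This is not OAreport's code or criteria: it uses the public Unpaywall API (which asks callers to identify themselves by email) and an invented one-rule policy, purely to show the shape of such a check.

```python
# Hypothetical sketch of checking one paper against a simple OA policy rule,
# using the public Unpaywall API. OAreport's actual pipeline and policy
# criteria are not reproduced here; the "CC BY required" rule is invented.
import requests

EMAIL = "you@example.org"  # Unpaywall requires callers to identify themselves

def check_policy(doi: str) -> dict:
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": EMAIL})
    resp.raise_for_status()
    record = resp.json()
    loc = record.get("best_oa_location") or {}
    return {
        "doi": doi,
        "is_oa": record.get("is_oa", False),
        "host_type": loc.get("host_type"),  # e.g. "publisher" or "repository"
        "license": loc.get("license"),
        # Invented policy rule: the paper must be OA under a CC BY license.
        "compliant": bool(record.get("is_oa")) and loc.get("license") == "cc-by",
    }

if __name__ == "__main__":
    print(check_policy("10.1371/journal.pone.0000000"))  # placeholder DOI
```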

How should Dora be enforced? – Research Professional News

“One lesson is that the declaration’s authors did not consider redundancy as a possible outcome of research assessment, focusing instead on hiring, promotion and funding decisions. However, in my view, redundancy processes should not be delegated to crude metrics and should be informed by the principles of Dora. 

That said, it is not Dora’s job as an organisation to intervene in the gritty particulars of industrial disputes. Nor can we arbitrate in every dispute about research assessment practices within signatory organisations. …

Recently, we have re-emphasised that university signatories must make it clear to their academic staff what signing Dora means. Organisations should demonstrate their commitment to Dora’s principles to their communities, not seek accreditation from us. In doing so, they empower their staff to challenge departures from the spirit of the declaration. Grant conditions introduced by signatory funders such as the Wellcome Trust and Research England buttress this approach. 

Dora’s approach to community engagement taps into the demand for research assessment reform while acknowledging the lack of consensus on how best to go about it. The necessary reforms are complex, intersecting with the culture change needed to make the academy more open and inclusive. They also have to overcome barriers thrown up by academics comfortable with the status quo and the increasing marketisation of higher education. In such a complex landscape, Dora has no wish to be prescriptive. Rather, we need to help institutions find their own way, which will sometimes mean allowing room for course corrections….”

How misconduct helped psychological science to thrive

“Despite this history, before Stapel, researchers were broadly unaware of these problems or dismissed them as inconsequential. Some months before the case became public, a concerned colleague and I proposed to create an archive that would preserve the data collected by researchers in our department, to ensure reproducibility and reuse. A council of prominent colleagues dismissed our proposal on the basis that competing departments had no similar plans. Reasonable suggestions that we made to promote data sharing were dismissed on the unfounded grounds that psychology data sets can never be safely anonymized and would be misused out of jealousy, to attack well-meaning researchers. And I learnt about at least one serious attempt by senior researchers to have me disinvited from holding a workshop for young researchers because it was too critical of suboptimal practices….

Much of the advocacy and awareness has been driven by early-career researchers. Recent cases show how preregistering studies, replication, publishing negative results, and sharing code, materials and data can both empower the self-corrective mechanisms of science and deter questionable research practices and misconduct….

For these changes to stick and spread, they must become systemic. We need tenure committees to reward practices such as sharing data and publishing rigorous studies that have less-than-exciting outcomes. Grant committees and journals should require preregistration or explanations of why it is not warranted. Grant-programme officers should be charged with checking that data are made available in accordance with mandates, and PhD committees should demand that results are verifiable. And we need to strengthen a culture in which top research is rigorous and trustworthy, as well as creative and exciting….”

81% of Horizon 2020 papers were published in open access journals | Science|Business

“European Commission boasts of high level of open access publishing in Horizon 2020. But researchers complain that getting processing fees approved is long-winded and could result in them losing out on intellectual property rights….

A large majority of Horizon 2020 researchers complied with the requirement to deposit open access publications in repositories. However, only 39% of Horizon 2020 deposited datasets are findable, with the remainder not including reliable metadata needed to track them down. Only 32% of deposited datasets can be quickly accessed via a link in the metadata….

Since then, the EU has also mandated that all papers coming from projects funded through Horizon Europe, its €95.5 billion research programme, should be published in open access journals.

The study estimates that the average cost of publishing an open access article in Horizon 2020 was around €2,200. Processing charges for articles in hybrid subscription journals, in which some articles are open access and some are behind a paywall, were higher, averaging €2,600. Trouble is looming, with charges for such hybrid journals no longer being eligible for funding under Horizon Europe….”

Monitoring the open access policy of Horizon 2020 – Publications Office of the EU

The report examines, monitors and quantifies compliance with the open access requirements of Horizon 2020, for both publications and research data. Key findings indicate that the European Commission’s leadership in Open Science policy has paid off: open access to scientific publications increased steadily over the years, reaching an average rate of 83%. The study concludes with specific recommendations to improve the monitoring of compliance with the policy under Horizon Europe, which has a more stringent and comprehensive set of rights and obligations for Open Science. The data management plan and the datasets of the study are also available on data.europa.eu, the official portal for European data.

Do authors of research funded by the Canadian Institutes of Health Research comply with its open access mandate?: A meta-epidemiologic study

Overall, we found a significant decrease in the proportion of CIHR-funded studies published as OA in 2014 compared with 2017, though this difference did not persist when comparing 2014–2015 with 2016–2017. The primary limitation was the reliance on self-reported data from authors about their CIHR funding status. We posit that this decrease may be attributable to CIHR’s OA policy change in 2015. Further exploration is warranted both to validate these findings using a larger dataset and, if valid, to investigate the effects of potential interventions to improve OA compliance, such as the use of a CIHR publication database and the reinstatement of a policy requiring authors to submit their findings to OA repositories immediately upon publication.

Status, use and impact of sharing individual participant data from clinical trials: a scoping review | BMJ Open

Abstract:  Objectives To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data and on research output and impact of shared data.

Eligibility criteria All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials.

Sources of evidence We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication. In addition, we inspected major clinical trial data-sharing platforms, contacted major journals/publishers, editorial groups and some funders.

Charting methods Two reviewers independently extracted information on methods and results from resources identified using a standardised questionnaire. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain.

Results 93 studies identified in the literature search (published between 2001 and 2020, median: 2018) and 5 from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal. A survey of journal data suggests poor to moderate enforcement of the policies by publishers. Metrics provided by platforms suggest that a large majority of data remains unrequested. When requested, the purpose of the reuse is more often secondary analyses and meta-analyses, rarely re-analyses. Finally, studies focused on the real impact of data-sharing were rare and used surrogates such as citation metrics.

Conclusions There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High level evidence is needed to assess whether the value of medical research increases with data-sharing practices.

Clinical trial transparency and data sharing among biopharmaceutical companies and the role of company size, location and product type: a cross-sectional descriptive analysis | BMJ Open

Abstract:  Objectives To examine company characteristics associated with better transparency and to apply a tool used to measure and improve clinical trial transparency among large companies and drugs, to smaller companies and biologics.

Design Cross-sectional descriptive analysis.

Setting and participants Novel drugs and biologics Food and Drug Administration (FDA) approved in 2016 and 2017 and their company sponsors.

Main outcome measures Using established Good Pharma Scorecard (GPS) measures, companies and products were evaluated on their clinical trial registration, results dissemination and FDA Amendments Act (FDAAA) implementation; companies were ranked using these measures and a multicomponent data sharing measure. Associations between company transparency scores with company size (large vs non-large), location (US vs non-US) and sponsored product type (drug vs biologic) were also examined.

Results 26% of products (16/62) had publicly available results for all clinical trials supporting their FDA approval and 67% (39/58) had public results for trials in patients by 6 months after their FDA approval; 58% (32/55) were FDAAA compliant. Large companies were significantly more transparent than non-large companies (overall median transparency score of 95% (IQR 91–100) vs 59% (IQR 41–70), p<0.001), attributable to higher FDAAA compliance (median of 100% (IQR 88–100) vs 57% (0–100), p=0.01) and better data sharing (median of 100% (IQR 80–100) vs 20% (IQR 20–40), p<0.01). No significant differences were observed by company location or product type.

Conclusions It was feasible to apply the GPS transparency measures and ranking tool to non-large companies and biologics. Large companies are significantly more transparent than non-large companies, driven by better data sharing procedures and implementation of FDAAA trial reporting requirements. Greater research transparency is needed, particularly among non-large companies, to maximise the benefits of research for patient care and scientific innovation.

Clinical trial results for FDA-approved drugs often remain hidden, new study finds

“A team of American researchers examined 62 products by 42 pharma companies that gained FDA approval in 2016 and 2017. Collectively, these drugs and biologics were approved based on 1,017 clinical trials involving more than 187,000 participants….

Around a quarter of these trials were subject to the FDA Amendments Act, a transparency law that requires drug makers to register applicable trials on a public registry within 21 days of their start date, and to make their results public on the registry within 30 days of initial FDA approval of a product.

The study team found that 55 of the 62 FDA approvals included at least one clinical trial that was subject to the transparency law. However, in the case of 13 products, these trials did not consistently meet legal registration or reporting requirements.

Large pharma companies were far more likely to comply with the law. For example, Merck Sharp & Dohme was legally responsible for registering and reporting 27 trials, and fully complied in every single case. However, several other major players – Gilead, Johnson & Johnson / Janssen, Novo Nordisk, Sanofi, and Shire – fell short of legal requirements.

Nonetheless, the study – which also covered companies’ data sharing policies – found that overall, there had been “sustained improvement” in pharma industry disclosure practices compared to previous years….”

European law could improve ‘scandalous’ lack of clinical trial data reporting | Science | AAAS

“The global pandemic has turned a spotlight on clinical trials, which test thousands of drugs and therapies each year. In Europe, however, the enthusiasm for trials is not matched with a zeal for reporting the results to the public. A total of 3846 European trials—nearly 28% of 13,874 completed trials in the EU Clinical Trials Register (EUCTR) on 1 July—had not posted their results on the register, according to the latest data from the EU Trials Tracker, set up by U.K. researchers in 2018 to expose lax reporting. Public research hospitals and universities, not drugmakers, are responsible for the vast majority of the lapses, which appear to violate European rules that require sponsors to post their results within 1 year of a trial’s conclusion….”