Data sharing policies: share well and you shall be rewarded | Synthetic Biology | Oxford Academic

Abstract:  Sharing research data is an integral part of the scientific publishing process. By sharing data, authors enable their readers to use their results in a way that the textual description of the results does not allow by itself. In order to achieve this objective, data should be shared in a way that makes it as easy as possible for readers to import them into computer software where they can be viewed, manipulated and analyzed. Many authors and reviewers seem to misunderstand the purpose of the data sharing policies developed by journals. Rather than being an administrative burden that authors must comply with to get published, these policies aim to help authors maximize the impact of their work by allowing other members of the scientific community to build upon it. Authors and reviewers need to understand the purpose of data sharing policies to assist editors and publishers in their efforts to ensure that every article published complies with them.

Prospective Clinical Trial Registration: A Prerequisite for Publishing Your Results | Radiology

“The ICMJE requires that clinical trial results be published in the same clinical trial registry where the trial is registered. These results are in the form of a short (≤500 words) abstract or table (6,7). Full disclosure of the existing results publication in a clinical trial registry should be explicitly stated when the manuscript is submitted for publication. The Food and Drug Administration (FDA) has indicated it will enforce trial results reporting related to ClinicalTrials.gov (8). The FDA is authorized to seek civil monetary penalties from responsible parties, including additional penalties for continued noncompliance. In the United States, the sponsor of an applicable clinical trial is considered the responsible party, unless or until the sponsor designates a qualified principal investigator as the responsible party. The FDA issued its first Notice of Noncompliance in April 2021 for failure to report in ClinicalTrials.gov the safety and effectiveness results for the drug dalantercept in combination with axitinib in patients with advanced renal cell carcinoma (8).

Finally, as of July 1, 2018, manuscripts submitted to ICMJE journals that report the results of clinical trials must contain a data sharing statement. Clinical trials that begin enrolling participants on or after January 1, 2019, must include a data sharing plan in the trial registration (for further information, see www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html). Since most clinical trials take 2 or more years for results to be reported, the Radiology editorial board had expected such mandatory data sharing plans to be reported in the current year. However, because of the COVID-19 pandemic, many clinical trials were halted. Thus, journal publication requirements to include data sharing statements are more likely to impact authors beginning in 2023. Data sharing statements required for Radiological Society of North America (RSNA) journals may be found at https://pubs.rsna.org/page/policies#clinical.

In conclusion, prospective clinical trial registration is a mechanism allowing us to ensure transparency in clinical research conduct, honest and complete reporting of the clinical trial results, and minimization of selective result publications. Since its inception in 2004, this requirement has evolved into a policy that is practiced by major medical journals worldwide, is mandatory for publication of trial results, and, in some circumstances, is enforced by the FDA. Further, ICMJE journals, including RSNA journals, are expecting manuscripts that report trial results to include statements on data sharing. As each clinical trial design is unique, we encourage authors to refer to the full description of the current ICMJE policy at icmje.org for additional information pertaining to their specific circumstances.”

PsyArXiv Preprints | When open data closes the door: Problematising a one size fits all approach to open data in journal submission guidelines

Abstract:  Opening data promises to improve research rigour and democratise knowledge production. But it also poses practical, theoretical, and ethical risks for qualitative research. Despite discussion about open data in qualitative social psychology predating the replication crisis, the nuances of this discussion have not been translated into current journal policies. Through a content analysis of 261 journals in the domain of social psychology, we establish the state of current journal policies for open data. We critically discuss how these expectations may not be adequate for establishing qualitative rigour, can introduce ethical challenges, and may place those who wish to use qualitative approaches at a disadvantage in peer review and publication processes. We assert that open data requirements should include clearer guidelines that reflect the nuance of data sharing in qualitative research, and move away from a universal ‘one-size-fits-all’ approach to data sharing.


OSF Preprints | A survey of funders’ and institutions’ needs for understanding researchers’ open research practices

Abstract:  A growing number of research-performing organisations (institutions) and funding agencies have policies that support open research practices — sharing of research data, code and software. However, funders and institutions lack sufficient tools, time or resources to monitor compliance with these policies.

To better understand funder and institution needs related to understanding open research practices of researchers, we targeted funders and institutions with a survey in 2020 and received 122 completed responses. Our survey assessed and scored (from 0 to 100) the importance of and satisfaction with 17 factors associated with understanding open research practices. These include knowing if a research paper includes links to research data in a repository; knowing if a research grant made code available in a public repository; knowing if research data were made available in a reusable form; and knowing reasons why research data are not publicly available. Half of respondents had tried to evaluate researchers’ open research practices in the past and 78% plan to do this in the future. The most common method used to find out if researchers are practicing open research was personal contact with researchers, and the most common reason for doing it was to increase their knowledge of researchers’ sharing practices (e.g. determine current state of sharing; track changes in practices over time; compare different departments/disciplines). The results indicate that nearly all of the 17 factors we asked about in the survey were underserved. The mean importance of all factors to respondents was 71.7, approaching the 75 threshold of “very important”. The average satisfaction across all factors was 41.3, indicating a negative level of satisfaction with the ability to complete these tasks. The results imply an opportunity for better solutions to meet these needs. The growth of policies and requirements for making research data and code available does not appear to be matched with solutions for determining whether these policies have been complied with. We conclude that publishers can better support some of the needs of funders and institutions by introducing simple solutions such as:
– Mandatory data availability statements (DAS) in research articles
– Not permitting generic “data available on request” statements
– Enabling and encouraging the use of data repositories and other methods that make data available in a more reusable way
– Providing visible links to research data on publications
– Making information on data and code sharing practices in publications available to institutions and funding agencies
– Extending policies that require transparency in sharing of research data to the sharing of code

How can publishers better meet the open research needs of funders and institutions?

“Publishers investing in simple solutions in their workflows can help to better meet the needs of funders and institutions who wish to support open research practices, research released this week by PLOS concludes.

Policies can be an effective solution for changing research culture and practice. A growing number of research-performing organisations (institutions) and funding agencies have policies that support open research practices — sharing of research data, code and software — as do publishers. Seeking to deepen our understanding of funder and institution needs related to open research, we surveyed more than 100 funders and institutions in 2020. We wanted to know if they are evaluating how researchers share data and code, how they are doing it, why they are doing it, and how satisfied they are with their ability to get these tasks done. Our results are available as a preprint along with an anonymised dataset….

Simple solutions more publishers could provide include:

Mandatory Data Availability Statements (DAS) in all relevant publications.
Across the STM industry around 15% of papers include a DAS. Since we introduced our data availability policy in 2014, 100% of PLOS research articles include a DAS.
Supporting researchers to provide information on why research data (and code) are not publicly available with their publications.
Time and again “data available on request” has been shown to be ineffective at supporting new research — and is not permitted in PLOS journals. 
Enabling and encouraging the use of data repositories.
Recommending the use of data repositories is a useful step, but making them easily and freely accessible — integrated into the publishing process — can be even more effective. Rates of repository use are higher in journals that partner closely with repositories and remove cost barriers to their use.
Providing visible links to research data on publications. Many researchers also struggle to find data they can reuse, hence PLOS will soon be experimenting with improving this functionality in our articles, and integrating the Dryad repository with submission….”


Revisiting: Is There a Business Case for Open Data? – The Scholarly Kitchen

Looking back at this 2017 post brings a mixed bag of thoughts. First, the fortunes being made from collecting, curating, and selling access to consumer data still haven’t spilled across into research data, and that’s likely because a) relatively few research datasets are available, and b) for the most part, the ones that are available have inadequate metadata and incompatible structures, so that combining datasets for meta-analyses is scarcely worthwhile. Until we address the problem of missing research data – which (full disclosure) we’re trying to do with DataSeer – we can’t really make much headway with getting it all into a consistent format. However, while combining datasets for re-use is a core feature for consumer data, it’s only one of the reasons for sharing research data. Open data also allows readers to check the results of the paper itself, and perhaps this is where our attention to the ‘business model for open data’ should turn. In particular, peer review is considerably simpler when the authors submit computationally reproducible manuscripts. Editors and reviewers can then be sure that the datasets support the analyses and hence the results, allowing them to focus solely on the appropriateness of the experimental design and the significance of the conclusions. It’s therefore conceivable that journals could reduce the APC for computationally reproducible articles (or hike it for non-reproducible ones), thereby incentivizing the extra effort required to produce them. No matter what route we choose, it’s clear that our current incentive structures around open science (mostly strongly worded policies and the lure of extra citations) are not getting the job done, and we need to consider alternatives. Money can enter the equation at a few places: by only funding open science, as exemplified by Aligning Science Across Parkinson’s, by offsetting the extra effort required by researchers with additional financial resources, or by making things cheaper or non-open science more expensive. Let’s see where we go.

Editorial policy regarding the citation of preprints in the British Journal of Pharmacology (BJP) – George – British Journal of Pharmacology – Wiley Online Library

“Because of the increasing number of articles submitted to BJP over the past year that cite preprint material, the Editor-in-Chief and Senior Editors, with the full Editorial Board of BJP, have undertaken a review of the issues and our discipline-relevant data to set policy on the issue of preprint citation for the Journal….

The discussion so far has highlighted the negative aspects of preprints, but it is important to be balanced in our considerations and to note that, during the COVID-19 pandemic, the availability of preprints has been viewed as a key factor in the break-neck speed with which the biomedical research community has shared research on insights regarding the biology and clinical features of the infection, resulting in the rapid and timely delivery of much needed therapeutic options (Else, 2020)….

An excellent example is the Randomised Evaluation of COVID-19 Therapy (RECOVERY) trial, which showed the benefit of simple, low-cost dexamethasone and has saved many lives globally. The RECOVERY trial was published as a preprint on 22 June 2020 (Horby et al., 2020) and as a peer-reviewed article published as an epub in the New England Journal of Medicine on 17 July 2020 (RECOVERY collaborative group, 2021). Whilst it is highly likely that the preprint publication and sharing of the results saved lives during the short time between preprint posting and full publication, the data were made available to regulatory authorities and clinicians prior to full publication….

CONCLUSION: THE BJP WILL NOT ALLOW THE FORMAL CITATION OF PREPRINTS


The Editorial Board of the BJP support the principles of preprinting. However, given the potential risks associated with allowing the citation of preprints, it is our collective view, supported by feedback received from the journal’s international Editorial Board, that BJP should take all reasonable steps to avoid perpetuating these risks….

We are aware that the issue of preprint citation is under discussion at COPE and that the British Pharmacological Society is establishing a working group to review this issue more broadly across its publications. Thus, the stated editorial position will be reviewed, and if solutions to the problems highlighted above emerge, we will revisit our policy….”

Preprints and Medical Journals: Some Things You Should Know as an Author

“Due to the recent submission of preprints, we have had to add a preprint policy to our editorial policy and author guidelines for the Balkan Medical Journal. In this issue, for the first time, we are publishing a study that was previously posted to a preprint platform. We received an article a few months ago, but the author had not informed us that the article had been posted to a preprint platform before it was sent to our journal. Before sending the manuscript to the reviewers, we checked its similarity score and noticed that the similarity rate was extremely high. We found that this was due to the preprint version of the same manuscript. This experience led us to decide that we needed a preprint policy and that the journal should guide the authors in this regard. This is why we created our preprint policy, in consultation with our editorial board, and published it on the web. The full policy is now available online at https://balkanmedicaljournal.org/en/editorial-policy-1018. Now, through this editorial, we wish to inform the authors and readers about preprint publication and what they should look for….”

Full article: Making data meaningful: guidelines for good quality open data

“In the most recent editorial for The Journal of Social Psychology (JSP), J. Grahe (2021) set out and justified a new journal policy: publishing papers now requires authors to make available all data on which claims are based. This places the journal amongst a growing group of forward-thinking psychology journals that mandate open data for research outputs. It is clear that the editorial team hopes to raise the credibility and usefulness of research in the journal, as well as the discipline, through increased research transparency….

This commentary represents a natural and complementary alliance between the ambition of JSP’s open data policy and the reality of how data sharing often takes place. We share with JSP the belief that usable and open data is good for social psychology and supports effective knowledge exchange within and beyond academia. For this to happen, we must have not just more open data, but open data that is of a sufficient quality to support repeated use and replication (Towse et al., 2020). Moreover, it is becoming clear that researchers across science are seeking guidance, training and standards for open data provision (D. Roche et al., 2021; Soeharjono & Roche, 2021). With this in mind, we outline several simple steps and point toward a set of freely available resources that can help make datasets more valuable and impactful. Specifically, we explain how to make data meaningful: easily findable, accessible, complete and understandable. We have provided a simple checklist (Table 1) and useful resources (Appendix A) based on our recommendations; these can also be found on the project page for this article (https://doi.org/10.17605/OSF.IO/NZ5WS). While we have focused mostly on sharing quantitative data, much of what has been discussed remains relevant to qualitative research (for an in-depth discussion of qualitative data sharing, see DuBois et al., 2018)….”

Ouvrir la Science – Guidelines for journals that wish to establish a “data policy” related to their publications

“This document is designed for journals and editorial boards that wish to establish a data policy. A data policy defines what the journal expects from its authors in terms of managing and sharing the data related to its publications.

This document is intended in particular for editors of journals in the humanities and social sciences, as they have been relatively less active in this area than their counterparts in science, technology and medicine. However, it can be useful to all editors, regardless of the disciplinary scope of their journal….”

Announcing eLife and Dryad’s seamless data publishing integration | Inside eLife | eLife

“eLife and Dryad have long supported making research publicly accessible and reusable. Over the last nine years, Dryad has increasingly curated and published datasets supporting eLife publications. As the open science landscape continues to evolve, with a growing emphasis on best practices and making all research components openly available, both organisations acknowledge that the workflows need to be simplified. Working with eJournalPress, eLife and Dryad are pleased to announce Dryad’s first platform-based integration, allowing authors to submit datasets to Dryad seamlessly through eLife’s submission process.

As authors submit research to eLife, they will be prompted about data availability during the full submission. Authors are welcome to deposit their data to any suitable disciplinary repository and, if data do not yet have a home, authors will have the opportunity to upload their data to Dryad….”

A descriptive analysis of the data availability statements accompanying medRxiv preprints and a comparison with their published counterparts

Abstract:  Objective

To determine whether medRxiv data availability statements describe open or closed data—that is, whether the data used in the study is openly available without restriction—and to examine if this changes on publication based on journal data-sharing policy. Additionally, to examine whether data availability statements are sufficient to capture code availability declarations.

Design

Observational study, following a pre-registered protocol, of preprints posted on the medRxiv repository between 25th June 2019 and 1st May 2020 and their published counterparts.

Main outcome measures

Distribution of preprinted data availability statements across nine categories, determined by a prespecified classification system. Change in the percentage of data availability statements describing open data between the preprinted and published versions of the same record, stratified by journal sharing policy. Number of code availability declarations reported in the full-text preprint which were not captured in the corresponding data availability statement.

Results

3938 medRxiv preprints with an applicable data availability statement were included in our sample, of which 911 (23.1%) were categorized as describing open data. 379 (9.6%) preprints were subsequently published, and of these published articles, only 155 contained an applicable data availability statement. Similar to the preprint stage, a minority (59 (38.1%)) of these published data availability statements described open data. Of the 151 records eligible for the comparison between preprinted and published stages, 57 (37.7%) were published in journals which mandated open data sharing. Data availability statements more frequently described open data on publication when the journal mandated data sharing (open at preprint: 33.3%, open at publication: 61.4%) compared to when the journal did not mandate data sharing (open at preprint: 20.2%, open at publication: 22.3%).

Conclusion

Requiring that authors submit a data availability statement is a good first step, but is insufficient to ensure data availability. Strict editorial policies that mandate data sharing (where appropriate) as a condition of publication appear to be effective in making research data available. We would strongly encourage all journal editors to examine whether their data availability policies are sufficiently stringent and consistently enforced.

Increasing transparency through open science badges

“Authors who adopt transparent practices for an article in Conservation Biology are now able to select from 3 open science badges: open data, open materials, and preregistration. Badges appear on published articles as visible recognition and highlight these efforts to the research community. There is an emerging body of literature regarding the influences of badges, for example, an increased number of articles with open data (Kidwell et al. 2016) and an increased rate of data sharing (Rowhani-Farid et al. 2018). However, in another study, Rowhani-Farid et al. (2020) found that badges did not “noticeably motivate” researchers to share data. Badges, as far as we know, are the only data-sharing incentive that has been tested empirically (Rowhani-Farid et al. 2017).

Rates of data and code sharing are typically low (Herold 2015; Roche et al. 2015; Archmiller et al. 2020; Culina et al. 2020). Since 2016, we have asked authors of contributed papers, reviews, method papers, practice and policy papers, and research notes to tell us whether they “provided complete machine- and human-readable data and computer code in Supporting Information or on a public archive.” Authors of 31% of these articles published in Conservation Biology said they shared their data or code, and all authors provide human-survey instruments in Supporting Information or via a citation or online link (i.e., shared materials)….”

Principles of open, transparent and reproducible science in author guidelines of sleep research and chronobiology journals

Abstract:  Background: “Open science” is an umbrella term describing various aspects of transparent and open science practices. The adoption of practices at different levels of the scientific process (e.g., individual researchers, laboratories, institutions) has been rapidly changing the scientific research landscape in the past years, but their uptake differs from discipline to discipline. Here, we asked to what extent journals in the field of sleep research and chronobiology encourage or even require following transparent and open science principles in their author guidelines.

Methods: We scored the author guidelines of a comprehensive set of 27 sleep and chronobiology journals, including the major outlets in the field, using the standardised Transparency and Openness (TOP) Factor. The TOP Factor is a quantitative summary of the extent to which journals encourage or require following various aspects of open science, including data citation, data transparency, analysis code transparency, materials transparency, design and analysis guidelines, study pre-registration, analysis plan pre-registration, replication, registered reports, and the use of open science badges.

Results: Across the 27 journals, we find low values on the TOP Factor (median [25th, 75th percentile] 3 [1, 3], min. 0, max. 9, out of a total possible score of 29) in sleep research and chronobiology journals.

Conclusions: Our findings suggest an opportunity for sleep research and chronobiology journals to further support recent developments in transparent and open science by implementing transparency and openness principles in their author guidelines.

Collaborating with our community to increase code sharing

“Given how essential newly developed code can be to computational biology research, we have been collaborating with the Editorial Board of PLOS Computational Biology and consulting with computational biology researchers to develop a new, more rigorous code policy that is intended to increase code sharing on publication of articles….”