Frontiers | Open Science for Veterinary Education Research | Veterinary Science

“The manifesto for reproducible science (15) details a range of approaches that can be used to support more open research practices. For veterinary education, there are a number that can be integrated into our current practice….

Data sharing is another aspect of reporting which supports openness within education research. While data sharing is highly prevalent in some fields, there are complex ethical considerations regarding human data within social science contexts (32, 36). Where participants are informed and have consented to share their data, and where reasonable precautions are taken regarding ethical concerns (37), sharing data can help reduce unnecessary data collection, support the development of researchers in areas like the Global South (38), and help to catch errors within the research process (39).

Finally, dissemination and reporting can be further improved through pre-printing, the process of making articles available prior to peer-review. Pre-printing has a host of benefits (40, 41) including enhancing sight of the findings and facilitating open review, improving the transparency of peer review, and facilitating the publication of controversial findings. Pre-printing also allows for the sharing of author’s final version manuscripts, as they can be updated post peer-review. This will support the availability of research beyond paywalls. Unfortunately, not all journals support pre-printing. In the author’s experience, both Medical Teacher and Journal of Veterinary Medical Education have in 2020–2021 discouraged the use of pre-printing by considering it prior-publication, thus making pre-printed papers unable to be published by those journals. However, other journals, such as Frontiers in Veterinary Science support the use of open publishing approaches. Researchers must be cautious in pre-printing to ensure they are not inadvertently cutting themselves off from their desired audience, but should also participate in journal communities to encourage pre-printing where appropriate….”

Promoting Open Science Through Research Data Management

Abstract:  Data management, which encompasses activities and strategies related to the storage, organization, and description of data and other research materials, helps ensure the usability of datasets — both for the original research team and for others. When contextualized as part of a research workflow, data management practices can provide an avenue for promoting other practices, including those related to reproducibility and those that fall under the umbrella of open science. Not all research data needs to be shared, but all should be well managed to establish a record of the research process.

Study Shows Ensuring Reproducibility in Research Is Needed – IEEE Spectrum

“About 60 percent of IEEE conferences, magazines, and journals have no practices in place to ensure reproducibility of the research they publish. That’s according to a study by an ad hoc committee formed by the IEEE Computer Society to investigate the matter and suggest remedies.

Reproducibility—the ability to repeat a line of research and obtain consistent results—can help confirm the validity of scientific discoveries, IEEE Fellow Manish Parashar points out. He is chair of the society’s Committee on Open Science and Reproducibility….

The goal of the ad hoc committee’s study was to ensure that research results IEEE publishes are reproducible and that readers can look at the results and “be confident that they understand the processes used to create those results and they can reproduce them in their labs,” Parashar says….

Here are three key recommendations from the report:

Researchers should include specific, detailed information about the products they used in their experiment. When naming the software program, for example, authors should include the version and all necessary computer codes that were written. In addition, journals should make submitting the information easier by adding a step in the submission process. The survey found that 22 percent of the society’s journals, magazines, and conferences already have infrastructure in place for submitting such information.
All researchers should include a clear, specific, and complete description of how the reported results were reached. That includes input data, computational steps, and the conditions under which experiments and analysis were performed.
Journals and magazines, as well as scientific societies requesting submissions for their conferences, should develop and disclose policies about achieving reproducibility. Guidelines should include such information as how the papers will be evaluated for reproducibility and the criteria that code and data must meet….”
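
The first of these recommendations, naming exact software versions and code, can largely be automated at submission time. Below is a minimal, hedged Python sketch of that idea: it writes a small JSON manifest of the interpreter, operating system, and installed package versions to ship alongside a paper's code. The file name and manifest fields are illustrative assumptions, not an IEEE requirement.

```python
"""Capture a minimal environment manifest to accompany a paper's code.

A hedged sketch: the manifest fields and output file name are
illustrative choices, not part of any IEEE submission requirement.
"""
import json
import platform
import sys
from importlib import metadata


def build_manifest() -> dict:
    """Collect interpreter, OS, and installed-package versions."""
    return {
        "python_version": sys.version,
        "platform": platform.platform(),
        "packages": sorted(
            f"{dist.metadata['Name']}=={dist.version}"
            for dist in metadata.distributions()
        ),
    }


if __name__ == "__main__":
    with open("environment_manifest.json", "w", encoding="utf-8") as fh:
        json.dump(build_manifest(), fh, indent=2)
    print("Wrote environment_manifest.json")
```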

Toward Reusable Science with Readable Code and Reproducibility

Abstract:  An essential part of research and scientific communication is researchers’ ability to reproduce the results of others. While there have been increasing standards for authors to make data and code available, many of these files are hard to re-execute in practice, leading to a lack of research reproducibility. This poses a major problem for students and researchers in the same field who cannot leverage the previously published findings for study or further inquiry. To address this, we propose an open-source platform named RE3 that helps improve the reproducibility and readability of research projects involving R code. Our platform incorporates code-readability assessment, using a machine learning model trained on a code readability survey, and an automatic containerization service that executes code files and warns users of reproducibility errors. This process helps ensure the reproducibility and readability of projects and thereby fast-tracks their verification and reuse.
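
To illustrate the containerized-execution idea the abstract describes (this is not RE3's actual implementation), the sketch below re-runs an R script inside a pinned rocker/r-ver Docker image and treats a non-zero exit status as a potential reproducibility error. The image tag, mount path, and timeout are assumptions; Docker and the `docker` CLI are assumed to be available.

```python
"""Hedged sketch of containerized re-execution, in the spirit of RE3.

Not RE3's actual code: the Docker image, mount path, and timeout are
illustrative assumptions.
"""
import pathlib
import subprocess

R_IMAGE = "rocker/r-ver:4.3.1"  # assumed pinned base image


def check_r_script(script: pathlib.Path, timeout_s: int = 600) -> bool:
    """Run an R script in a fresh container; return True if it exits cleanly."""
    project_dir = script.resolve().parent
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{project_dir}:/workspace",  # mount the project directory
        "-w", "/workspace",
        R_IMAGE,
        "Rscript", script.name,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
    if result.returncode != 0:
        print(f"Reproducibility warning: {script.name} failed\n{result.stderr}")
        return False
    return True


if __name__ == "__main__":
    check_r_script(pathlib.Path("analysis.R"))
```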

Reproducibility: expect less of the scientific paper

“Many calls have been made to improve this scenario. Proposed measures include increasing sample sizes, preregistering protocols and using stricter statistical analyses. Another proposal is to introduce heterogeneity in methods and models to evaluate robustness — for instance, using more than one way to suppress gene expression across a variety of cell lines or rodent strains. In our work on the initiative, we have come to appreciate the amount of effort involved in following these proposals for a single experiment, let alone for an entire paper.

Even in a simple RT-PCR experiment, there are dozens of steps in which methods can vary, as well as a breadth of controls to assess the purity, integrity and specificity of materials. Specifying all of these steps in advance represents an exhaustive and sometimes futile process, because protocols inevitably have to be adapted along the way. Recording the entire method in an auditable way generates spreadsheets with hundreds of rows for every experiment.

We do think that the effort will pay off in terms of reproducibility. But if every paper in discovery science is to adopt this mindset, a typical high-profile article might easily take an entire decade of work, as well as a huge budget. This got us thinking about other, more efficient ways to arrive at reliable science….”

How misconduct helped psychological science to thrive

“Despite this history, before Stapel, researchers were broadly unaware of these problems or dismissed them as inconsequential. Some months before the case became public, a concerned colleague and I proposed to create an archive that would preserve the data collected by researchers in our department, to ensure reproducibility and reuse. A council of prominent colleagues dismissed our proposal on the basis that competing departments had no similar plans. Reasonable suggestions that we made to promote data sharing were dismissed on the unfounded grounds that psychology data sets can never be safely anonymized and would be misused out of jealousy, to attack well-meaning researchers. And I learnt about at least one serious attempt by senior researchers to have me disinvited from holding a workshop for young researchers because it was too critical of suboptimal practices….

Much of the advocacy and awareness has been driven by early-career researchers. Recent cases show how preregistering studies, replication, publishing negative results, and sharing code, materials and data can both empower the self-corrective mechanisms of science and deter questionable research practices and misconduct….

For these changes to stick and spread, they must become systemic. We need tenure committees to reward practices such as sharing data and publishing rigorous studies that have less-than-exciting outcomes. Grant committees and journals should require preregistration or explanations of why it is not warranted. Grant-programme officers should be charged with checking that data are made available in accordance with mandates, and PhD committees should demand that results are verifiable. And we need to strengthen a culture in which top research is rigorous and trustworthy, as well as creative and exciting….”

Preclinical Western Blot in the Era of Digital Transformation and Reproducible Research, an Eastern Perspective | SpringerLink

Abstract:  The current research is an interdisciplinary endeavor to develop a necessary tool for preclinical protein studies of diseases or disorders through western blotting. In the era of digital transformation and open access principles, an interactive cloud-based database called East–West Blot (https://rancs-lab.shinyapps.io/WesternBlots) was designed and developed. The online interactive subject-specific database, built on the R shiny platform, facilitates a systematic literature search on the specific subject matter, here set to western blot studies of protein regulation in the preclinical model of traumatic brain injury (TBI). The tool summarizes the existing publicly available knowledge through a data visualization technique and easy access to the critical data elements and links to the study itself. The application compiles a relational database of PubMed-indexed western blot studies labeled under HHS public access that report downstream protein regulation in the fluid percussion injury model of TBI. The promises of the developed tool include progressing toward implementing the principles of the 3Rs (replacement, reduction, and refinement) for humane experiments, cultivating the prerequisites of reproducible research in terms of reporting characteristics, paving the way for a more collaborative experimental design in basic science, and rendering an up-to-date and summarized perspective of current publicly available knowledge.

Incorporating open science into evidence-based practice: The TRUST Initiative

Abstract:  To make evidence-based policy, the research ecosystem must produce trustworthy evidence. In the US, federal evidence clearinghouses evaluate research using published standards designed to identify “evidence-based” interventions. Because their evaluations can affect billions of dollars in funding, we examined 10 federal evidence clearinghouses. We found their standards focus on study design features such as randomization, but they overlook issues related to transparency, openness, and reproducibility that also affect whether research tends to produce true results. We identified intervention reports used by these clearinghouses, and we developed a method to assess journals that published those evaluations. Based on the Transparency and Openness Promotion (TOP) Guidelines, we created new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices). Of the 340 journals that published influential research, we found that some endorse the TOP Guidelines, but standards for transparency and openness are not applied systematically and consistently. Examining the quality of our tools, we also found varying levels of interrater reliability across different TOP standards. Using our results, we delivered a normative feedback intervention. We informed editors how their journals compare with TOP and with their peer journals. We also used the Theoretical Domains Framework to develop a survey concerning obstacles and facilitators to implementing TOP; 88 editors responded and reported that they are capable of implementing TOP but lack motivation to do so. The results of this program of research highlight ways to support and to assess transparency and openness throughout the evidence ecosystem.

Making Strides in Research Reporting – The Official PLOS Blog

“PLOS keeps a watchful and enthusiastic eye on emerging research, and we update our policies as needed to address new challenges and opportunities that surface. In doing so, we work to advance our core mission and values aimed at transforming research communication and promoting Open Science. 

Here, I summarize a few key updates we made between 2016 and 2021….”

Social Science Reproduction Platforms

“The Social Science Reproduction Platform (SSRP) is an openly licensed platform that facilitates the sourcing, cataloging, and review of attempts to verify and improve the computational reproducibility of social science research. Computational reproducibility is the ability to reproduce the results, tables, and other figures found in research articles using the data, code, and materials made available by the authors. The SSRP is meant to be used in combination with the Guide for Accelerating Computational Reproducibility (ACRe Guide), a protocol that includes detailed steps and criteria for assessing and improving reproducibility.

Assessments of reproducibility often gravitate towards binary judgments that declare entire papers as “reproducible” or “not reproducible”. The SSRP allows for a more nuanced approach to reproducibility, where reproducers analyze individual claims and their associated display items, and take concrete steps to improve their reproducibility. SSRP reproductions are transparent and reproducible in themselves since they are based on the ACRe Guide’s standardized reproduction protocol and publicly document their analyses to allow for collaboration, discussion, and reuse. Sign up for a free account now to get started in improving computational reproducibility—one claim at a time!

SSRP was developed as part of the Accelerating Computational Reproducibility in Economics (ACRE) project led by the Berkeley Initiative for Transparency in the Social Sciences (BITSS) in collaboration with the AEA Data Editor….”
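
To make the claim-level approach concrete, here is a hypothetical sketch, not taken from the SSRP or the ACRe Guide, of reproducing a single display item: recompute one reported estimate from the authors' shared data and compare it with the published value within a stated tolerance. The file name, column name, and reported value are placeholders.

```python
"""Hypothetical claim-level reproduction check, in the spirit of the SSRP.

The data file, column name, and reported value below are placeholders,
not taken from any real study.
"""
import csv
import statistics

REPORTED_MEAN = 4.2   # value printed in the paper's table (placeholder)
TOLERANCE = 0.01      # acceptable absolute discrepancy (analyst's choice)


def reproduce_claim(data_path: str, column: str) -> bool:
    """Recompute a reported mean from shared data and compare it to the paper."""
    with open(data_path, newline="", encoding="utf-8") as fh:
        values = [float(row[column]) for row in csv.DictReader(fh)]
    recomputed = statistics.mean(values)
    ok = abs(recomputed - REPORTED_MEAN) <= TOLERANCE
    print(f"reported={REPORTED_MEAN} recomputed={recomputed:.4f} match={ok}")
    return ok


if __name__ == "__main__":
    reproduce_claim("shared_data.csv", "outcome")
```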

CODECHECK: an Open Science initiative for the independent execution of computations underlying research articles during peer review to improve reproducibility

Abstract:  The traditional scientific paper falls short of effectively communicating computational research.  To help improve this situation, we propose a system by which the computational workflows underlying research articles are checked. The CODECHECK system uses open infrastructure and tools and can be integrated into review and publication processes in multiple ways. We describe these integrations along multiple dimensions (importance, who, openness, when). In collaboration with academic publishers and conferences, we demonstrate CODECHECK with 25 reproductions of diverse scientific publications. These CODECHECKs show that asking for reproducible workflows during a collaborative review can effectively improve executability. While CODECHECK has clear limitations, it may represent a building block in Open Science and publishing ecosystems for improving the reproducibility, appreciation, and, potentially, the quality of non-textual research artefacts. The CODECHECK website can be accessed here: https://codecheck.org.uk/.
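
The check CODECHECK describes can be pictured as re-running a declared workflow and recording whether its expected outputs are regenerated. The sketch below illustrates that idea under an assumed manifest format, command, and file names; it is not the actual CODECHECK tooling or specification (see https://codecheck.org.uk/ for those).

```python
"""Illustrative re-execution check in the spirit of CODECHECK.

The manifest format, command, and file names are assumptions, not the
CODECHECK specification.
"""
import subprocess
from pathlib import Path

# Assumed manifest: the command the authors declare, plus the outputs
# (figures/tables) a codechecker expects the command to regenerate.
MANIFEST = {
    "command": ["python", "analysis.py"],
    "expected_outputs": ["figures/figure1.png", "tables/table2.csv"],
}


def run_codecheck(workdir: str = ".") -> dict:
    """Execute the declared command and report which outputs were produced."""
    result = subprocess.run(MANIFEST["command"], cwd=workdir)
    produced = {
        out: (Path(workdir) / out).exists()
        for out in MANIFEST["expected_outputs"]
    }
    report = {"exit_code": result.returncode, "outputs_regenerated": produced}
    print(report)
    return report


if __name__ == "__main__":
    run_codecheck()
```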

eLabFTW as an Open Science tool to improve the quality and translation of preclinical research

Abstract:  Reports of non-replicable research demand new methods of research data management. Electronic laboratory notebooks (ELNs) are suggested as tools to improve the documentation of research data and make them universally accessible. In a self-guided approach, we introduced the open-source ELN eLabFTW into our life-science lab group and, after using it for a while, consider it a useful tool for overcoming hurdles in ELN adoption: it provides a combination of properties that makes it suitable for small life-science labs like ours. We set up our instance of eLabFTW without any further programming needed. Our effort to embrace an open data approach by introducing an ELN fits well with other institutionally organized ELN initiatives in academic research and with our goals for data quality management.

Clearinghouse Standards of Evidence on the Transparency, Openness, and Reproducibility of Intervention Evaluations | SpringerLink

Abstract:  Clearinghouses are influential repositories of information on the effectiveness of social interventions. To identify which interventions are “evidence-based,” clearinghouses review intervention evaluations using published standards of evidence that focus primarily on internal validity and causal inferences. Open science practices can improve trust in evidence from evaluations on the effectiveness of social interventions. Including open science practices in clearinghouse standards of evidence is one of many efforts that could increase confidence in designations of interventions as “evidence-based.” In this study, we examined the policies, procedures, and practices of 10 federal evidence clearinghouses that review preventive interventions—an important and influential subset of all evidence clearinghouses. We found that seven consider at least one open science practice when evaluating interventions: replication (6 of 10 clearinghouses), public availability of results (6), investigator conflicts of interest (3), design and analysis transparency (3), study registration (2), and protocol sharing (1). We did not identify any policies, procedures, or practices related to analysis plan registration, data sharing, code sharing, material sharing, and citation standards. We provide a framework with specific recommendations to help federal and other evidence clearinghouses implement the Transparency and Openness Promotion (TOP) Guidelines. Our proposed “TOP Guidelines for Clearinghouses” includes reporting whether evaluations used open science practices, incorporating open science practices in their standards for receiving “evidence-based” designations, and verifying that evaluations used open science practices. Doing so could increase the trustworthiness of evidence used for policy making and support improvements throughout the evidence ecosystem.

Reproducibility and research integrity

“Research integrity is an important driver of reliable and trustworthy research, and includes issues such as reproducibility and replicability. There is a need to promote robust research, starting at the lab bench and extending to the dissemination of findings to the scientific community, as well as the public. 

Following a call from the UK House of Commons Science and Technology Committee for evidence on reproducibility and research integrity, and the roles different institutions play in this, BMC Research Notes has partnered with the UK Reproducibility Network to provide a platform to share feedback on the topic with the wider scientific community. 

In this BMC Research Notes collection, we welcome contributions on the following topics:

Factors that influence reproducibility and research integrity;
The role of different stakeholders in addressing these factors;
Proposals for improving research integrity and quality;
Guidance and support for researchers….”