Abstract: In recent years, the scientific community has called for improvements in the credibility, robustness, and reproducibility of research, characterized by higher standards of scientific evidence, increased interest in open practices, and promotion of transparency. While progress has been positive, little consideration has been given to how this approach can be embedded into undergraduate and postgraduate research training. Currently, the impact of integrating an open and reproducible approach into the curriculum on student outcomes is not well articulated in the literature. Therefore, in this paper, we provide the first comprehensive review of how integrating open and reproducible scholarship into teaching and learning may impact students, using a large-scale, collaborative, team-science approach. Our review highlighted how embedding open and reproducible scholarship may impact: (1) students’ scientific literacies (i.e., students’ understanding of open research, consumption of science, and the development of transferable skills); (2) student engagement (i.e., motivation and engagement with learning, collaboration, and engagement in open research); and (3) students’ attitudes towards science (i.e., trust in science and confidence in research findings). Our review also identified a need for more robust and rigorous methods within evaluations of teaching practice. We discuss implications for teaching and learning scholarship in this area.
“In the past few years, a variety of articles have examined why attempts to replicate studies in the biomedical, natural and social sciences often fail. These debates on the so-called ‘replication crisis’ led Rik Peels and Lex Bouter in 2018 to ask: what about replication in the humanities? Scholars in the humanities go about their research in other ways than those in the sciences, because of differences in the sources, data and methods they work with, the types of questions they try to answer and the purposes they aim to serve. But two things both domains have in common are that they aspire to acquire knowledge that is not largely dependent on the idiosyncrasies of the researcher, and that their future studies often relate to or build upon the findings of previous ones. Might replication studies be a useful way to corroborate findings in the humanities? If so, what would they look like in various fields within the humanities, and how would they differ from replication in the biomedical, natural and social sciences? What aims would they strive for in terms of epistemic progress? What can the humanities learn from replication studies in the sciences, and vice versa? In addition, we need to ask whether and how, as Peels and Bouter suggested, replication might contribute to the trustworthiness of research in the humanities. Furthermore, concerns regarding replication studies in the humanities voiced by other scholars, such as Leonelli and Penders, Holbrook and De Rijcke, call for further investigation….”
Steinhardt I, Kruschick F (2022) Knowledge Equity and Open Science in qualitative research – Practical research considerations. Research Ideas and Outcomes 8: e86387. https://doi.org/10.3897/rio.8.e86387
How can Knowledge In/Equity be addressed in qualitative research by taking the idea of Open Science into account? Two projects from the Open Science Fellows Programme by Wikimedia Deutschland will be used to illustrate how Open Science practices can succeed in qualitative research, thereby reducing In/Equity. In this context, In/Equity is considered as a fair and equal representation of people, their knowledge and insights and comprehends questions about how epistemic, structural, institutional and personal biases generate and shape knowledge as guidance. Three questions guide this approach: firstly, what do we understand by In/Equity in the context of knowledge production in these projects? Secondly, who will be involved in knowledge generation and to what extent will they be valued or unvalued? Thirdly, how can data be made accessible for re-use to enable true participation and sharing?
“Are you passionate about improving research to make it more transparent, reusable and reproducible?
Are you looking for an opportunity to further develop your skills as a researcher?
Are you a team player interested in the science of science? Do you have excellent communication and organisational skills?
If so, we look forward to meeting you!
We are looking for a pre-doc candidate with good qualitative and/or quantitative skills in data collection, data analysis, and/or qualitative methods to work in a range of European Commission projects related to Open Science and the reproducibility of research. Initially, the candidate will work in projects to design and develop workflows and policies for Open Science, as well as studying Open Science platforms, tools and methods. The opportunity to pursue a Ph.D. in the candidate’s chosen discipline will be supported.
You will work in the interdisciplinary Open and Reproducible Research Group led by Dr. Tony Ross-Hellauer, which uses evidence-based approaches to make research cultures more open, transparent and participatory through new practices and technologies (https://orrg.eu)….”
“The KE Task and Finish group on FAIR Data and Software supporting Reproducible Research have produced a scoping document which provides an overview for this work.
We are inviting consultants to submit proposals to undertake work around ‘Minimum conditions supporting research reproducibility’. Full details of the work and its requirements are included in the Call for proposals document….”
Abstract: In order for science to be truly open, readers and reviewers must be able to understand how authors produced the computational results, which parameters were used for the analysis, and how manipulations of these parameters affect the results. Increasingly, journals and funding agencies are mandating that researchers share their code and data when reporting on computational results. However, even when data and code are provided by authors and published, they are oftentimes just posted as links and relegated to platforms entirely separated from publishing workflows, disconnected from the published “full text”. We believe that preprints are better suited than external repositories for enabling open, reproducible science because they are connected to the published full text via scholarly infrastructure, they are author-centric, and they allow versioning. In particular, we propose a simple (yet innovative and experimental) workflow whereby authors deposit a preprint version of their articles in an HTML-first preprint server. Authors can then enhance the preprint, through edits and revisions, with data, code, computational notebooks, interactive visualizations, and dashboards. As such, preprints can be used as an experimental vehicle for directly disseminating the interactive, data-driven, and multi-media nature of Open Science outputs, in parallel and connected with more traditional published outputs.
Sign up for Heidi Seibold’s newsletter on Open and Reproducible Data Science.
Abstract: The reproducibility crisis urges scientists to promote transparency, which allows peers to draw the same conclusions after performing identical steps from hypothesis to results. A growing number of resources are being developed to open access to methods, data and source code. Still, the computational environment, the interface between data and the source code running the analyses, is not addressed. Environments are usually described with software and library names associated with version labels, or provided as an opaque container image. This is not enough to describe the complexity of the dependencies on which they rely to operate. We describe this issue and illustrate how open tools like Guix can be used by any scientist to share their environment and allow peers to reproduce it. Some steps of research might not be fully reproducible, but transparency for computation, at least, is technically addressable. These tools should be considered by scientists willing to promote transparency and open science.
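The Guix workflow the abstract alludes to can be sketched roughly as follows. This is a minimal illustration, not taken from the paper; the package names in the manifest are placeholders for whatever the analysis actually needs:

```shell
# Record the exact Guix revision used for the analysis
# (a channels file pins Guix itself and all package definitions):
guix describe -f channels > channels.scm

# Declare the software the analysis depends on in a manifest
# (package names here are illustrative):
cat > manifest.scm <<'EOF'
(specifications->manifest
 (list "python" "python-numpy" "r-minimal"))
EOF

# A peer reproduces the same environment from the two files,
# independent of whatever Guix revision they have installed:
guix time-machine -C channels.scm -- shell -m manifest.scm
```

Sharing `channels.scm` and `manifest.scm` alongside code and data lets peers rebuild the full dependency graph, not just top-level package names and version labels.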
In this editorial, we describe the work that has been undertaken by the ESTS editorial collective (EC) over the last two years towards establishing a publishing infrastructure for open research data. A broad movement in the scholarly community is pushing towards data sharing or “open data,” particularly in the natural sciences and medicine. Recognizing that there are compelling reasons why scholars are wary of data sharing and careful to protect their work, our EC has pursued experiments towards establishing a publishing infrastructure. The goal is to better understand the possible benefits for the STS community from data sharing and the role that a scholarly-run journal like ESTS could play in realizing such opportunities. The sharing of data could serve as an archive of work in/for STS; offer greater recognition of diverse contributions to scholarly research beyond individual author(s); enable reuse of data for new insights and pedagogical opportunities; and engender new forms of scholarly community in the field.
The UKRN (Open Research) Project Officer will support a programme of activity to advance research and innovation culture at Oxford. The post-holder will work in close coordination with the UKRN Institutional Lead for Oxford, the Research Practice team in Research Services, and with the UKRN.
We are looking for an organised and confident communicator who will provide support across a range of projects. You will be responsible for the coordination of UKRN activities at the University of Oxford, for collaboration with other project officers and UKRN institutional leads across other linked institutions. You will manage, organise, and support the delivery and evaluation of training and other events, design and prepare a range of communication material, as well as undertake other tasks required of a project officer.
You will be based in the Open Scholarship Support team in the Bodleian Libraries, and you will work closely with the new Research Practice team in Research Services as well as with units within Divisions, Departments/Faculties, the Researcher Hub, and IT Services.
This post is fixed term for 3 years, and is available at 60% FTE (22.5 hours per week), with some flexibility about how this is achieved. The team is currently working in a hybrid manner.
You will be required to upload a CV and a supporting statement as part of your online application. Your supporting statement should list each of the essential and desirable selection criteria, as listed in the job description, and explain how you meet each one. CVs alone will not be considered.
Our staff and students come from all over the world, and we proudly promote a friendly and inclusive culture. Diversity is positively encouraged, through diverse groups and champions, as well as a number of family-friendly policies, such as the right to apply for flexible working and support for staff returning from periods of extended absence, for example shared parental leave.
Transparency and reproducibility are expected to be normative practices in clinical trials used for decision-making on marketing authorisations for new medicines. This registered report introduces a cross-sectional study aiming to assess inferential reproducibility for main trials assessed by the European Medicines Agency.
Two researchers independently identified all studies on new medicines, biosimilars and orphan medicines given approval by the European Commission between January 2017 and December 2019, categorised as ‘main studies’ in the European Public Assessment Reports (EPARs). Sixty-two of these studies were randomly sampled. One researcher retrieved the individual patient data (IPD) for these studies and prepared a dossier for each study, containing the IPD, the protocol and information on the conduct of the study. A second researcher who had no access to study reports used the dossier to run an independent re-analysis of each trial. All results of these re-analyses were reported in terms of each study’s conclusions, p-values, effect sizes and changes from the initial protocol. A team of two researchers not involved in the re-analysis compared results of the re-analyses with published results of the trial.
Two hundred ninety-two main studies in 173 EPARs were identified. Among the 62 studies randomly sampled, we received IPD for 10 trials. The median number of days between data request and data receipt was 253 [interquartile range 182–469]. For these ten trials, we identified 23 distinct primary outcomes for which the conclusions were reproduced in all re-analyses. Therefore, 10/62 trials (16% [95% confidence interval 8% to 28%]) were reproduced, as the 52 studies without available data were considered non-reproducible. There was no change from the original study protocol regarding the primary outcome in any of these ten studies. Spin was observed in the report of one study.
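The headline estimate above (10/62 trials reproduced, 95% CI 8% to 28%) is consistent with an exact Clopper-Pearson binomial interval. As a sketch (the report excerpt does not state which interval method was used, so Clopper-Pearson is an assumption), it can be recomputed with nothing but the Python standard library:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly term by term."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion, found by bisection."""
    def solve(f):
        # f is monotone decreasing with a sign change on (0, 1)
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower bound: p such that P(X >= k | p) = alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)))
    # upper bound: p such that P(X <= k | p) = alpha/2
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(10, 62)
print(f"10/62 reproduced: {10/62:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

The bounds round to the 8% and 28% reported above; the point estimate 10/62 rounds to 16%.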
Despite their results supporting decisions that affect millions of people’s health across the European Union, most main studies used in EPARs lack transparency and their results are not reproducible for external researchers. Re-analyses of the few trials with available data showed very good inferential reproducibility.
“Various factors contribute to the restricted access to materials: avoiding criticism, fear of falsification and retraction, or a desire to stay ahead of peers. Commercial and proprietary concerns also play a significant role in the decision of scientists and organizations to conceal replication materials (Campbell & Bendavid, 2002; Hong & Walsh, 2009). Such motivations are more prominent as the line between academic and commercially oriented research becomes blurred. Nowadays, commercial firms commonly publish in scientific journals, whereas scientists, universities, and research institutions benefit from the commercialization of research findings and often seek patent protection. All of this cultivates an environment of secrecy, in contrast with the scientific tradition of openness and sharing (Merton, 1942)….
Instead of choosing between IP rights and replicability, we suggest an inclusive approach that facilitates replications without depriving scientists of IP rights. Our proposal is to implement a new policy tool: the Conditional Access Agreement (CAA). Recall that it is public access to replication materials that jeopardizes both the prospect of securing patent protection (as novelty and non-obviousness are examined vis-à-vis the public prior art) and trade secret protection (since the pertinent information must be kept out of the public domain). Access, however, does not have to be public. This is precisely the gist of the CAA mechanism—establishing a private, controlled channel of communication for the transfer of replication materials between authors and replicators….
The CAA mechanism would work as follows (Fig 1): When submitting a paper for publication, an author would execute an agreement vis-à-vis the journal, pledging to provide full access to replication materials upon demand. The agreement would specify that anyone requesting access to the materials can only obtain it upon signing a non-disclosure agreement (NDA). Under an NDA, the receiving party commits to use the information disclosed by the other party only for a limited purpose while keeping it confidential. …”
“To aid researchers in development and validation of EEG biomarkers, and development of new (AI) methodologies, we hereby also announce our open access EEG dataset: the Two Decades Brainclinics Research Archive for Insights in Neuroscience (TDBRAIN)….
The whole raw EEG dataset as well as python code to preprocess the raw data is available at www.brainclinics.com/resources and can freely be downloaded using ORCID credentials….”
“In July 2022, the University of Mannheim became a member of the German Reproducibility Network (GRN), a multidisciplinary consortium advocating for more transparency in research. The University of Mannheim is the first university to join this network.
The German Reproducibility Network (GRN) is a multidisciplinary consortium that aims to increase trustworthiness and transparency of scientific research. Their focus is on the reproducibility of scientific results, whereby repetitions using the same or similar data, code, analyses, and methods yield the same results as the original study. The GRN was established in February 2021. Members are research institutions, scientific societies and reproducibility initiatives. The University of Mannheim is the first university to join the GRN. The University of Mannheim is committed to the goals of transparent and inclusive research practices, open access to scientific results, and reproducibility of research results. Therefore, the university established an Open Science Office in 2021. The Open Science Office supports researchers in implementing open science practices and brings open science issues into strategic discussions at the University of Mannheim. As a new member of the GRN, the University of Mannheim will share its experience with an institutional and interdisciplinary approach to open science and support other institutions in developing similar structures and activities. It will also share experiences, materials and information on open science and reproducibility for research with the GRN. The collaboration between the GRN and the University of Mannheim will advance efforts that lead to more open science and reproducibility in the research landscape throughout Germany….”
“In 2016 the University of Michigan Library embarked on an exciting initiative to address the research data management, sharing, and preservation needs of the University through providing a suite of data curation services. Data curation enhances the value of data sets through activities such as augmenting metadata, file format transformation, and digital preservation. The library’s data curators collaborate with researchers at all stages of the data lifecycle to provide support for sharing data in ways that are findable, accessible, interoperable and reusable (FAIR) as well as ethical. The Library operates its own data repository, Deep Blue Data, as a means of sharing and preserving research data developed at U-M.
The Michigan Institute for Data Science (MIDAS) is the university-wide unit to support data science and Artificial Intelligence (AI). Central to MIDAS’ mission is to ensure U-M’s leading role in data science and AI research through enabling interdisciplinary research and transforming traditional disciplinary research with cutting-edge data science and AI methods, as well as training for investigators. In the past two years, MIDAS’ work to promote and to enable ethical and reproducible data science and AI research has been particularly well received by research communities from across the University, and has made MIDAS known among academic data science institutes for leading such work. Reproducibility of research results is essential to any advancement in science. This, together with the ethical aspects of data science (making data representative, unbiased, and of high quality), is closely related and complementary to the Library’s data curation effort. The Library has also been a key collaborator in the research reproducibility activities organized by MIDAS.
All of the efforts from the Library and MIDAS to improve the FAIRness of data, the quality and representativeness of data, the ethical use of data and the reproducibility of data science and AI research not only play an essential role to ensure U-M’s research leadership, but are also expected by scholarly societies and funding agencies. A major obstacle, however, is that at U-M (and at most other leading universities) there remains an urgent need to develop tools and resources that researchers have easy access to, and training that builds researchers’ skills to use such resources. This new staff member will play a critical role in building systematic approaches for the massive adoption of best practices to ensure data FAIRness and the ethical and reproducible use of data.
The Data Curation and Research Reproducibility Specialist is a joint position between the Library’s Deep Blue Repository and Research Data Services unit and MIDAS. This position is centered on understanding the needs of researchers in the reproducibility of their work and building resources, curating research data submitted to the Deep Blue Data repository with an emphasis on supporting reproducibility, and developing actionable standards and practical tools to improve the data quality with respect to representativeness and quality….”