“This September 16th, 2023, is Software Freedom Day, an annual worldwide celebration coordinated by the Digital Freedom Foundation to raise awareness of what it means to use free software and to encourage its use.
At the same time, PKP celebrates a quarter of a century developing and maintaining free and open software with the scholarly publishing community. Since its very beginnings, motivated by the barriers to sharing research resources publicly and openly, PKP has taken decisive action to distribute its applications freely.
In this blog post, PKP joins in on the celebrations, and takes a moment to share a key message: PKP is and always will be dedicated to the development of free software….”
“We are pleased to announce that Version 1.0 of the Amsterdam Declaration on Funding Research Software Sustainability (ADORE.software) is now released. ADORE.software is the first step towards formalising, on a global level, the basic principles and recommendations related to funding the sustainability of research software, including the people needed to achieve this goal. Now that Version 1.0 has been released, funding organisations that support research software, and/or the people who develop and maintain it, are invited to formally sign ADORE.software.
The declaration was initiated in November 2022 by the Research Software Alliance and the Netherlands eScience Center, which organised the International Funders Workshop: The Future of Research Software, focused on creating the first draft of the declaration. Since the workshop, the research software community around the world, including members of the combined Research Data Alliance and Research Software Funders Forum, has provided input towards this release of the declaration….”
“The Centre for Science and Technology Studies at Leiden University in the Netherlands, which publishes university rankings, plans to start a new ranking based entirely on open data and open algorithms in 2024.
The open-source CWTS ranking will sit alongside listings produced, as in previous years, based on bibliographic data from the Web of Science database of Clarivate….”
“JooYoung Seo, a professor of information sciences at the University of Illinois Urbana-Champaign, is developing a data visualization tool that will help make visual representations of statistical data accessible to researchers and students who are blind or visually impaired.
The multimodal representation tool is aimed at the accessibility of statistical graphs, such as bar plots, box plots, scatter plots and heat maps….
The tool, called Multimodal Access and Interactive Data Representation, presents data through sonification, text and Braille….
His accessibility module will be added to the Teach Access repository, and Seo plans to share it on GitHub as an open-source project. He’ll also introduce it to his data science students during this academic year….”
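The announcement does not include implementation details, but the core idea of sonification (mapping data values to pitch so a listener can hear the shape of a plot) can be sketched in a few lines. Everything below, including the function name, frequency range, and tone length, is a hypothetical illustration, not Seo's actual tool:

```python
import math
import struct
import wave

def sonify(values, path="sonified.wav", rate=22050, note_sec=0.3):
    """Toy sonification: map each data value to a sine tone whose pitch
    rises linearly with the value (220 Hz for the minimum value, 880 Hz
    for the maximum), and write the sequence of tones to a WAV file."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    frames = bytearray()
    for v in values:
        freq = 220 + 660 * (v - lo) / span
        for i in range(int(rate * note_sec)):
            # 16-bit mono samples, amplitude well below the int16 limit
            sample = int(20000 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return path

sonify([3, 1, 4, 1, 5])  # five tones whose pitch tracks the values
```

A real multimodal tool would add text descriptions and Braille output alongside the audio; this sketch only shows the audio channel.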
Today, CISA released an Open Source Software Security Roadmap to lay out—in alignment with the National Cybersecurity Strategy and the CISA Cybersecurity Strategic Plan—how we will partner with federal agencies, open source software (OSS) consumers, and the OSS community, to secure OSS infrastructure. To that end, the roadmap details four key goals….
“pyOpenSci is accepting applications for a Community Manager. The Community Manager supports growth and development of an inclusive pyOpenSci community. Our vibrant community is dedicated to supporting high quality Python open source software that drives open science.”
Abstract: The growing impact of preprint servers enables the rapid sharing of time-sensitive research. At the same time, it is becoming increasingly difficult to distinguish high-quality, peer-reviewed research from preprints. Although preprints are often later published in peer-reviewed journals, this information is often missing from preprint servers. To overcome this problem, the PreprintResolver was developed, which uses four literature databases (DBLP, SemanticScholar, OpenAlex, and CrossRef / CrossCite) to identify preprint-publication pairs for the arXiv preprint server. The target audience includes, but is not limited to, inexperienced researchers and students, especially from the field of computer science. The tool is based on fuzzy matching of author surnames, titles, and DOIs. Experiments were performed on a sample of 1,000 arXiv preprints from the research field of computer science that lacked any publication information. At 77.94%, computer science is highly affected by missing publication information in arXiv. The results show that the PreprintResolver was able to resolve 603 out of 1,000 (60.3%) of these preprints. All four literature databases contributed to the final result. In a manual validation, a random sample of 100 resolved preprints was checked. For all preprints, at least one result is plausible. For nine preprints, more than one result was identified, three of which are partially invalid. In conclusion, the PreprintResolver is suitable for individual, manually reviewed requests, but less suitable for bulk requests. The PreprintResolver tool (this https URL, Available from 2023-08-01) and source code (this https URL, Accessed: 2023-07-19) are available online.
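The abstract does not reproduce the matching logic, but the kind of fuzzy matching it describes (DOIs, titles, author surnames) could be sketched as follows. The record fields, threshold, and helper functions here are assumptions for illustration, not the PreprintResolver's actual implementation:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalized strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def is_match(preprint: dict, candidate: dict,
             title_threshold: float = 0.9) -> bool:
    """Treat a database record as the published version of a preprint if
    the DOIs agree exactly, or if the titles fuzzily agree and at least
    one author surname is shared."""
    if preprint.get("doi") and preprint["doi"] == candidate.get("doi"):
        return True
    if similarity(preprint["title"], candidate["title"]) < title_threshold:
        return False
    pre_surnames = {a.split()[-1].lower() for a in preprint["authors"]}
    cand_surnames = {a.split()[-1].lower() for a in candidate["authors"]}
    return bool(pre_surnames & cand_surnames)

preprint = {"title": "A Study of Things", "doi": None,
            "authors": ["Ada Lovelace", "Alan Turing"]}
candidate = {"title": "A Study of Things.", "doi": "10.1000/xyz",
             "authors": ["A. Lovelace", "A. Turing"]}
print(is_match(preprint, candidate))  # True
```

Matching on normalized titles plus surnames tolerates the small formatting differences (punctuation, initials) that typically separate a preprint record from its published version.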
“The conscious use of research software is becoming increasingly important. The Max Planck Digital Library supports scientists in this effort. To this end, the MPDL has written an application for software management plans (SMP). It can be used to organize projects involving research software in the open-source application RDMO. This application has now been revised and handed over to the RDMO community.
At the same time, the MPDL team has written an additional application for assessing one's own software against the FAIR principles for research software (FAIR4RS). This complements the work with an SMP and enables scientists to check the FAIRness of their own code. In addition to quality management, this can, for example, also be used for third-party funding applications.”
Abstract: A key responsibility for many library publishers is to collaborate with authors to determine the best mechanisms for sharing and publishing research. Librarians are often asked to assist with a wide range of research outputs and publication types, including eBooks, digital humanities (DH) projects, scholarly journals, archival and thematic collections, and community projects. These projects can exist on a variety of platforms, both for-profit and academy-owned. Additionally, over the past decade, more and more academy-owned platforms have been created to support library publishing programs. Library publishers who wish to emphasize open access and open-source publishing can feel overwhelmed by the proliferation of available academy-owned or -affiliated publishing platforms. For many of these platforms, documentation exists but can be difficult to locate and interpret. While experienced users can usually find and evaluate the available resources for a particular platform, this kind of documentation is often less useful to authors and librarians who are just starting a new publishing project and want to determine if a given platform will work for them. Because of the challenges involved in identifying and evaluating the various platforms, we created this comparative crosswalk to help library publishers (and potentially authors) determine which platforms are right for their services and authors’ needs.
“At the core of education, engineering, and science lies the quest to better understand and improve the world. This document aims to explain the essential role of open-source hardware for this mission and why it should be considered an essential pillar of the ongoing open science programmes in Dutch Universities.
Open-source hardware is hardware whose design is made publicly available so that anyone can study, modify, distribute, make, and sell the design or other hardware and products based on that design. Ideally, the design of open source hardware is available in the preferred format for making modifications and uses widely available components and materials, standard processes, open infrastructure, unrestricted content, and open-source design tools to maximize the ability of other individuals to make and use hardware. Open-source hardware gives people the freedom to control their technology while sharing knowledge and encouraging commerce through the open exchange of (compatible) designs.
Open Science practices are becoming the norm in academia, and are rightly encouraged by funders and policymakers of higher education. Open-source hardware is an essential pillar of Open Science. Sharing hardware designs openly enables more people and teams to access the hardware and, by encouraging replication, makes science more reproducible. It is also an area of contention, however, because of exclusive knowledge-transfer practices and (not always justified) confidentiality clauses in research partnerships and default contracts.
Beyond academia, open hardware has the potential to radically transform science, education, and society by facilitating collaborative innovation and democratizing access to technology. It can massively accelerate the transition of an invention into a useful product, and simultaneously reduce costs and promote sustainable practices. By promoting open-source hardware initiatives, the Netherlands can solidify its position as a leader in Open Science and contribute to the global effort of achieving the sustainable development goals.”
“Are you interested in reproducible code and Open Science? Then we have the perfect opportunity for you!
As part of a pilot project between TU Delft and CODECHECK, we are organising a codechecking hackathon on 18th September 2023! During this hackathon, you will learn the concept behind codechecking, and practise related skills to check whether available code and data can reproduce the results in a paper, preprint or project. More information about the codechecking process can be found here.
Would you like to participate as a codechecker, and help promote reproducible code and Open Science? Register via this page, and save the date! The hackathon will take place over two sessions, in the morning and afternoon. Details of the programme will be released in early September.
PhD candidates at TU Delft are eligible for 0.5 Graduate School credits, provided they attend the entire session (morning and afternoon), and write a short reflection (300–350 words) on the skills they learned during the codechecking session, to be uploaded on their DMA profiles. To confirm their eligibility for GS credits, PhD candidates must seek approval from their supervisors and their Faculty Graduate Schools in advance of the session. If confirmation of attendance is required from the organisers, please let us know beforehand….”
Abstract: If a scientific paper is computationally reproducible, the analyses it reports can be repeated independently by others. At present, most papers are not reproducible. However, the tools to enable computational reproducibility are now widely available, using free and open source software. We conducted a pilot study in which we offered ‘reproducibility as a service’ within a UK psychology department for a period of 6 months. Our rationale was that most researchers lack either the time or expertise to make their own work reproducible, but might be willing to allow this to be done by an independent team. Ten papers were converted into reproducible format using R Markdown, such that all analyses were conducted by a single script that could download raw data from online platforms as required, generate figures, and produce a pdf of the final manuscript. For some studies this involved reproducing analyses originally conducted using commercial software. The project was an overall success, with strong support from the contributing authors who saw clear benefit from this work, including greater transparency and openness, and ease of use for the reader. Here we describe our framework for reproducibility, summarise the specific lessons learned during the project, and discuss the future of computational reproducibility. Our view is that computationally reproducible manuscripts embody many of the core principles of open science, and should become the default format for scientific communication.
Abstract: Research software plays a crucial role in advancing scientific knowledge, but ensuring its sustainability, maintainability, and long-term viability is an ongoing challenge. To address these concerns, the Sustainable Research Software Institute (SRSI) Model presents a comprehensive framework designed to promote sustainable practices in the research software community. This white paper provides an in-depth overview of the SRSI Model, outlining its objectives, services, funding mechanisms, collaborations, and the significant potential impact it could have on the research software community. It explores the wide range of services offered, diverse funding sources, extensive collaboration opportunities, and the transformative influence of the SRSI Model on the research software landscape.
“We are a network of collaborators trying to keep track of and curate interesting open-source projects related to the neurosciences. If you have a project that you’d like to see listed here, or if you know of a project that should be listed, drop us a line via e-mail or Twitter.”
Abstract: Data-driven computational analysis is becoming increasingly important in biomedical research, as the amount of data being generated continues to grow. However, the lack of practices of sharing research outputs, such as data, source code and methods, affects transparency and reproducibility of studies, which are critical to the advancement of science. Many published studies are not reproducible due to insufficient documentation, code, and data being shared. We conducted a comprehensive analysis of 453 manuscripts published between 2016 and 2021 and found that 50.1% of them fail to share the analytical code. Even among those that did disclose their code, a vast majority failed to offer additional research outputs, such as data. Furthermore, only one in ten papers organized their code in a structured and reproducible manner. We discovered a significant association between the presence of code availability statements and increased code availability (p = 2.71×10⁻⁹). Additionally, a greater proportion of studies conducting secondary analyses were inclined to share their code compared to those conducting primary analyses (p = 1.15×10⁻⁷). In light of our findings, we propose raising awareness of code sharing practices and taking immediate steps to enhance code availability to improve reproducibility in biomedical research. By increasing transparency and reproducibility, we can promote scientific rigor, encourage collaboration, and accelerate scientific discoveries. We must prioritize open science practices, including sharing code, data, and other research products, to ensure that biomedical research can be replicated and built upon by others in the scientific community.
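The abstract reports association tests without naming the procedure; a standard way to test this kind of 2×2 association (code availability statement present vs. code actually shared) is Fisher's exact test. The sketch below uses hypothetical counts, not the paper's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, col1)

    def p_table(x):
        # Probability of the table whose top-left cell is x, margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(row1, col1)       # largest feasible top-left cell
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts (NOT the paper's data): of 100 papers with a code
# availability statement, 80 shared code; of 100 without one, 30 did.
p = fisher_exact_two_sided(80, 20, 30, 70)
print(p < 0.05)  # True: statement presence and code sharing are associated
```

Exact tests like this avoid the large-sample approximations of a chi-square test, which matters when some cells of the table are small.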