Good Practices Primer – Code and Software (community enhanced).docx

“As organizations develop open science policies pertaining to code and software, they can maximize their open source investments by considering the following issues:

Timing. Does the funder or institutional policy require that code or software be made openly available immediately upon the posting of research findings (e.g., publication of an article, deposit of a dataset), or with some embargo (noting that open components remain open throughout)? Will institutions and researchers develop policies for community development of code throughout the entire lifecycle?
Financial Support. Will the relevant policy maker provide funding to defray the costs of preparing and/or depositing the code or software, as well as the ongoing support to the community that receives or supports the code? If so, is there a cap on the amount? Must the researcher explicitly account for these expenses at the time of proposal development or project design?
Viability, Sustainability, Future Proofing and Maintenance. Is there an existing community of developers or users that could be engaged or leveraged? If necessary, what is the viability of forming a new community with the skill, interest, capacity and freedom to develop and maintain the code? What are the expectations for the duration and extent to which code should be kept up to date? Is there funding to support community development, ongoing maintenance of the software, or dependencies of the software? Is there a plan for the sustainability of the community of developers and users?
Proprietary Software. To the extent that some or all of the code base upon which research relies cannot be put under an open source license, what steps can be taken to reduce restrictions on its reuse?
Licensing. What type of licensing requirements will the policy include to facilitate reuse? What are the goals of the researcher, university, funder, and society, and what licenses support these goals? What resources are available to the researcher? How can institutions support the researchers? What support is available for researchers trying to ensure compliance with the licenses of underlying software dependencies?
Metadata. What documentation and descriptive details are needed to understand and execute the code or run the software program? How will the computational environment in which software or code was originally executed be described and archived? Is the documentation and accompanying material prepared in a manner such that any reasonably adept programmer and systems engineer could easily set up, compile and run it?
Preservation. What constitutes an appropriate deposit location for the code or software? Is there a repository that is appropriate for the subject matter in question, and/or has emerged within a specific research community as the default resource in that field? Is the repository secure, stable, open and discoverable for all to access?
Attribution. How will the creators of the software be credited for their work, and how will the code be referred to using identifiers? How will the provenance of non-code contributions, such as design or funding, be recorded? What mechanisms exist for persistent citation? How do identifiers and provenance records interoperate with other systems?
Further contributions. How will the project build in processes or allocate funds to give back to the open source tools it uses, in order to make the ecosystem as a whole more sustainable?
Integration. How will an open source programs office (OSPO) integrate with the university community? How does the OSPO support the creation of software inventories, metrics, assessment, etc.? How does the OSPO work with research administration, including issues such as the ethical use of software? …”
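
The Metadata questions above ask, among other things, how the computational environment in which code was originally executed can be described and archived. Purely as an illustration (not part of the primer), the minimal Python sketch below records the interpreter, operating system, and installed package versions to a JSON file; the file name and field names are assumptions chosen for this example, and projects in other languages would rely on analogous tooling such as lockfiles or container definitions.

# Hypothetical sketch: snapshot a Python project's computational environment
# as a JSON record. The output file name and field names are illustrative only.
import json
import platform
import sys
from importlib import metadata

def environment_snapshot() -> dict:
    """Collect interpreter, operating system, and installed package versions."""
    return {
        "python_version": sys.version,
        "platform": platform.platform(),
        "packages": {dist.metadata["Name"]: dist.version for dist in metadata.distributions()},
    }

if __name__ == "__main__":
    with open("environment-snapshot.json", "w", encoding="utf-8") as fh:
        json.dump(environment_snapshot(), fh, indent=2, sort_keys=True)

Archiving a snapshot like this alongside the code, or a container or lockfile that can reproduce it, gives the "reasonably adept programmer" mentioned above a concrete starting point.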

Data Repository Attributes WG Case Statement | RDA

“A complete and current description of a research data repository is important to help a user discover a repository; to understand the repository’s purpose, policies, functionality, and other characteristics; and to evaluate the fitness of the repository, and of the data it stewards, for the user’s purposes. Many repositories do not provide adequate descriptions on their websites, in structured metadata, or in documentation, which makes this challenging. Descriptive attributes may also be expressed and exposed in different ways, making it difficult to compare repositories and to enable interoperability among repositories and other infrastructures such as registries. Incomplete and proprietary repository descriptions therefore present challenges for stakeholders such as researchers, repository managers, repository developers, publishers, funders, and registries who want to discover and compare data repositories. For example:

 

As a researcher, I would like to be able to generate a list of repositories to determine where I can deposit my data based on a query of descriptive attributes that are important to me.
As a repository manager, I would like to know what attributes are important for me to provide to users in order to advertise my repository, its services, and its data collections.
As a repository developer, I would like to know how to express and serialize these attributes as structured metadata for reuse by users and user agents in a manner that is integrated into the functionality of my repository software platform.
As a publisher, I would like to inform journal editors and authors of what repositories are appropriate to deposit their datasets that are associated with manuscripts that are being submitted.
As a funder, I would like to be able to recommend and monitor data repositories to be utilized in conjunction with public access plans and data management plans for the research that I am sponsoring.
As a registry, I would like to be able to easily harvest and index attributes of data repositories to help users find the best repository for their purpose.

 

While this is not an exhaustive list of stakeholders and potential use cases, identifying and harmonizing a list of descriptive attributes of data repositories, and highlighting current approaches being taken by repositories, would help the community address these important challenges and move towards developing a standard for the description and interoperability of information about data repositories. The statements of interest below demonstrate that there is significant interest in this work….

Many sets of attributes have been identified by different initiatives with differing scopes and motivations.[2] These attributes have included information about data repositories such as terms of deposit, subject classifications, geographic coverage, API and protocol support, funding models, governance, preservation services and policies, openness of the underlying infrastructure, adherence to relevant standards and certifications, and more….”
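
The repository-developer use case above asks how descriptive attributes such as these might be expressed and serialized as structured metadata. Purely as an illustration of the idea, and not a serialization endorsed by the working group, the Python sketch below emits a schema.org-style JSON-LD record for a repository; the repository name, URL, and attribute values are all invented for the example.

# Hypothetical sketch: express a repository's descriptive attributes as a
# schema.org-flavoured JSON-LD record. All values below are invented examples.
import json

repository_description = {
    "@context": "https://schema.org",
    "@type": "DataCatalog",
    "name": "Example Environmental Data Repository",   # illustrative only
    "url": "https://repository.example.org",            # illustrative only
    "description": "Curated environmental observation datasets.",
    "keywords": ["environmental science", "observations"],
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "provider": {"@type": "Organization", "name": "Example University"},
}

# Serialized this way, the record could be embedded in the repository's pages
# or harvested by a registry and compared with records from other repositories.
print(json.dumps(repository_description, indent=2))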

Data-sharing practices in publications funded by the Canadian Institutes of Health Research: a descriptive analysis | CMAJ Open

Abstract:  Background: As Canada increases requirements for research data management and sharing, there is value in identifying how research data are shared and what has been done to make them findable and reusable. This study aimed to understand Canada’s data-sharing landscape by reviewing how data funded by the Canadian Institutes of Health Research (CIHR) are shared and comparing researchers’ data-sharing practices to best practices for research data management and sharing.

Methods: We performed a descriptive analysis of CIHR-funded publications from PubMed and PubMed Central published between 1946 and Dec. 31, 2019, that indicated that the research data underlying the results of the publication were shared. We analyzed each publication to identify how and where data were shared, who shared data and what documentation was included to support data reuse.

Results: Of 4144 CIHR-funded publications identified, 1876 (45.2%) included accessible data, 935 (22.6%) stated that data were available via request or application, and 300 (7.2%) stated that data sharing was not applicable or possible; we found no evidence of data sharing in 1558 publications (37.6%). Frequent data-sharing methods included via a repository (1549 [37.4%]), within supplementary files (1048 [25.3%]) and via request or application (935 [22.6%]). Overall, 554 publications (13.4%) included documentation that would facilitate data reuse.

Interpretation: Publications funded by the CIHR largely lack the metadata, access instructions and documentation to facilitate data discovery and reuse. Without measures to address these concerns and enhanced support for researchers seeking to implement best practices for research data management and sharing, much CIHR-funded research data will remain hidden, inaccessible and unusable.

Gathering input for an online dashboard highlighting good practices in research assessment | DORA

“As institutions experiment with and refine academic assessment policies and practices, there is a need for knowledge sharing and tools to support culture change. On September 9, 2021, we held a community call to gather early-stage input for a new resource: an interactive online dashboard to identify, track, and display good practices for academic career assessment. The dashboard is one of the key outputs of Tools to Advance Research Assessment (TARA), a DORA project sponsored by Arcadia (a charitable fund of Lisbet Rausing and Peter Baldwin) to facilitate the development of new policies and practices for academic career assessment….

It comes as no surprise that academic assessment reform is complex. Institutions are at different stages of readiness for reform and have implemented new practices in a variety of academic disciplines, career stages, and evaluation processes. The dashboard aims to capture this progress and provide counter-mapping to common proxy measures of success (e.g., Journal Impact Factor (JIF), H-index, and university rankings). Currently, we picture that the general uses of the dashboard will include:

Tracking policies: Collecting academic institutional standards for hiring, promotion, and tenure.
Capturing new and innovative policies: Enabling the sharing of new assessment policies and practices.
Visualizing content: Displaying source material to see or identify patterns or trends in assessment reform.

 

Because the dashboard will highlight positive trends and examples in academic career assessment, it is important to define what constitutes good practice. One idea comes from a 2020 working paper from the Research on Research Institute (RoRI), in which the authors define responsible research assessment as ‘approaches to assessment which incentivize, reflect and reward the plural characteristics of high-quality research, in support of diverse and inclusive research cultures’….”

The importance of adherence to international standards for depositing open data in public repositories | BMC Research Notes | Full Text

Abstract:  There has been important global interest in Open Science, which includes open data and methods in addition to open access publications. It has been proposed that public availability of raw data increases the value of scientific findings and the possibility of confirming them, in addition to the potential of reducing research waste. Availability of raw data in open repositories facilitates the development of meta-analyses and the cumulative evaluation of evidence on specific topics. In this commentary, we discuss key elements of data sharing in open repositories and we invite researchers around the world to deposit their data in them.

 

Sharing published short academic works in institutional repositories after six months | LIBER Quarterly: The Journal of the Association of European Research Libraries

Abstract:  The ambition of the Netherlands, laid down in the National Plan Open Science, is to achieve 100% open access for academic publications. The ambition was to be achieved by 2020. However, it is expected that for the year 2020 between 70% and 75% of the articles will be open access. Until recently, the focus of the Netherlands has been on the gold route – open access via journals and publishers’ platforms. This is likely to be costly, and it is also impossible to cover all articles and other publication types this way. Since 2015, the Dutch Copyright Act has offered an alternative with the implementation of Article 25fa (also known as the ‘Taverne Amendment’), facilitating the green route, i.e. open access via (trusted) repositories. This amendment allows researchers to share short scientific works (e.g. articles and book chapters in edited collections), regardless of any restrictive guidelines from publishers. From February 2019 until August 2019 all Dutch universities participated in the pilot ‘You Share, we Take Care!’ to test how this copyright amendment could be interpreted and implemented by institutions as a policy instrument to enhance green open access and “self-archiving”. In 2020 steps were taken to scale up further implementation of the amendment. This article describes the outcomes of this pilot and shares best practices on implementation and awareness activities in the period following the pilot until early 2021, in which libraries have played an instrumental role in building trust and working on effective implementations on an institutional level. It concludes with some possible next steps for alignment, for example on a European level.

 

Incorporating open science into evidence-based practice: The TRUST Initiative

Abstract:  To make evidence-based policy, the research ecosystem must produce trustworthy evidence. In the US, federal evidence clearinghouses evaluate research using published standards designed to identify “evidence-based” interventions. Because their evaluations can affect billions of dollars in funding, we examined 10 federal evidence clearinghouses. We found their standards focus on study design features such as randomization, but they overlook issues related to transparency, openness, and reproducibility that also affect whether research tends to produce true results. We identified intervention reports used by these clearinghouses, and we developed a method to assess journals that published those evaluations. Based on the Transparency and Openness Promotion (TOP) Guidelines, we created new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices). Of the 340 journals that published influential research, we found that some endorse the TOP Guidelines, but standards for transparency and openness are not applied systematically and consistently. Examining the quality of our tools, we also found varying levels of interrater reliability across different TOP standards. Using our results, we delivered a normative feedback intervention. We informed editors how their journals compare with TOP and with their peer journals. We also used the Theoretical Domains Framework to develop a survey concerning obstacles and facilitators to implementing TOP; 88 editors responded and reported that they are capable of implementing TOP but lack motivation to do so. The results of this program of research highlight ways to support and to assess transparency and openness throughout the evidence ecosystem.

 

Open Research Infrastructure Programs at LYRASIS

“Academic libraries, and institutional repositories in particular, play a key role in the ongoing quest for ways to gather metrics and connect the dots between researchers and research contributions in order to measure “institutional impact,” while also streamlining workflows to reduce administrative burden. Identifying accurate metrics and measurements for illustrating “impact” is a goal that many academic research institutions share, but these goals can only be met to the extent that all organizations across the research and scholarly communication landscape are using best practices and shared standards in research infrastructure. For example, persistent identifiers (PIDs) such as ORCID iDs (Open Researcher and Contributor Identifier) and DOIs (Digital Object Identifiers) have emerged as crucial best practices for establishing connections between researchers and their contributions while also serving as a mechanism for interoperability in sharing data across systems. The more institutions using persistent identifiers (PIDs) in their workflows, the more connections can be made between entities, making research objects more FAIR (findable, accessible, interoperable, and reusable). Also, when measuring institutional repository usage, clean, comparable, standards-based statistics are needed for accurate internal assessment, as well as for benchmarking with peer institutions….”
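
This passage presents persistent identifiers such as ORCID iDs and DOIs as the connective tissue between researchers, their contributions, and the systems that exchange data about them. As a small, purely illustrative sketch that is not drawn from the LYRASIS programs themselves, the Python function below normalizes an ORCID iD and verifies its check digit using the ISO 7064 mod 11-2 scheme that ORCID documents for its identifiers; the sample iD is ORCID’s published example identifier.

# Hypothetical sketch: validate an ORCID iD before storing it in repository
# metadata, using the ISO 7064 mod 11-2 check-digit scheme documented by ORCID.
def orcid_is_valid(orcid: str) -> bool:
    """Return True if the 16-character ORCID iD has a correct check digit."""
    digits = orcid.replace("https://orcid.org/", "").replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for char in digits[:-1]:
        if not char.isdigit():
            return False
        total = (total + int(char)) * 2
    expected = (12 - total % 11) % 11
    check = "X" if expected == 10 else str(expected)
    return digits[-1].upper() == check

print(orcid_is_valid("0000-0002-1825-0097"))  # ORCID's example iD; prints True

Validating identifiers at the point of entry is one small way an institution can keep the connections between researchers and research objects clean enough to support the benchmarking and FAIR-related goals described above.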

Digital GLAM Spaces Conference

“Web accessibility and user experience are important for centering those who want to learn, research, and teach with digital and digitized cultural heritage. In 2018, the University of Oregon was awarded an Andrew W. Mellon Foundation grant to experiment and build collaboration capacity around how GLAM assets could be used in innovative ways for research.

Digital GLAM Spaces is a conference about building community around web accessibility and user experience. It’s a place for GLAM practitioners to share definitions of, and best practices for, UX and accessibility; communicate digital strategies for incorporating user research into digital projects; and talk about the people, skillsets, and support needed to do better and make web accessibility and user experience part of our work rather than bolted on….”

Best Practices for Software Registries and Repositories | FORCE11

“Software is a fundamental element of the scientific process, and cataloguing scientific software is helpful to enable software discoverability. During the years 2019-2020, the Task Force on Best Practices for Software Registries of the FORCE11 Software Citation Implementation Working Group worked to create Nine Best Practices for Scientific Software Registries and Repositories. In this post, we explain why scientific software registries and repositories are important, why we wanted to create a list of best practices for such registries and repositories, the process we followed, what the best practices include, and what the next steps for this community are….”

Open Environmental Data Project

“We partner and collaborate with institutes, nonprofits, individuals and universities to help articulate best practices for new data commons models in the environmental context….

We provide insight that identifies, evaluates and summarizes scientific, legal, economic and cultural incentive and strategy levers for advancing environmental generative actions….”

Preprints in times of COVID19: the time is ripe for agreeing on terminology and good practices | BMC Medical Ethics | Full Text

Abstract:  Over recent years, the research community has been increasingly using preprint servers to share manuscripts that are not yet peer-reviewed. Even though this practice enables quick dissemination of research findings, it raises several challenges in publication ethics and integrity. In particular, preprints have become an important source of information for stakeholders interested in COVID19 research developments, including traditional media, social media, and policy makers. Despite caveats about their nature, many users can still confuse preprints with peer-reviewed manuscripts. If unconfirmed but already widely shared first-draft results later prove wrong or misinterpreted, it can be very difficult to “unlearn” what we thought was true. Complexity further increases if unconfirmed findings have been used to inform guidelines. To help achieve a balance between early access to research findings and its negative consequences, we formulated five recommendations: (a) consensus should be sought on a term clearer than ‘preprint’, such as ‘unrefereed manuscript’, ‘manuscript awaiting peer review’ or ‘non-reviewed manuscript’; (b) caveats about unrefereed manuscripts should be prominent on their first page, and each page should include a red watermark stating ‘Caution—Not Peer Reviewed’; (c) preprint authors should certify that their manuscript will be submitted to a peer-reviewed journal, and should regularly update the manuscript status; (d) high-level consultations should be convened to formulate clear principles and policies for the publication and dissemination of non-peer-reviewed research results; (e) in the longer term, an international initiative to certify servers that comply with good practices could be envisaged.

 

Guest Post – APC Waiver Policies; A Job Half-done? – The Scholarly Kitchen

“Most, if not all, open access publishers offer to waive publication charges (of whatever flavor) for researchers in lower and middle-income countries (LMICs) without access to funds to pay them. After all, no-one wants to see open access actually increasing barriers and reducing diversity and inclusion, in direct opposition to one of its fundamental objectives. However, as an echo of the “build it and they will come” mentality, waiver policies may end up failing to achieve their intended outcome if they are poorly constructed and communicated to their intended beneficiaries. A recent study by INASP revealed that fully 60% of respondents to an AuthorAID survey had paid Article Processing Charges (APCs) from their own pockets, despite the widespread availability of waivers. This could be due to internal organizational bureaucracy, but is more likely due to a lack of awareness and understanding of APC waivers and how to claim them.

A White Paper published jointly by STM and Elsevier’s International Center for the Study of Research in September 2020 on how to achieve an equitable transition to open access included a specific recommendation to make publisher policies on APC waivers more consistent and more transparent. The authors commented, “Even though this business model may turn out to be an interim step on the road to universal open access, it is likely to persist for several years to come and may unwittingly end up preventing much important research from reaching its intended audience.”…”