NSF Public Access Plan 2.0

“NSF’s updated public access plan integrates new agency guidance issued by the White House Office of Science and Technology Policy in August of 2022. This guidance, which includes zero-embargo public access for research publications and their supporting data, was developed with leadership from the National Science and Technology Council Subcommittee on Open Science, in which NSF has always been actively engaged. NSF developed its plan while considering issues of importance to our many partners across academia and industry, and in alignment with all other U.S. agencies which fund scientific research. This plan is a first step, and we look forward to its further evolution as we address changes in technology and in the needs of members across our communities. Promoting immediate public access to federally funded research results and data is a critically important aspect of achieving the NSF mission of promoting the progress of science, securing the national defense, and advancing the national health, prosperity, and welfare. Indeed, scientific openness, academic freedom, scientific integrity, equity in science, and fairness are American values that rest on the pillar of public access to federally funded research and data….

The sections of this plan describe how NSF will ensure:

• That all peer-reviewed scholarly publications resulting from NSF-funded research will be made freely available and publicly accessible by default in the NSF Public Access Repository, or NSF-PAR, without embargo or delay.
• That scientific data associated with peer-reviewed publications resulting from NSF awards will be made available in appropriate scientific disciplinary repositories.
• That exceptions to the data-sharing requirements will be made based on legal, privacy, ethical, intellectual property and national security considerations.
• That persistent identifiers, or PIDs, and other critical metadata associated with peer-reviewed publications and data resulting from NSF-funded research will be collected and made publicly available in NSF-PAR.
• That the agency coordinates with other federal funders of scientific research in implementing new public access requirements….

Primary areas of interest that will shape NSF policy as implementation approaches are formulated include:

• Minimizing the equity impact of over-reliance on article processing charges, or APCs, also known as the “Gold Open Access” publication model, including inequity for fields, organizations or researchers lacking access to funding; consequences of possible citation bias; the impact on ability to fund research and training activities; and potential negative impacts with respect to public trust.
• Promoting use of author’s accepted manuscripts, or AAMs, as a no-cost option to comply with public access requirements.
• Minimizing the consequences of changing publishing ecosystems, including impacts for organizations least able to weather dramatic changes to subscription policies, which can increase precarity for those affiliated with these organizations.
• Ameliorating the possible impacts of large APCs on small awards.
• Involving affected communities regarding issues associated with data collection, data governance, verifying permitted data access, and data destruction, particularly for groups that have previously suffered from the appropriation or misuse of data.
• Ensuring accessibility of data and results, including access to data cyberinfrastructures for under-resourced and underserved institutions/researchers, as well as considerations for persons with visual disabilities.
• Maximizing the reach and impact of U.S. research while seeking to minimize both access barriers in under-resourced and underserved communities and challenges related to the language or interpretability of data.
• Identifying the full range of costs (tangible and intangible) associated with data provision and addressing any inequities introduced by these costs.
• Developing processes for addressing inequities identified in sharing and accessing research findings….”

Good data practices: Removing barriers to data reuse with CC0 licensing | Dryad news

“Why is CC0 a great choice for open data? Learn to love this frequently misunderstood license waiver.

Authors who submit data to Dryad are asked to consent to the publication of their data under the Creative Commons Public Domain Dedication, more commonly known as CC0. In doing so, authors are being asked to confirm that any materials that have been previously published by another author or working group were published under conditions compatible with CC0, and that they agree to publish any previously unpublished materials under this waiver.

Given the continually evolving research landscape, our curation team frequently receives questions about what CC0 means in relation to their data. Let’s review the advantages of CC0 as well as some common concerns and misconceptions that we encounter to guide researchers in data sharing and to explain why we only publish data under CC0….”

Providing a framework for the reuse of research data based on the development dynamic framework of United Nations Development Program (UNDP) | Emerald Insight

Abstract:  Purpose

The present research aims to present a framework for the reuse of research data in Iran by applying the development framework of the United Nations Development Program (UNDP).

Design/methodology/approach

The research has a mixed methods design. In the qualitative section, the authors first carried out a meta-synthesis and then conducted interviews. In the quantitative section, the reliability of the recommended framework was measured through a survey. Finally, the framework for data reuse was presented in five dimensions: human, organizational, policies and laws, technical, and implementation and analysis.

Findings

Through structural equation modeling, the fit of the framework was confirmed, and it was found that the policies, human and organizational dimensions played more prominent roles in explaining the framework than the other two dimensions.

Originality/value

Research studies in the area of data reuse have been conducted either quantitatively or qualitatively, and most have used interviews or questionnaires to collect data. However, given the nature of this area and its relatively new literature in Iran, mixed methods are needed to arrive at a proper understanding of this concept through both quantitative and qualitative approaches.

Predicting psychologists’ approach to academic reciprocity and data sharing with a theory of collective action | Emerald Insight

“This study found that data sharing among psychologists is driven primarily by their perceptions of community benefits, academic reciprocity and the norms of data sharing. This study also found that academic reciprocity is significantly influenced by psychologists’ perceptions of community benefits, academic reputation and the norms of data sharing. Both academic reputation and academic reciprocity are affected by psychologists’ prior experiences with data reuse. Additionally, psychologists’ perceptions of community benefits and the norms of data sharing are significantly affected by the perception of their academic reputation.”

A Pilot Study to Locate Historic Scientific Data in a University Archive | Issues in Science and Technology Librarianship

Abstract:  Historic data in analog (or print) format is a valuable resource utilized by scientists in many fields. This type of data may be found in various locations on university campuses, including offices, labs, storage facilities, and archives. This study investigates whether biological data held in one university's institutional archives could be identified, described, and thus made potentially useful for contemporary life scientists. Scientific data was located; approximately half of it was deemed to be of some value to current researchers, and about 20% included enough information for the original study to be repeated. Locating individual data sets in the collections at the University Archives at the University of Minnesota proved challenging. This preliminary work points to possible ways to make raw data in university archives collections more discoverable and more likely to be reused. It raises questions that can help inform future work in this area.

New preprint explores tracing data reuse and citations – Scholarly Communications Lab | ScholCommLab

“In our digital era, scientists are certainly sharing and reusing open data. Yet it remains unclear how widespread data reuse and citation practices are within academic disciplines, and why scientists cite—or do not cite—data in their research work. 

In a recent preprint from the Meaningful Data Counts project, Kathleen Gregory (postdoctoral researcher at the University of Vienna and University of Ottawa) and fellow ScholCommLab members—Anton Boudreau Ninkov, Chantal Ripp, Emma Roblin, Isabella Peters, and Stefanie Haustein—surveyed nearly 2,500 academic authors to explore their practices, preferences, and motivations for reusing and citing data, and how these practices vary by discipline. 

In this interview, we ask Kathleen about how she got involved in the study, why some researchers cite and reuse data while others do not, and how her work informs data citation policies and standards in the scholarly community….”

Experimental Publishing Compendium | Community-Led Open Publication Infrastructures for Monographs (COPIM)

The Experimental Publishing Compendium is a guide and reference for scholars, publishers, developers, librarians, and designers who want to challenge, push and redefine the shape, form and rationale of scholarly books. The compendium brings together tools, practices, and books to promote the publication of experimental scholarly works.

Beta 1.0 (2023)

Version 1.0 has been curated by Janneke Adema, Julien McHardy, and Simon Bowie. Future versions will be overseen, curated, and maintained by an Editorial Board (members TBC).

Back-end coding by Simon Bowie, front-end coding by Joel Galvez, design by Joel Galvez & Martina Vanini.

Special thanks to Gary Hall, Rebekka Kiesewetter, Marcell Mars, Toby Steiner, and Samuel Moore, and to everyone who has provided feedback on our research or shared suggestions of examples to feature, including the participants of COPIM’s experimental publishing workshop, Nicolás Arata, Dominique Babini, Maria Fernanda Pampin, Sebastian Nordhoff, Abel Packer, Armanda Ramalho, and Agatha Morka.

Our appreciation also goes out to the Next Generation Library Publishing Project for sharing an early catalogue-in-progress version of SComCat with us, which formed one of the inspirations behind the Compendium.

The compendium grew out of the following two reports:

Adema, J., Bowie, S., Mars, M., and T. Steiner (2022) Books Contain Multitudes: Exploring Experimental Publishing (2022 update). Community-Led Open Publication Infrastructures for Monographs (COPIM). doi: 10.21428/785a6451.1792b84f and 10.5281/zenodo.6545475.

Adema, J., Moore, S., and T. Steiner (2021) Promoting and Nurturing Interactions with Open Access Books: Strategies for Publishers and Authors. Community-Led Open Publication Infrastructures for Monographs (COPIM). doi: 10.21428/785a6451.2d6f4263 and 10.5281/zenodo.5572413.

COPIM and the Experimental Publishing Compendium are supported by the Research England Development (RED) Fund and by Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin.

Experimenting with Copyright Licences | Community-led Open Publication Infrastructures for Monographs (COPIM)

Hall, G. (2023). Experimenting with Copyright Licences. Community-Led Open Publication Infrastructures for Monographs (COPIM). Retrieved from https://copim.pubpub.org/pub/combinatorial-books-documentation-copyright-licences-post6

As part of the documentation for the first book coming out of the Combinatorial Books Pilot Project, we are discussing our rationale for choosing a CC-BY licence for this project, as well as the limitations and potential of this licence regarding more collaborative scholarship.

This is the sixth blogpost in a series documenting the COPIM/OHP Pilot Project Combinatorial Books: Gathering Flowers. You can find the previous blogposts here, here, here, here, and here.

When it comes to publishing a book, many authors and presses show a surprising lack of concern about whether the copyright licence used is consistent with what’s actually being said in the content of the work. Now it’s not our intention to single anyone out for particular criticism: our reservation is about a system more than individuals. But perhaps we can start with a brief analysis of Michael Hardt and Antonio Negri’s 2017 book Assembly, just to explain what we mean and illustrate why the choice of licence matters far more than most people seem to think.

We are taking Hardt and Negri as our example because, as the authors of volumes such as Empire (2001), Multitude (2005) and Commonwealth (2009), they are among the most politically radical of theorists at work today. But we’re also focusing on them because, like us, they are interested in the generation of new forms of human and nonhuman collaboration. What’s so intriguing about Hardt and Negri in this context is that, in terms of their relationship to the decentralised, self-organising mobilisations they take inspiration from in Assembly – the Occupy movement, the Indignados movement in Spain, etcetera – these two autonomous Marxists can be seen to repeat much the same behaviour they criticise platform capitalist companies for engaging in with regard to the social relations of their users.

[…]

We need a plan D | Nature Methods

“Ensuring data are archived and open thus seems a no-brainer. Several funders and journals now require authors to make their data public, and a recent White House mandate that data from federally funded research must be made available immediately on publication is a welcome stimulus. Various data repositories exist to support these requirements, and journals and preprint servers also provide storage options. Consequently, publications now often include various accession numbers, stand-alone data citations and/or supplementary files.

But as the director of the National Library of Medicine, Patti Brennan, once noted, “data are like pictures of children: the people who created them think they’re beautiful, but they’re not always useful”. So, although the above trends are to be applauded, we should think carefully about that word ‘useful’ and ask what exactly we mean by ‘the data’, how and where they should be archived, and whether some data should be kept at all….

Researchers, institutions and funders should collaborate to develop an overarching strategy for data preservation — a plan D. There will doubtless be calls for a ‘PubMed Central for data’. But what we really need is a federated system of repositories with functionality tailored to the information that they archive. This will require domain experts to agree standards for different types of data from different fields: what should be archived and when, which format, where, and for how long. We can learn from the genomics, structural biology and astronomy communities, and funding agencies should cooperate to define subdisciplines and establish surveys of them to ensure comprehensive coverage of the data landscape, from astronomy to zoology….”

Open Sourcing Reuse | Community-led Open Publication Infrastructures for Monographs (COPIM)

Adema, J., Bowie, S., & Kiesewetter, R. (2023). Open Sourcing Reuse. Community-Led Open Publication Infrastructures for Monographs (COPIM). https://doi.org/10.21428/785a6451.6564c3be

As part of the documentation for the first book coming out of the Combinatorial Books Pilot Project, we are introducing and discussing the set of modular, open source writing, editing, annotating, and publishing software, tools, and platforms we have used. This is the fifth blogpost in a series documenting the COPIM/OHP Pilot Project Combinatorial Books: Gathering Flowers. You can find the previous blogposts here, here, here, and here.

In the context of the Combinatorial Books: Gathering Flowers Pilot Project, one of our aims has been to use, wherever possible, modular and open source writing, editing, annotating, and publishing software, tools, and platforms. We wanted to show how these can be used in the context of reusing and rewriting existing open access books or collections of books. Instead of creating our own custom solutions, we have tried to create (technical) workflows that consist of existing open source applications, to enable other authors and publishers to apply or adapt these to their own writing and publishing workflows.

At this stage of the Pilot Project we want to share some preliminary insights and reflections, in combination with a closer description of the tools and platforms we have used. We will do so in textual form and as part of an audio interview with Simon Bowie, who is working as an open source software developer on the COPIM project. Specifically, we want to share our rationale for using open source applications, and reflect upon how these tools can either be integrated into or require adaptations to classical editorial and publishing workflows, timelines, tasks, and relationalities between those involved in publishing a book (for example, those between tool and platform providers, publishers, developers, and editors).

Attitudes Toward Providing Open Access for Use of Biospecimens and Health Records: A Cross-Sectional Study from Jordan

Abstract:  Purpose: Biospecimen repositories and big data generated from clinical research are critically important in advancing patient-centered healthcare. However, ethical considerations arising from reusing clinical samples and health records for subsequent research pose a hurdle for big-data health research. This study aims to assess the public’s opinions in Jordan toward providing blanket consent for using biospecimens and health records in research.

Participants and Methods: A cross-sectional study utilizing a self-reported questionnaire was carried out in different cities in Jordan, targeting adult participants. Outcome variables included awareness of clinical research, participation in clinical research, and opinions toward providing open access to clinical samples and records for research purposes. Descriptive analysis was utilized for reporting the outcome as frequency (percentages) out of the total responses. Univariate and multivariate logistic regression were used to investigate the association between independent variables and the outcome of interest.

Results: A total of 1033 eligible participants completed the questionnaire. Although the majority (90%) were aware of clinical research, only 24% have ever participated in this type of research. About half (51%) agreed on providing blanket consent for the use of clinical samples, while a lower percentage (43%) agreed on providing open access to their health records. Privacy concerns and lack of trust in the researcher were cited as major barriers to providing blanket consent. Participation in clinical research and having health insurance were predictors for providing open access to clinical samples and records.

Conclusion: The lack of public trust in Jordan toward data privacy is evident from this study. Therefore, a governance framework is needed to raise and maintain the public’s trust in big-data research that warrants the future reuse of clinical samples and records. As such, the current study provides valuable insights that will inform the design of effective consent protocols required in data-intensive health research.

WorldFAIR project

“In the WorldFAIR project, CODATA (the Committee on Data of the International Science Council) and RDA (the Research Data Alliance) work with a set of 11 disciplinary and cross-disciplinary case studies to advance implementation of the FAIR principles and, in particular, to improve interoperability and reusability of digital research objects, including data. Particular attention is paid to the articulation of an interoperability framework for each case study and research domain.”

The RDA and Oracle for Research: Collaboration to accelerate data-driven discovery | RDA

“The Research Data Alliance (RDA) and Oracle for Research are delighted to announce a special collaboration for the global research data community.

This landmark agreement, signed in January 2023, will run for 12 months and support the mission of RDA to build the social and technical bridges to enable open sharing and re-use of research data. Its aim is to accelerate the development of community-driven solutions across all technologies, disciplines, and countries. Facilitating and advancing open research and open science are common goals of the two organisations.

Private sector engagement is a strategic priority for RDA, and Oracle for Research support will facilitate activities taking place within RDA working groups. These volunteer-driven groups focus on the development of data standards and strategies for data publication, while addressing access and sharing challenges. The results of their activities are then made available to the public….”

Advancing the Promise of Open Science: We Want to Hear from You! – Office of Science Policy

“Today, we are pleased to announce that the “NIH Plan to Enhance Public Access to the Results of NIH-Supported Research” (NIH’s Public Access Plan) is now available for public review and comment. We are issuing this Plan in response to the OSTP memo and also because it is consistent with NIH’s longstanding commitment to open science. This Plan builds upon the strong foundation of the NIH Public Access Policy which, since 2008, has made over 1.4 million articles describing NIH-supported research available to the public through PubMed Central. As you will see, the Plan builds on what we currently do, and we expect to maintain many current practices. But importantly, we ultimately plan to institute a zero-embargo period on publications so that research results are freely available to the public without delay.

It is important to keep in mind that this Plan is not a proposed policy, but a roadmap of steps NIH will take to enhance access to research products.  Any future updates to the NIH Public Access Policy will, in turn, be released as a draft for public comment. Also, to loop back to the DMS Policy—we expect that the DMS Policy will meet all expectations related to data sharing in the OSTP memo….”