Data and Software for Authors | AGU

“AGU requires that the underlying data needed to understand, evaluate, and build upon the reported research be available at the time of peer review and publication. Additionally, authors should make available software that has a significant impact on the research. This entails:

Depositing the data and software in a community accepted, trusted repository, as appropriate, and preferably with a DOI
Including an Availability Statement as a separate paragraph in the Open Research section explaining to the reader where and how to access the data and software
And including citation(s) to the deposited data and software, in the Reference Section….”
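In practice, an Availability Statement and data citation of the kind AGU describes might look like the sketch below; the repository, dataset, author, and DOI placeholder are hypothetical illustrations, not AGU-specified wording.

```
Open Research (Availability Statement):
  The gravity survey data and processing scripts supporting this study
  are openly available in Zenodo at https://doi.org/10.5281/zenodo.XXXXXXX
  (placeholder DOI) under a CC-BY-4.0 license.

Reference Section entry:
  Researcher, A. (2021). Example Basin gravity survey [Data set].
  Zenodo. https://doi.org/10.5281/zenodo.XXXXXXX
```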

DOI (Digital Object Identifier) for Systematic Reviewers and other Researchers: Benefits, Confusions, and Need-to-Knows | by Farhad | Jan, 2022 | Medium

“DOI enhances the accessibility, discoverability, trustability, and interoperability of digital objects and serves the openness and visibility of professionally published content. While I am not a DOI expert, I know about it because I use it a lot in my profession. I believe DOI will play a significant role in the automation of literature reviews, more than it does now.

It is the responsibility of librarians, information specialists and other information professionals to raise awareness about the benefits of DOI. …”
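As a concrete illustration of the interoperability the author points to: a DOI resolves not only to a landing page but also to machine-readable metadata via doi.org content negotiation, which Crossref and DataCite both support. A minimal Python sketch, assuming only the requests library; the example DOI is one commonly used in content-negotiation documentation, and any registered DOI can be substituted:

```python
import requests

def fetch_doi_metadata(doi: str) -> dict:
    """Resolve a DOI to CSL-JSON metadata via doi.org content negotiation."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Substitute any Crossref- or DataCite-registered DOI.
meta = fetch_doi_metadata("10.1126/science.169.3946.635")
print(meta.get("title"), meta.get("container-title"))
```

This is the kind of lookup that tools automating literature reviews can build on: one identifier, one request, structured metadata back.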

Citation needed? Wikipedia bibliometrics during the first wave of the COVID-19 pandemic | GigaScience | Oxford Academic

Abstract: Background

With the COVID-19 pandemic’s outbreak, millions flocked to Wikipedia for updated information. Amid growing concerns regarding an “infodemic,” ensuring the quality of information is a crucial vector of public health. Investigating whether and how Wikipedia remained up to date and in line with science is key to formulating strategies to counter misinformation. Using citation analyses, we asked which sources informed Wikipedia’s COVID-19–related articles before and during the pandemic’s first wave (January–May 2020).

Results

We found that coronavirus-related articles referenced trusted media outlets and high-quality academic sources. Regarding academic sources, Wikipedia was found to be highly selective in terms of what science was cited. Moreover, despite a surge in COVID-19 preprints, Wikipedia had a clear preference for open-access studies published in respected journals and made little use of preprints. Building a timeline of English-language COVID-19 articles from 2001–2020 revealed a nuanced trade-off between quality and timeliness. It further showed how pre-existing articles on key topics related to the virus created a framework for integrating new knowledge. Supported by a rigid sourcing policy, this “scientific infrastructure” facilitated contextualization and regulated the influx of new information. Last, we constructed a network of DOI-Wikipedia articles, which showed the landscape of pandemic-related knowledge on Wikipedia and how academic citations create a web of shared knowledge supporting topics like COVID-19 drug development.
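The DOI-article network the authors describe can be pictured as a bipartite graph: Wikipedia articles on one side, cited DOIs on the other, with articles linked through the sources they share. A toy sketch using networkx; the article names and DOIs below are invented for illustration and are not the study's data:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy data: which DOIs each Wikipedia article cites (names and DOIs invented).
citations = {
    "COVID-19 pandemic": ["10.1000/aaa", "10.1000/bbb"],
    "COVID-19 drug development": ["10.1000/bbb", "10.1000/ccc"],
}

G = nx.Graph()
for article, dois in citations.items():
    G.add_node(article, kind="article")
    for doi in dois:
        G.add_node(doi, kind="doi")
        G.add_edge(article, doi)

# Project onto articles: two articles are linked if they cite a shared DOI.
articles = [n for n, data in G.nodes(data=True) if data["kind"] == "article"]
shared = bipartite.projected_graph(G, articles)
print(list(shared.edges()))
# [('COVID-19 pandemic', 'COVID-19 drug development')]
```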

Conclusions

Understanding how scientific research interacts with the digital knowledge-sphere during the pandemic provides insight into how Wikipedia can facilitate access to science. It also reveals how, aided by what we term its “citizen encyclopedists,” it successfully fended off COVID-19 disinformation and how this unique model may be deployed in other contexts.

Incentivising research data sharing: a scoping review

Abstract: Background: Numerous mechanisms exist to incentivise researchers to share their data. This scoping review aims to identify and summarise evidence of the efficacy of different interventions to promote open data practices and provide an overview of current research.

Methods: This scoping review is based on data identified from Web of Science and LISTA, limited to 2016–2021. A total of 1128 papers were screened, of which 38 items were included. Items were selected if they focused on designing or evaluating an intervention, or on presenting an initiative to incentivise sharing. Items comprised a mixture of research papers, opinion pieces and descriptive articles.

Results: Seven major themes in the literature were identified: publisher/journal data sharing policies, metrics, software solutions, research data sharing agreements in general, open science ‘badges’, funder mandates, and initiatives.

Conclusions: A number of key messages for data sharing include: the need to build on existing cultures and practices, meeting people where they are and tailoring interventions to support them; the importance of publicising and explaining the policy/service widely; the need to have disciplinary data champions to model good practice and drive cultural change; the requirement to resource interventions properly; and the imperative to provide robust technical infrastructure and protocols, such as labelling of data sets, use of DOIs, data standards and use of data repositories.

Brave New Publishing World | The Scientist Magazine®

“This simply means that journalistic outlets, members of the public, researchers, politicians, and other interested parties must be extra vigilant when considering findings reported in preprints. Mullins, who is also the editor in chief of the open-access journal Toxicology Communications (and my next-door neighbor), and other scientists offer several suggestions for this proper contextualization of preprints on the part of the research community. Giving preprints DOI numbers that expire after a set time instead of the permanent DOIs they now receive, referring to them as “unrefereed manuscripts,” or emblazoning each page of preprints with a warning label that alerts readers to the unreviewed nature of the paper may well help to present preprints as those first drafts of science.

On the journalism side, it’s imperative that newsrooms at media behemoths and niche publications alike adopt policies that strike a balance—between rapidly communicating valuable information and disseminating well-founded scientific insights—by appropriately contextualizing and vetting findings reported in preprints. At The Scientist, we have done just this, and our policies regarding preprints are posted on the Editorial Policies page of www.the-scientist.com. Please head there to review the specifics, and feel free to share your thoughts and comments on our social media channels. In my opinion, preprints and the servers that host them do still harbor a promise and utility that may help when the next global health emergency comes knocking. The scientific community, the public, the press, and the political sphere must adjust our views and treatment of these first drafts of science so that we avoid the pitfalls and reap the benefits of a more direct communication of research findings.”

92 million new citations added to COCI | OpenCitations blog

“It’s been a month since the announcement of 1.09 Billion Citations available in the July 2021 release of COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations.  

We’re now proud to announce the September 2021 release of COCI, which is based on open references to works with DOIs within the Crossref dump dated August 2021. This new release extends COCI with more than 92 Million additional citations, giving a total number of more than 1.18 Billion DOI-to-DOI citation links….”
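COCI's DOI-to-DOI citation links are openly queryable through the OpenCitations REST API. A minimal sketch, assuming only the requests library; the DOI below is the example used in the COCI API documentation, and any DOI indexed in COCI can be substituted:

```python
import requests

def coci_citations(doi: str) -> list:
    """Return open DOI-to-DOI citation links pointing at `doi` from COCI."""
    url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example DOI from the COCI API documentation; each link records the
# citing and cited DOIs.
for link in coci_citations("10.1186/1756-8722-6-59")[:5]:
    print(link["citing"], "->", link["cited"])
```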

Open Grant Proposals · Business of Knowing, summer 2021

“One of those informal frontiers is crowdfunding for scientific research. For the past year, I’ve worked on Experiment, helping hundreds of scientists design and launch crowdfunding campaigns for their research questions. Experiment has been doing this for almost a decade, with more than 1,000 successfully funded projects on the platform. The process is very different from the grant funding mechanisms set up by agencies and foundations. It’s not big money yet, as the average fundraise is still ~$5,000. But in many ways, the process is better: faster, more transparent, and more encouraging to early-career scientists. Of all the lessons learned, one stands out for broader consideration: grant proposals and processes should be open by default.

Grant proposals that meet basic requirements for scientific merit and rigor should be posted online, ideally in a standardized format, in a centralized database or clearinghouse (or several). They should include more detail than just the abstract and dollar amount totals currently shown on federal databases, especially in terms of budgets and costs. The proposals should include a DOI so that future work can point back to the original question, thinking, and scope. A link to these open grant proposals should be broadly accepted as a sufficient submission in response to requests from agencies or foundations….

Open proposals would make research funding project-centric, rather than funder-centric….

Open proposals would promote more accurate budgets….

Open proposals would increase the surface area of collaboration….

Open proposals would improve citation metrics….

Open proposals would create an opportunity to reward the best question-askers in addition to the best question-answerers….

Open proposals would give us a view into the whole of science, including the unfunded proposals and the experiments with null results….”
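A hypothetical sketch of the “standardized format” such a clearinghouse might use, with every field name invented for illustration rather than drawn from any existing standard:

```python
# Hypothetical open-proposal record; all field names are illustrative only.
proposal = {
    "doi": "10.XXXX/placeholder",        # minted when the proposal is posted
    "title": "Example research question",
    "authors": [{"name": "A. Researcher", "orcid": "0000-0000-0000-0000"}],
    "abstract": "...",
    "budget": {"total_usd": 5000,
               "items": [{"label": "reagents", "usd": 3000},
                         {"label": "sequencing", "usd": 2000}]},
    "status": "unfunded",                # funded | unfunded | withdrawn
    "results_doi": None,                 # later work points back to the question
}
```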

Open Research Infrastructure Programs at LYRASIS

“Academic libraries, and institutional repositories in particular, play a key role in the ongoing quest for ways to gather metrics and connect the dots between researchers and research contributions in order to measure “institutional impact,” while also streamlining workflows to reduce administrative burden. Identifying accurate metrics and measurements for illustrating “impact” is a goal that many academic research institutions share, but these goals can only be met to the extent that all organizations across the research and scholarly communication landscape are using best practices and shared standards in research infrastructure. For example, persistent identifiers (PIDs) such as ORCID iDs (Open Researcher and Contributor Identifier) and DOIs (Digital Object Identifiers) have emerged as crucial best practices for establishing connections between researchers and their contributions while also serving as a mechanism for interoperability in sharing data across systems. The more institutions using persistent identifiers (PIDs) in their workflows, the more connections can be made between entities, making research objects more FAIR (findable, accessible, interoperable, and reusable). Also, when measuring institutional repository usage, clean, comparable, standards-based statistics are needed for accurate internal assessment, as well as for benchmarking with peer institutions….”
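As one concrete instance of the PID interoperability described above: ORCID's public API returns a researcher's record, including works identified by DOI, as structured JSON. A minimal Python sketch using ORCID's documented sample iD (0000-0002-1825-0097, the fictional test researcher “Josiah Carberry”):

```python
import requests

def orcid_works(orcid_id: str) -> dict:
    """Fetch the public works section of an ORCID record as JSON."""
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

works = orcid_works("0000-0002-1825-0097")  # ORCID's sample test record
print(len(works.get("group", [])), "work groupings on this record")
```

Each work grouping carries external identifiers (typically DOIs), which is what lets repositories connect a researcher to their outputs across systems.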
