Long-term availability of data associated with articles in PLOS ONE | PLOS ONE

Abstract:  The adoption of journal policies requiring authors to include a Data Availability Statement has helped to increase the availability of research data associated with research articles. However, having a Data Availability Statement is not a guarantee that readers will be able to locate the data; even if provided with an identifier like a uniform resource locator (URL) or a digital object identifier (DOI), the data may become unavailable due to link rot and content drift. To explore the long-term availability of resources including data, code, and other digital research objects associated with papers, this study extracted 8,503 URLs and DOIs from a corpus of nearly 50,000 Data Availability Statements from papers published in PLOS ONE between 2014 and 2016. These URLs and DOIs were used to attempt to retrieve the data through both automated and manual means. Overall, 80% of the resources could be retrieved automatically, compared to much lower retrieval rates of 10–40% found in previous papers that relied on contacting authors to locate data. Because a URL or DOI might be valid but still not point to the resource, a subset of 350 URLs and 350 DOIs were manually tested, with 78% and 98% of resources, respectively, successfully retrieved. Having a DOI and being shared in a repository were both positively associated with availability. Although resources associated with older papers were slightly less likely to be available, this difference was not statistically significant, suggesting that URLs and DOIs may be an effective means for accessing data over time. These findings point to the value of including URLs and DOIs in Data Availability Statements to ensure access to data on a long-term basis.
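The automated retrieval step described in the abstract — resolving each URL or DOI over HTTP and recording whether the resource responds — can be sketched roughly as below. This is an illustrative sketch, not the paper's actual pipeline; the function names are invented, and bare DOIs are assumed to resolve through the standard doi.org resolver.

```python
import urllib.request
from urllib.error import HTTPError, URLError

def resolvable_url(identifier: str) -> str:
    """Normalize an identifier: bare DOIs resolve via the doi.org proxy."""
    if identifier.startswith(("http://", "https://")):
        return identifier
    return "https://doi.org/" + identifier

def check_availability(identifier: str, timeout: float = 10.0) -> bool:
    """Return True if the resource answers an HTTP HEAD request with status 200."""
    req = urllib.request.Request(
        resolvable_url(identifier),
        method="HEAD",
        headers={"User-Agent": "availability-check"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except (HTTPError, URLError):
        return False
```

Note that a 200 response only shows the link resolves; as the study's manual checks found, a valid identifier can still land on a page that no longer contains the data (content drift), which no automated status check can rule out.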



Produce, publish, promote, track and analyze: Altmetric and Figshare for NTROs – Digital Science

“Join us on June 15 at 2pm AEST for a webinar with Altmetric and Figshare. 

We’ll be discussing how you can effectively promote your Non-Traditional Research Outputs (NTROs) using a Figshare repository and how to generate Altmetric attention around those research outputs, ensuring you get the credit you deserve. …”

The experiment begins: Arcadia publishing 1.0 · Reimagining scientific publishing

“In thinking about how to share Arcadia’s research, we wanted to keep features of traditional publishing that have been honed over centuries, but improve upon what hasn’t quite adapted to the nature of modern science and technology. We have a unique opportunity to use our own research to develop mechanisms of sharing and quality control that can be more agile and adaptable. Our initial attempt is outlined here and we will continue to iterate upon it, always keeping the advancement of knowledge as our guiding principle when making decisions on what to try next….

We are reimagining scientific publishing — sharing our work early and often, maximizing utility and reusability, and improving our science on the basis of public feedback.

This is our first draft. We have ambitious goals and we’re committed to replicable long-term solutions, but we also know that “perfection is the enemy of good.” We’re using this platform to release findings now rather than hiding them until we’ve gotten everything exactly how we want it. Readers can think of the pubs on this platform as drafts that will evolve over time, shaped by public feedback. The same goes for the platform itself! We’re treating our publishing project like an experiment — we’re not sure where we will land, but we can only learn if we try. In this pub, we’re sharing our strategy and the reasoning behind some of our key decisions, highlighting features we’re excited about and areas for improvement. …”

Data and Software for Authors | AGU

“AGU requires that the underlying data needed to understand, evaluate, and build upon the reported research be available at the time of peer review and publication. Additionally, authors should make available software that has a significant impact on the research. This entails:

Depositing the data and software in a community accepted, trusted repository, as appropriate, and preferably with a DOI
Including an Availability Statement as a separate paragraph in the Open Research section explaining to the reader where and how to access the data and software
And including citation(s) to the deposited data and software, in the Reference Section….”

DOI (Digital Object Identifier) for Systematic Reviewers and other Researchers: Benefits, Confusions, and Need-to-Knows | by Farhad | Jan, 2022 | Medium

“DOI enhances the accessibility, discoverability, trustability, and interoperability of digital objects and serves the openness and visibility of professionally published content. While I am not a DOI expert, I know about it because I use it a lot in my profession. I believe DOI will play a significant role in the automation of literature reviews. More than it does now.

It is the responsibility of librarians, information specialists and other information professionals to raise awareness about the benefits of DOI. …”
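On the automation point the author raises: the doi.org resolver supports content negotiation, so a script can request structured bibliographic metadata for a DOI instead of following the redirect to the landing page. A minimal sketch, assuming CSL JSON is available for the DOI in question (Crossref- and DataCite-registered DOIs generally support it); the helper names here are illustrative.

```python
import json
import urllib.request

def doi_metadata(doi: str, timeout: float = 10.0) -> dict:
    """Ask doi.org for CSL JSON metadata rather than an HTML redirect."""
    req = urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

def short_reference(meta: dict) -> str:
    """Build a one-line reference from a CSL JSON record (works offline)."""
    return "{title} ({doi})".format(
        title=meta.get("title", "untitled"),
        doi=meta.get("DOI", "no DOI"),
    )
```

For a systematic review, batching calls like `doi_metadata` over an included-studies list is one way DOIs can feed the kind of literature-review automation the author anticipates.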

Citation needed? Wikipedia bibliometrics during the first wave of the COVID-19 pandemic | GigaScience | Oxford Academic

Abstract: Background

With the COVID-19 pandemic’s outbreak, millions flocked to Wikipedia for updated information. Amid growing concerns regarding an “infodemic,” ensuring the quality of information is a crucial vector of public health. Investigating whether and how Wikipedia remained up to date and in line with science is key to formulating strategies to counter misinformation. Using citation analyses, we asked which sources informed Wikipedia’s COVID-19–related articles before and during the pandemic’s first wave (January–May 2020).
Results

We found that coronavirus-related articles referenced trusted media outlets and high-quality academic sources. Regarding academic sources, Wikipedia was found to be highly selective in terms of what science was cited. Moreover, despite a surge in COVID-19 preprints, Wikipedia had a clear preference for open-access studies published in respected journals and made little use of preprints. Building a timeline of English-language COVID-19 articles from 2001–2020 revealed a nuanced trade-off between quality and timeliness. It further showed how pre-existing articles on key topics related to the virus created a framework for integrating new knowledge. Supported by a rigid sourcing policy, this “scientific infrastructure” facilitated contextualization and regulated the influx of new information. Last, we constructed a network of DOI–Wikipedia articles, which showed the landscape of pandemic-related knowledge on Wikipedia and how academic citations create a web of shared knowledge supporting topics like COVID-19 drug development.

Conclusions

Understanding how scientific research interacts with the digital knowledge-sphere during the pandemic provides insight into how Wikipedia can facilitate access to science. It also reveals how, aided by what we term its “citizen encyclopedists,” it successfully fended off COVID-19 disinformation and how this unique model may be deployed in other contexts.