SocArXiv Papers | A scoping review on the use and acceptability of preprints

Abstract:  Background: Preprints are open and accessible scientific manuscripts or reports that have not been submitted to a peer-reviewed journal. The value and importance of preprints have grown since their contribution during the public health emergency of the COVID-19 pandemic. Funders and publishers are establishing their positions on the use of preprints in grant applications and publishing models. However, the evidence supporting the use and acceptability of preprints varies across funders, publishers, and researchers. The purpose of this scoping review was to explore the current evidence on the use and acceptability of preprints by publishers, funders, and the research community throughout the research lifecycle.

  Methods: A scoping review was undertaken with no study or language limits. The search strategy was limited to the last five years (2017-2022) to capture changes influenced by COVID-19 (e.g., the accelerated use and role of preprints in research). The review included international literature, including grey literature, and two databases were searched: Scopus and Web of Science (24 August 2022). Results: 379 titles and abstracts and 193 full-text articles were assessed for eligibility. Ninety-eight articles met the eligibility criteria and were included for full extraction. For barriers and challenges, 26 statements were grouped under four main themes (volume/growth of publications, quality assurance/trustworthiness, risks associated with credibility, and validation). For benefits and value, 34 statements were grouped under six themes (openness/transparency, increased visibility/credibility, open review process, open research, democratic process/systems, and increased productivity/opportunities). Conclusions: Preprints provide opportunities for rapid dissemination, but there is a need for clear policies and guidance from journals, publishers, and funders. Cautionary measures are needed to maintain the quality and value of preprints, paying particular attention to how findings are translated to the public. More research is needed to address some of the uncertainties identified in this review.

Which Nationals Use Sci-Hub Mostly?: The Serials Librarian: Vol 0, No 0

Abstract:  In the last decade, Sci-Hub has become prevalent among academic information users across the world. By providing thousands of users with millions of copyrighted electronic academic resources, this information pirate website has become a significant threat to copyright in cyberspace. Information scholars have examined the unequal geographic distribution of Sci-Hub users’ IP addresses and emphasized the high proportion of users from developed countries. This study finds new evidence from Google Scholar. Searching “Sci-Hub.tw” in the academic search engine, the author finds 531 results containing the keyword. Considering these results, the author argues that academic users in South American countries may use Sci-Hub more frequently than their counterparts in the rest of the world. Moreover, users in the Global North also rely on Sci-Hub to complete their research. The new evidence from Google Scholar demonstrates the universal use of Sci-Hub across the world.

Conversion to Open Access using equitable new model sees upsurge in usage

“Leading nonprofit science publisher Annual Reviews has successfully converted the first fifteen journal volumes of the year to open access (OA), resulting in substantial increases in downloads of articles in the first month.

Through the innovative OA model called Subscribe to Open (S2O), developed by Annual Reviews, existing institutional customers continue to subscribe to the journals. With sufficient support, every new volume is immediately converted to OA under a Creative Commons license and is available for everyone to read and re-use. In addition, all articles from the previous nine volumes are also accessible to all. If support is insufficient, the paywall is retained….”

One more way AI can help us harness one of the most underutilized datasets in the world

“Satellite data may be one of the most underutilized datasets in the world. 

At Planet alone, we have six years of documented history — which means we have over 2,000 images on average for every point on earth’s landmass. This dataset at high resolution never existed before Planet came along and created it. 

What this dataset means is that you can see a lot of change…if you know where to look. 

We’re pulling down 30TB of data daily (nearly 4 million images!) off of ~200 satellites, and it would be impossible for humans to look at, consume and derive insights from all of that manually. Some days, it can literally feel like the world’s largest hidden picture puzzle. 

That’s why we crucially need artificial intelligence (AI) and machine learning (ML) to detect and inform us about what’s in this imagery. Given the size of our archive, it’s a veritable playground for Planeteers and our partners to train AI and ML models and to build algorithms that can extract objects and patterns – to find newly-built roads, identify collapsed or raised buildings, monitor change in forests throughout time, or track surveillance balloons over oceans – all possible today….”

When the Big Deal Gets Smaller: Use of ScienceDirect after Cancellations

Abstract:  This study investigates how article downloads from ScienceDirect changed after Temple University Libraries downsized its all-inclusive Elsevier big deal bundle to a selective custom package. After the libraries lost current-year access to nearly half of Elsevier’s active journals, the total downloads from Elsevier journals declined by 16.2 percent over three years. Combined use of still-subscribed and open access journals fell 10.6 percent in the same three years, suggesting that the drop in total use is due not only to the loss of journals but also to factors that would affect the remaining journals, such as the COVID-19 pandemic and a slight decrease in enrollment. Patrons may have substituted articles from still-subscribed and open access journals for those that were canceled, though the data are not conclusive. Reliance on open access appears to have increased.

Open Access Agreements: Factors to Consider – SPARC

“This document is intended to provide an overview of questions to ask and factors to consider when evaluating potential investments in open access (OA), for example:

When evaluating an offer from a publisher that incorporates an open access component with a subscription offer. This might include offers for read-and-publish/publish-and-read agreements or tiered membership models.
When evaluating an OA membership model that provides your institution’s authors with a discount on [or removal of] article processing charges (APCs).

There are many other models for open access transformations. Many of the same principles described in this document would apply to evaluating those offers. 

You can refer to OA analysis data sources for information on tools available for gathering this data.

In evaluating any OA offer, one must remember that collections decisions are based on many factors. Each subscription must be considered within the institution’s entire collections portfolio. This document addresses questions specific to agreements that include some sort of OA component….”
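To make one of these factors concrete, the sketch below compares the total cost of a status quo arrangement (subscription plus separately paid APCs) against a quoted read-and-publish fee. All figures and variable names are invented for illustration and are not drawn from the SPARC document; a real evaluation would weigh many non-price factors as well.

```python
# Purely hypothetical figures for illustration; not from the SPARC document.
current_subscription_spend = 120_000     # annual cost of the existing subscription deal
separate_apc_spend = 45_000              # APCs paid by the institution's authors outside the deal
proposed_read_and_publish_fee = 150_000  # quoted fee for a combined read-and-publish agreement

status_quo_total = current_subscription_spend + separate_apc_spend
difference = proposed_read_and_publish_fee - status_quo_total

print(f"Status quo (subscription + APCs): {status_quo_total:,}")
print(f"Proposed read-and-publish fee:    {proposed_read_and_publish_fee:,}")
print(f"Difference under the proposed agreement: {difference:+,}")
```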

‘All your data are belong to us’: the weaponisation of library usage data and what we can do about it | UKSG

By Caroline Ball – Academic Librarian, University of Derby, #ebookSOS campaigner
Twitter: @heroicendeavour, Mastodon: @heroicendeavour@mas.to

and Anthony Sinnott, Access and Procurement Development Manager, University of York; Twitter: @librarianth

What do 850 football players and their performance data have in common with academic libraries and online resources? More than you’d think! The connecting factor is data: how it is collected, how it is used, and for what purposes.

‘Project Red Card’ is demanding compensation for the use of footballers’ performance data by betting companies, video game manufacturers, scouts and others, arguing that players should have more control over how their personal data is collected and particularly how it is monetised and commercialised.

Similarly, libraries’ online resources, whether a single ebook or vast databases, are producing enormous amounts of data, utilised by librarians to assist us in our vital functions: assessing usage and value, determining demand and relevance.

But are we the only ones using this data generated by our users? What other uses is this data being put to? We know for certain that vendors have access to more data than they provide to us via COUNTER statistics, etc., but we have no way of knowing how much, what types, or what is done with it.

Witness the recent controversy generated by Wiley’s removal of 1,379 e-books from Academic Complete. Publishers like Wiley determine high use by accessing statistics generated by our end-users via the various e-book platforms through which they access the content. This in itself is indicative of our end-user/library data being provided to third parties without our knowledge or consent, which is particularly concerning given that our licences are with vendors and not publishers. We are also not privy to what data-sharing agreements exist between vendors and publishers. Should we allow library usage data to be weaponised against us in this fashion? What recourse do we have to push back against this practice of ‘data extractivism’, to either withhold this data from publishers and vendors or prohibit them from using it for their own commercial gain?

Video: Open Access Usage Data: Present Knowledge, Future Developments | Open Access Book Network @ Youtube

Christina Drummond (Executive Director of the OA eBook Usage Data Trust) and Lucy Montgomery (Professor of Knowledge Innovation at Curtin University and co-lead of the Curtin Open Knowledge Initiative) discuss the OAeBU Usage Data Trust project and the new developments its work will take over the coming years.

Lucy Montgomery’s slides are available here: https://zenodo.org/record/7309149

Utilization of Open Access Journals by Library and Information Science Undergraduates in Delta State University, Abraka, Nigeria

Abstract:  The study examined the utilization of open access journals by Library and Information Science (LIS) undergraduates at Delta State University, Abraka. Two research questions and one hypothesis guided the study. A descriptive survey design was used by the researchers. The population of the study comprised 477 LIS undergraduates, and a simple random sampling technique was used to determine the sample size of 217 students, representing 45% of the total population. A questionnaire was the instrument used for data collection; it was validated by two experts, and Cronbach’s alpha was used to establish the reliability of the instrument, which yielded 0.75. Data were analysed with frequency counts and simple percentages; Statistical Product and Service Solutions (SPSS) version 23 was used to generate means and standard deviations, while Pearson’s product-moment correlation coefficient was used to test the hypothesis at the 0.05 level of significance. The findings revealed that the students had a high level of awareness and a high level of usage of open access journals. The test of the hypothesis showed a significant relationship between the level of awareness and the use of open access journals; hence, the students’ level of awareness positively influenced their use of open access journals. Based on the findings, the researchers recommended that library management and lecturers continue to promote the use of open access journals among students to sustain their use.
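For readers unfamiliar with the tests named in the abstract, the sketch below shows what this kind of analysis looks like in practice: Pearson’s product-moment correlation tested at the 0.05 level, plus a hand-rolled Cronbach’s alpha. All data, variable names, and score distributions are hypothetical placeholders, not the study’s dataset.

```python
# Illustrative sketch only: hypothetical survey scores, not the study's actual data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_respondents = 217  # sample size reported in the abstract

# Hypothetical per-respondent scores (e.g., means of 5-point Likert items):
# awareness of open access journals and self-reported usage of them.
awareness = rng.normal(loc=3.8, scale=0.6, size=n_respondents).clip(1, 5)
usage = (0.6 * awareness + rng.normal(loc=1.4, scale=0.5, size=n_respondents)).clip(1, 5)

# Pearson's product-moment correlation, tested at the 0.05 significance level.
r, p_value = pearsonr(awareness, usage)
print(f"r = {r:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 10-item instrument used to check internal consistency of the questionnaire.
instrument = rng.integers(1, 6, size=(n_respondents, 10)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(instrument):.2f}")
```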

Tracking Open Access Usage – ChronosHub

“Open Access usage is a complex topic. In this webinar, we’ll look at what metrics can be collected and whether we should look at the data globally or at an institutional level, possibly to evaluate affiliated institutions’ APC payments or open access agreements.

We will discuss the topic both from a publisher and a library perspective, with panelists sharing their experiences and opinion on the feasibility of conducting a usage-based analysis of open access articles to determine their value to institutions and libraries….”

Open Science Observatory – OpenAIRE Blog

“The Open Science Observatory (https://osobservatory.openaire.eu) is an OpenAIRE platform showcasing a collection of indicators and visualisations that help policy makers and research administrators better understand the Open Science landscape in Europe, across and within countries.  

The broader context: As the number of Open Science mandates has been increasing across countries and scientific fields, so has the need to track Open Science practices and uptake in a timely and granular manner. The Open Science Observatory assists in monitoring, and consequently enhancing, open science policy uptake across different dimensions of interest, revealing weak spots and hidden potential. Its release comes in a timely fashion, in order to support UNESCO’s global initiative for Open Science and the European Open Science Cloud (the current development and enhancement is co-funded by the EOSC Future H2020 project and will appear in the EOSC Portal).  …

How does it work: Based on the OpenAIRE Research Graph, following open science principles and an evidence-based approach, the Open Science Observatory provides simple metrics and more advanced composite indicators which cover various aspects of open science uptake, such as

different openness metrics
FAIR principles
Plan S compatibility & transformative agreements
APCs

as well as measures of the outcomes of Open Access research output as they relate to

network & collaborations
usage statistics and citations
Sustainable Development Goals

across and within European countries. ”
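As a rough illustration of the sort of ‘simple openness metric’ listed above, the sketch below computes the share of open-access publications per country from a handful of publication records. The record layout and field values are assumptions made up for illustration; they are not the OpenAIRE Research Graph schema or its API.

```python
# Illustrative only: made-up publication records, not the OpenAIRE Research Graph schema.
from collections import defaultdict

# Each record: (country code of the affiliated institution, whether the publication is open access).
records = [
    ("GR", True), ("GR", False), ("NL", True),
    ("NL", True), ("FR", False), ("FR", True),
]

totals = defaultdict(int)
open_counts = defaultdict(int)
for country, is_open_access in records:
    totals[country] += 1
    if is_open_access:
        open_counts[country] += 1

# A simple openness indicator: share of open-access publications per country.
for country in sorted(totals):
    share = open_counts[country] / totals[country]
    print(f"{country}: {share:.0%} open access ({open_counts[country]}/{totals[country]})")
```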