Micah Vandegrift Appointed ARL Visiting Program Officer for Accelerating Social Impact of Research – Association of Research Libraries

“The Association of Research Libraries (ARL) has named Micah Vandegrift as a visiting program officer in the Scholars & Scholarship program for July 2021–July 2022. Vandegrift is the open knowledge librarian at NC State University Libraries.

As visiting program officer, Vandegrift will design and deliver a pilot experience for a cohort of eight ARL member libraries that are advancing open research practices at their institutions. The pilot Accelerating the Social Impact of Research (ASIR) program will help participants develop a strategic approach for advancing the social impact of science, aimed at building and reinforcing institutional points of influence for open research practices. This initiative is in coordination with the US National Academies of Sciences, Engineering, and Medicine (NASEM) Roundtable on Aligning Incentives for Open Science and with the NASEM Board on Research Data and Information (BRDI)….”

Open access science leads to more citations – The Science Show – ABC Radio National

Abstract:  The traditional method of releasing scientific results, still widely practiced, is to publish a paper in a peer-reviewed journal, one usually accessible only by subscription. But that is changing. Some results are now open for all to see. And it goes further: some scientists release their results step by step and welcome feedback while experiments are underway. This is open access science. Kiera McNeice, Research Data Manager at Cambridge University Press, says the publisher is pushing for more open access research while maintaining high standards of peer review. She says it leads to more citations, which for many scientists is a key measure of their work.

 

Citation Advantage? | Clarke & Esposito

“You might think that after 20 years of research and more than 130 studies on the subject, we’d have a clear picture of the effect that open access publishing has on an article’s citation performance. Unfortunately, decades of poor studies and a mystifying unwillingness to control for confounding factors continue to muddy the waters around the alleged open access citation advantage (OACA).

 
In a new paper published in PLOS ONE, authors from the University of Minnesota Libraries attempted to perform a meta-analysis of the 134 studies they could locate on the subject. But to be valid, a meta-analysis must look at comparable experiments, and because the OACA studies were so heterogeneous, this proved impossible. Definitions of “open access,” fields of study, time periods studied, etc. were all over the place, negating any possible conclusions that could be drawn….”

Article Processing Charges based publications: to which extent the price explains scientific impact?

The present study analyzes the relationship between the normalized citation score (NCS) of scientific publications and the article processing charges (APCs) of gold open access publications. To do so, we use APC information from the OpenAPC database and citation scores of publications from the Web of Science database (WoS). The dataset covers the period from 2006 to 2019, with 83,752 articles published in 4,751 journals belonging to 267 distinct publishers. Results show that, contrary to the belief that paying more buys more impact, paying dearly does not necessarily increase the impact of publications. First, large publishers with high impact are not the most expensive. Second, the publishers with the highest APCs are not necessarily the best in terms of impact. The correlation between APCs and impact is moderate. Moreover, the econometric analysis shows that the quality of a publication is strongly determined by the quality of the journal in which it is published. International collaboration also plays an important role in citation scores.
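
The paper’s headline analysis comes down to correlating APC amounts with normalized citation scores. A minimal sketch of that comparison, not the authors’ code: the toy rows below stand in for the merged OpenAPC/WoS table, and the column names (publisher, apc_eur, ncs) are hypothetical.

```python
# Sketch of the APC-vs-impact comparison; toy data stands in for the
# merged OpenAPC/WoS table (hypothetical columns: publisher, apc_eur, ncs).
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "publisher": ["A", "A", "B", "B", "C", "C"],
    "apc_eur":   [3500, 3200, 1500, 1700, 900, 1100],  # APC paid, in EUR
    "ncs":       [1.4,  0.9,  1.2,  1.6,  0.8, 1.0],   # normalized citation score
})

# Rank correlation: APCs and citation scores are both heavily skewed in
# real data, so Spearman is more robust here than Pearson.
rho, p = spearmanr(df["apc_eur"], df["ncs"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")

# Per-publisher view: are the most expensive publishers the highest-impact?
print(df.groupby("publisher")[["apc_eur", "ncs"]].mean()
        .sort_values("apc_eur", ascending=False))
```

A moderate rho on the real data would match the paper’s conclusion that price is a weak proxy for impact.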

Lo | The Factors Significant to the Introduction of Institutional Open Access Policies: Two Case Studies of R-1 Universities | Journal of Librarianship and Scholarly Communication

Abstract:  INTRODUCTION US universities are increasingly unable to afford research journal subscriptions due to the rising prices charged by for-profit academic publishers. Open access (OA) appears to be the most widely backed option for disrupting the current publishing model. The purpose of this study is to understand the factors significant to the introduction of institutional OA policies at selected United States R-1 universities. METHODS An in-depth qualitative study, including interviews with stakeholders, was conducted at two R-1 universities whose OA policies have been in place for at least five years. RESULTS The results of this study reveal that while the perceived unsustainability of the scholarly communication business model was an initial driver, open dissemination of knowledge was the primary factor in the development of institutional policies. DISCUSSION Open dissemination of knowledge aligns with the mission of both institutions. Interviewees believe that a wider and more open dissemination of the institution’s research output could positively affect their faculty’s research impact, which could then affect the institution’s reputation, rankings, classifications, and funding. CONCLUSION While the initial driver for exploring OA scholarly communication at both institutions was the perceived unsustainability of the scholarly communication model, the most important factor that led to the creation of their policies was the desire to disseminate the faculty’s scholarship.

 

The “Sci-Hub effect” can almost double the citations of research articles, study suggests

“Scientific articles that get downloaded from the scholarly piracy website Sci-Hub tend to receive more citations, according to a new study published in Scientometrics. The number of times an article was downloaded from Sci-Hub also turned out to be a robust predictor of future citations….”

Comparison of subscription access and open access obstetrics and gynecology journals in the SCImago database | Özay | Ginekologia Polska

Abstract:  Objectives:

The aim of this study is to compare the annual SCImago Journal Rank (SJR) and to evaluate other parameters that reflect the scientific impact of open access (OA) and subscription access (SA) journals in the field of obstetrics and gynecology, according to the SCImago database.

Material and methods: This study was conducted between September and December 2019 at Near East University. The SCImago Journal & Country Rank database was used to collect information about the journals. We evaluated and compared changes in the one-year SJR and journal impact factor (JIF) of OA and SA journals.

Results: Data from 183 scientific journals in the field of obstetrics and gynecology from the period between 1999 and 2018 were evaluated; 140 of these journals were SA and 43 were OA. The average SJR of OA journals in 1999 was 0.17, while it was 0.38 for SA journals. In 2018, these values were 0.31 and 0.78 for OA and SA journals, respectively. In the comparison of JIF, the average of the OA journals in 1999 was 0.09, while it was 0.66 for SA journals. In 2018, these values were 0.80 and 1.93 for OA and SA journals, respectively.

Conclusions: Access to information has become easier due to technological developments, and this will continue to affect the access policies of journals. Despite the disadvantages of predatory journals, the rise of OA journals in both number and quality is likely to continue.

Journal impact factor gets a sibling that adjusts for scientific field | Science | AAAS

“The new Journal Citation Indicator (JCI) accounts for the substantially different rates of publication and citation in different fields, Clarivate says. But the move is drawing little praise from the critics, who say the new metric remains vulnerable to misunderstanding and misuse….”
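
For context, Clarivate describes the JCI as the mean field-normalized citation impact of a journal’s papers: each paper’s citations are divided by the average citations of comparable papers (same field, publication year, and document type), so a score of 1.0 means the field’s world average. A toy sketch of that normalization logic, with invented data; the production calculation uses Web of Science categories, document types, and a three-year window.

```python
# Illustrative sketch of field normalization behind metrics like the JCI;
# not Clarivate's implementation. Toy records: (journal, field, year, cites).
from collections import defaultdict
from statistics import mean

papers = [
    ("J1", "oncology",    2019, 40),
    ("J1", "oncology",    2020, 12),
    ("J2", "mathematics", 2019, 3),
    ("J2", "mathematics", 2020, 1),
    ("J3", "oncology",    2019, 20),
    ("J3", "mathematics", 2019, 6),
]

# Expected citations = mean citations of all papers in the same field & year.
baseline = defaultdict(list)
for _, field, year, cites in papers:
    baseline[(field, year)].append(cites)
expected = {k: mean(v) for k, v in baseline.items()}

# Journal score = mean of per-paper normalized ratios; 1.0 = field average.
ratios = defaultdict(list)
for journal, field, year, cites in papers:
    ratios[journal].append(cites / expected[(field, year)])
for journal, r in sorted(ratios.items()):
    print(journal, round(mean(r), 2))
```

Note how the math journal’s handful of citations can match the oncology journal’s dozens once each is measured against its own field’s baseline, which is exactly the distortion the JCI is meant to correct.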

Promoting inclusive metrics of success and impact to dismantle a discriminatory reward system in science

Abstract:  Success and impact metrics in science are based on a system that perpetuates sexist and racist “rewards” by prioritizing citations and impact factors. These metrics are flawed and biased against already marginalized groups and fail to accurately capture the breadth of individuals’ meaningful scientific impacts. We advocate shifting this outdated value system to advance science through principles of justice, equity, diversity, and inclusion. We outline pathways for a paradigm shift in scientific values based on multidimensional mentorship and promoting mentee well-being. These actions will require collective efforts supported by academic leaders and administrators to drive essential systemic change.

 

Meet the new Faculty Opinions Score – Faculty Opinions Blog

“Traditional citation metrics, such as the journal impact factor, can frequently act as biased measurements of research quality and contribute to the broken system of research evaluation. Academic institutions and funding bodies are increasingly moving away from relying on citation metrics towards greater use of transparent expert opinion. 

Faculty Opinions has championed this cause for two decades, with over 230k recommendations made by our 8000+ Faculty Members, and we are excited to introduce the next step in our evolution – a formidable mark of research quality – the new Faculty Opinions Score….

The Faculty Opinions Score assigns a numerical value to research publications in Biology and Medicine to quantify their impact and quality compared to other publications in their field. 

The Faculty Opinions Score is derived by combining our unique star-rated Recommendations on individual publications, made by world-class experts, with bibliometrics to produce a radically new metric in the research evaluation landscape. 

Key properties of the Faculty Opinions Score: 

A score of zero is assigned to articles with no citations and no recommendations. 
The average score of a set of recommended articles has an expected value of 10. However, articles with many recommendations or highly cited articles may have a much higher score. There is no upper bound. 
Non-recommended articles generally score lower than recommended articles. 
Recommendations contribute more to the score than bibliometric performance. In other words, expert recommendations increase an article’s score (much) more than citations do….”
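
Faculty Opinions has not published the formula behind the score, so the sketch below is purely hypothetical: the weights and names are invented, written only to make the quoted properties concrete (zero with no stars and no citations, no upper bound, recommendations outweighing citations, and the recommended-article average pinned at 10 by normalization).

```python
# Hypothetical toy score -- NOT the Faculty Opinions formula, which is
# unpublished. Invented weights chosen so recommendations dominate citations.
import math

W_REC, W_CIT = 3.0, 1.0

def raw_score(stars: float, citations: int) -> float:
    """Zero iff no stars and no citations; grows without bound."""
    return W_REC * stars + W_CIT * math.log1p(citations)

def toy_score(stars: float, citations: int, recommended_avg_raw: float) -> float:
    """Scale so the average recommended article scores exactly 10."""
    return 10 * raw_score(stars, citations) / recommended_avg_raw

# Toy corpus of recommended articles: (sum of star ratings, citation count)
recommended = [(2, 50), (5, 10), (1, 300)]
avg_raw = sum(raw_score(s, c) for s, c in recommended) / len(recommended)

for s, c in recommended:
    print(round(toy_score(s, c, avg_raw), 2))   # these average to 10
print(round(toy_score(0, 40, avg_raw), 2))      # non-recommended scores lower
```

Even in this toy version, a moderately starred article outscores a heavily cited but unrecommended one, which is the behavior the quoted properties describe.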

New metric ‘leverages opinions of 8,000 experts’ | Research Information

“Faculty Opinions has introduced a new metric in the research evaluation landscape, leveraging the opinions of more than 8,000 experts. 

The Faculty Opinions Score is designed to be an early indicator of an article’s future impact and a mark of research quality. The company describes the implications for researchers, academic institutions and funding bodies as ‘promising’….”

Game over: empower early career researchers to improve research quality

Abstract:  Processes of research evaluation are coming under increasing scrutiny, with detractors arguing that they have adverse effects on research quality, and that they support a research culture of competition to the detriment of collaboration. Based on three personal perspectives, we consider how current systems of research evaluation lock early career researchers and their supervisors into practices that are deemed necessary to progress academic careers within the current evaluation frameworks. We reflect on the main areas in which changes would enable better research practices to evolve; many align with open science. In particular, we suggest a systemic approach to research evaluation, taking into account its connections to the mechanisms of financial support for the institutions of research and higher education in the broader landscape. We call for more dialogue in the academic world around these issues and believe that empowering early career researchers is key to improving research quality.

 

Social media attention and citations of published outputs from re-use of clinical trial data: a matched comparison with articles published in the same journals | BMC Medical Research Methodology

Abstract:  Background

Data-sharing policies in randomized clinical trials (RCTs) should have an evaluation component. The main objective of this case–control study was to assess the impact of published re-uses of RCT data in terms of media attention (Altmetric) and citation rates.

Methods

Re-uses of RCT data published up to December 2019 (cases) were searched for by two reviewers on 3 repositories (CSDR, YODA project, and Vivli) and matched to control papers published in the same journal. The Altmetric Attention Score (primary outcome), components of this score (e.g. mention of policy sources, media attention) and the total number of citations were compared between these two groups.

Results

89 re-uses were identified: 48 (53.9%) secondary analyses, 34 (38.2%) meta-analyses, 4 (4.5%) methodological analyses, and 3 (3.4%) re-analyses. The median (interquartile range) Altmetric Attention Score was 5.9 (1.3–22.2) for re-uses and 2.8 (0.3–12.3) for controls (p = 0.14). No statistical difference was found on any of the components of the Altmetric Attention Score. The median (interquartile range) number of citations was 3 (1–8) for re-uses and 4 (1–11.5) for controls (p = 0.30). Only 6/89 re-uses (6.7%) were cited in a policy source.

Conclusions

Using all available re-uses of RCT data to date from major data repositories, we were not able to demonstrate that re-uses attracted more attention than a matched sample of studies published in the same journals. Small average differences are still possible, as the sample size was limited. However, the matching choices have some limitations, so results should be interpreted very cautiously. Also, citations of re-uses by policy sources were rare.
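
The abstract does not name the statistical test used; for a matched design like this, a paired nonparametric test on case–control differences is one standard choice. A sketch under that assumption, with invented Altmetric scores:

```python
# Sketch of a matched case-control comparison like the one described above.
# The paper does not specify its test; a Wilcoxon signed-rank test on paired
# differences is one standard choice. The data below are invented.
import numpy as np
from scipy.stats import wilcoxon

altmetric_reuse   = np.array([5.9, 22.2, 1.3, 48.0, 3.5, 0.0, 15.2, 7.7])
altmetric_control = np.array([2.8, 12.3, 0.3, 50.1, 4.0, 1.0,  6.6, 2.9])

stat, p = wilcoxon(altmetric_reuse, altmetric_control)
print(f"Wilcoxon W = {stat}, p = {p:.2f}")

# Medians and IQRs, in the style the abstract reports:
for name, x in [("re-use", altmetric_reuse), ("control", altmetric_control)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median {med:.1f} (IQR {q1:.1f}-{q3:.1f})")
```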

Industry not harvest: Principles to minimise collateral damage in impact assessment at scale | Impact of Social Sciences

“As the UK closes the curtains on the Research Excellence Framework 2021 (REF2021) and embarks on another round of consultation, there is little doubt that, whatever the outcome, the expectation remains that research should be shown to be delivering impact. If anything, this expectation is only intensifying. Fuelled by the stated success of REF 2014, the appetite for impact assessment also appears – at least superficially – to be increasing internationally, albeit largely stopping short of mirroring a fully formalised REF-type model. Within this context, the UK’s Future Research Assessment Programme was recently announced, with a remit to explore revised or alternative approaches. Everything is on the table, so we are told, and the programme sensibly includes the convening of an external body of international advisors to cast their hopefully less jaded eyes upon proceedings….”