The Great Inflation: How COVID-19 affected the Journal Impact Factor of high impact medical journals

Abstract:  The journal impact factor (IF) is the leading method of scholarly assessment in today’s research world, influencing where scholars submit their research and where funders distribute their resources. The Coronavirus disease 2019 (COVID-19) pandemic, one of the most serious health crises, resulted in an unprecedented surge of publications across all areas of knowledge. An important question is whether COVID-19 affected this “gold standard of scholarly assessment”. We took as an example six high-impact general medicine journals (Annals, BMJ, Lancet, Nature, NEJM and JAMA) and searched the literature using the Web of Science database for manuscripts published between January 1, 2019 and December 31, 2021. To assess the effect of COVID-19 and non-COVID-19 literature on the journals’ scholarly impact, we calculated their annual IFs and the corresponding percentage changes. We then estimated the citation probability of COVID-19 and non-COVID-19 publications, along with their publication and citation rates by journal. A significantly greater increase in IF was seen for COVID-19 manuscripts published from 2019 to 2020 than for non-COVID-19 manuscripts. The likelihood of a highly cited publication was significantly higher for COVID-19 manuscripts from 2019 to 2021. The publication and citation rates of COVID-19 publications followed a positive trajectory, unlike those of non-COVID-19 publications, and the citation rate for COVID-19 publications peaked 10 months earlier than the publication rate. The rapid surge of COVID-19 publications demonstrated the capacity of scientific communities to respond to a global health emergency, yet inflated IFs create ambiguity when used as benchmark tools for assessing scholarly impact. The immediate implication is a loss of value of, and trust in, journal IFs as metrics of research and scientific rigour, as perceived by academia and society. Loss of confidence in the procedures employed by highly reputable publishers may incentivise authors to exploit the publication process by focusing their research exclusively on COVID-19 and may encourage them to publish in predatory journals.
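For context, the two-year IF behind figures like these is a simple ratio, so the year-over-year change the authors describe can be illustrated directly. The sketch below is a minimal illustration with toy numbers, not the authors’ code or data; every name and value in it is hypothetical.

```python
# Minimal sketch (not the study's code): the standard two-year journal impact
# factor and its year-over-year percentage change. All numbers are toy values.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """IF for year Y = citations received in Y by items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

def percent_change(if_old, if_new):
    return 100.0 * (if_new - if_old) / if_old

# Toy example: a journal whose citations to recent content surge between years.
if_2019 = impact_factor(citations_to_prev_two_years=30_000, citable_items_prev_two_years=1_500)
if_2020 = impact_factor(citations_to_prev_two_years=45_000, citable_items_prev_two_years=1_500)
print(f"IF 2019 = {if_2019:.1f}, IF 2020 = {if_2020:.1f} ({percent_change(if_2019, if_2020):+.1f}%)")
```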

 

‘Stop Congratulating Colleagues for Publishing in High-Impact Factor Journals’ – The Wire Science

“The current scholarly publishing system is detrimental to the pursuit of knowledge and needs a radical shift. Publishers have already anticipated new trends and have tried to protect their profits.
Current publishers’ power stems from the historical roots of their journals – and researchers are looking for symbolic status in the eye of their peers by publishing in renowned journals.
To counter them effectively, we need to identify obstacles that researchers themselves might face. Journals still perform some useful tasks and it requires effort to devise working alternatives.
There have already been many attempts and partial successes to drive a new shift in scholarly publishing. Many of them should be further developed and generalised.
In this excerpt from a report prepared by the Basic Research Community for Physics, the authors discuss these successes and make recommendations to different actors….”

Impact Factors, Altmetrics, and Prestige, Oh My: The Relationship Between Perceived Prestige and Objective Measures of Journal Quality | SpringerLink

Abstract:  The focus of this work is to examine the relationship between subjective and objective measures of prestige of journals in our field. Findings indicate that metrics drawn from Clarivate, Elsevier, and Google all show statistically significant relationships with perceived journal prestige. Just as several widely used bibliometric metrics were related to prestige, so were altmetric scores.

 

Open Access and Research Metrics – ChronosHub

“Let’s talk about research metrics, notably journal and article metrics, in an open access context. Is open access content read and hence cited more widely? Do open access journals have a higher impact factor than non-OA journals, or vice versa? And how does flipping a journal from closed to open affect the Impact Factor? Should we be looking at other metrics for open access content? And what are authors looking for, when choosing journals to submit their articles to? Our panelists will share their insights and possible answers to these questions through short presentations and a discussion.”

Superior identification index – Quantifying the capability of academic journals to recognize good research

Abstract:  In this paper we present the “superior identification index” (SII), a metric to quantify the capability of academic journals to recognize top papers within a specific time window and study field. Intuitively, SII is the percentage of a journal’s papers that fall within the top p% of papers in the field. SII provides a flexible framework for making trade-offs between journal quality and quantity: as p rises, it puts more weight on quantity and less weight on quality. Concerns about the selection of p are discussed, and extended metrics of SII, including superior identification efficiency (SIE) and paper rank percentile (PRP), are proposed to sketch other dimensions of journal performance. Based on bibliometric data from the ecological field, we find that as p increases, the correlation between SIE and JIF first rises and then drops, indicating that the JIF most likely reflects “how well a journal identifies the top 26~34% of papers in the field”. We hope the newly proposed SII metric and its extensions will promote quality awareness and provide flexible tools for research evaluation.
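Since SII is defined as the share of a journal’s papers above a field-wide citation threshold, it can be computed in a few lines. The sketch below is an illustrative reading of that definition on synthetic data, not the paper’s published code; the function name and the toy counts are assumptions.

```python
# Minimal sketch of SII as described above: the percentage of a journal's papers
# that fall within the field's top p% most-cited papers (same window and field).
import numpy as np

def sii(journal_citations, field_citations, p):
    """Citation counts per paper for the journal and for the whole field."""
    threshold = np.percentile(field_citations, 100 - p)        # count marking the field's top p%
    return 100.0 * np.mean(np.asarray(journal_citations) >= threshold)

# Toy data: 5 journal papers evaluated against a 1,000-paper field.
rng = np.random.default_rng(0)
field = rng.negative_binomial(1, 0.05, size=1_000)             # skewed, citation-like counts
journal = [120, 40, 9, 3, 0]
print(f"SII at p=10: {sii(journal, field, p=10):.0f}% of the journal's papers are field top-10%")
```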

Starstruck by journal prestige and citation counts? On students’ bias and perceptions of trustworthiness according to clues in publication references | SpringerLink

Abstract:  Research is becoming increasingly accessible to the public via open access publications, researchers’ social media postings, outreach activities, and popular disseminations. A healthy research discourse is typified by debates, disagreements, and diverging views. Consequently, readers may rely on the information available, such as publication reference attributes and bibliometric markers, to resolve conflicts. Yet, critical voices have warned about the uncritical and one-sided use of such information to assess research. In this study we wanted to gain insight into how individuals without research training place trust in research based on clues present in publication references. A questionnaire was designed to probe respondents’ perceptions of six publication attributes. A total of 148 students responded to the questionnaire, of which 118 were undergraduate students (with limited experience and knowledge of research) and 27 were graduate students (with some knowledge and experience of research). The results showed that the respondents were mostly influenced by the number of citations and the recency of publication, while author names, publication type, and publication origin were less influential. There were few differences between undergraduate and graduate students, with the exception that undergraduate students more strongly favoured publications with multiple authors over publications with single authors. We discuss possible implications for teachers who incorporate research articles in their curriculum.

 

The impact factors of social media users’ forwarding behavior of COVID-19 vaccine topic: Based on empirical analysis of Chinese Weibo users – PMC

Abstract:  Introduction

Social media, an essential source of public access to information about the COVID-19 vaccines, strongly shapes how vaccine information spreads and helps the public gain accurate insights into the effectiveness and safety of the vaccines. Users’ forwarding of posts on COVID-19 vaccine topics can disseminate vaccine information rapidly within a short period, amplifying transmission and helping the public access relevant information. However, the factors that drive social media users to forward such posts remain unclear. In this paper, we investigated the factors influencing the forwarding of COVID-19 vaccine posts on Chinese social media and verified the correlation between social network characteristics, the sentiment characteristics of Weibo text, and post forwarding.

Methods

This paper used data mining, machine learning, sentiment analysis, social network analysis, and regression analysis. Using “新冠疫苗” (COVID-19 vaccine) as the keyword, we crawled 121,834 Weibo posts on Sina Weibo from 1 January 2021 to 31 May 2021. Weibo posts not closely related to the topic of the COVID-19 vaccines were filtered out using machine learning, leaving 3,158 posts for data analysis. The proportions of positive and negative sentiment in the text of each Weibo post were calculated through sentiment analysis, and on that basis the sentiment characteristics of the posts were determined. The social network characteristics of information transmission on the COVID-19 vaccine topic were determined through social network analysis. Finally, the correlation between social network characteristics, the sentiment characteristics of the text, and the forwarding volume of posts was tested through regression analysis.
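To make the pipeline concrete, the sketch below strings together the same kinds of steps on a small synthetic network: centrality measures for posting users, a per-post negative-sentiment share, and an ordinary least squares regression of forwarding volume on those predictors. It is an illustrative approximation of the described methods, not the study’s code; the graph, sentiment values, and forwarding counts are all simulated.

```python
# Illustrative sketch of the described pipeline on synthetic data (not the
# study's dataset or code): user centralities plus negative-sentiment share
# as predictors of a post's forwarding volume.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                       # stand-in for the repost/mention network
degree      = dict(G.degree())
closeness   = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

rng = np.random.default_rng(1)
users = list(G.nodes())
neg_share = rng.uniform(0, 1, len(users))        # simulated negative-sentiment proportion per post
forwards = (np.array([degree[u] for u in users]) * 3
            + neg_share * 20 + rng.poisson(5, len(users)))   # simulated forwarding counts

# OLS: forwards ~ degree + closeness + betweenness + negative-sentiment share.
X = np.column_stack([
    np.ones(len(users)),
    [degree[u] for u in users],
    [closeness[u] for u in users],
    [betweenness[u] for u in users],
    neg_share,
])
coef, *_ = np.linalg.lstsq(X, forwards, rcond=None)
print(dict(zip(["intercept", "degree", "closeness", "betweenness", "neg_sentiment"], coef.round(2))))
```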

Results

The results suggest that there was a significant positive correlation between the degree of posting users in the social network structure and forwarding volume. The relationships between closeness centrality and forwarding volume, and between betweenness centrality and forwarding volume, were also significantly positive. Posts containing more positive sentiment showed no significant relationship with forwarding volume, whereas posts containing more negative sentiment were significantly positively correlated with forwarding volume.

Conclusion

According to the characteristics of users, COVID-19 vaccine posts from opinion leaders, “gatekeepers,” and users with high closeness centrality are more likely to be reposted. Users with these characteristics should be valued for their important role in disseminating information about COVID-19 vaccines. In addition, the sentiment contained in a Weibo post is an important factor influencing whether the public forwards vaccine posts. Special attention should be paid to posts with a negative sentimental tendency, in order to mitigate the negative impact of the information epidemic (infodemic) and improve the transmission of COVID-19 vaccine information.

Unified citation parameters for journals and individuals: Beyond the journal impact factor or the h-index alone | SpringerLink

Abstract:  We seek a unified and distinctive citation description of both journals and individuals. The journal impact factor has a restrictive definition that constrains its extension to individuals, whereas the h-index for individuals can easily be applied to journals. Going beyond any single parameter, the shape of each negative-slope Hirsch curve of citations vs. rank index is distinctive. This shape can be described through five minimal parameters or ‘flags’: the h-index itself on the curve; the average citation of each segment on either side of h; and the two axis endpoints. We obtain the five flags from real data for two journals and 10 individual faculty, showing that they provide unique citation fingerprints and enable detailed comparative assessments. A computer code is provided that takes citation data as input and calculates the five flags as output. Since papers (citations) can form nodes (links) of a network, Hirsch curves and five flags could carry over to describe local degree sequences of general networks.
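As a rough illustration of how such flags could be read off a citation list, the sketch below computes the h-index, the average citations on either side of h, and the two endpoints of the descending citation curve. This is one plausible reading of the description above on toy data, not the code the authors provide.

```python
# Illustrative reading of the 'five flags' of a Hirsch curve (citations sorted
# by rank): h-index, mean citations on either side of h, and the two endpoints.
import numpy as np

def five_flags(citations):
    c = np.sort(np.asarray(citations))[::-1]        # Hirsch curve: citations vs. rank
    ranks = np.arange(1, len(c) + 1)
    h = int(np.max(ranks[c >= ranks], initial=0))   # h-index
    head = c[:h].mean() if h > 0 else 0.0           # average citations within the h core
    tail = c[h:].mean() if h < len(c) else 0.0      # average citations beyond the h core
    return {"h": h, "avg_above_h": round(head, 2), "avg_below_h": round(tail, 2),
            "max_citations": int(c[0]), "n_papers": len(c)}

print(five_flags([45, 30, 22, 14, 9, 7, 4, 2, 1, 0]))   # toy citation record
```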

 

Grants and hiring: will impact factors and h-indices be scrapped?

“Universities, scientific academies, funding institutions and other organizations around the world will have the option to sign a document that would oblige signatories to change how they assess researchers for jobs, promotions and grants.

Signatories would commit to moving away from standard metrics such as impact factors, and adopting a system that rewards researchers for the quality of their work and their full contributions to science. “People are questioning the way they are being evaluated,” says Stephane Berghmans, director of research and innovation at the European University Association (EUA). The Brussels-based group helped to draft the agreement, which is known as the Agreement on Reforming Researcher Assessment. “This was the time.” 

Universities and other endorsers will be able to sign the agreement from 28 September. The European Commission (EC) announced plans last November for putting together the agreement; it proposed that assessment criteria reward ethics and integrity, teamwork and a variety of outputs, along with ‘research quality’ and impact. In January, the commission began to draft the agreement with the EUA and others….”

Does the journal impact factor predict individual article citation rate in otolaryngology journals? – Salman Hussain, Abdullah Almansouri, Lojaine Allanqawi, Justine Philteos, Vincent Wu, Yvonne Chan, 2022

Abstract:  Objective

Citation skew is a phenomenon that refers to the unequal citation distribution of articles in a journal. The objective of this study was to establish whether citation skew exists in Otolaryngology—Head and Neck Surgery (OHNS) journals and to elucidate whether journal impact factor (JIF) was an accurate indicator of citation rate of individual articles.

Methods

Journals in the field of OHNS were identified using Journal Citation Reports. After extraction of the number of citations received in 2020 by all primary research articles and review articles published in 2018 and 2019, a detailed citation analysis was performed to determine the citation distribution. The main outcome of this study was to establish whether citation skew exists within the OHNS literature and whether the JIF was an accurate predictor of individual article citation rates.

Results

Thirty-one OHNS journals were identified. Citation skew was prevalent across the OHNS literature, with 65% of publications achieving citation rates below the JIF. Furthermore, 48% of publications gathered either zero or one citation. The mean and median citations for review articles, 3.66 and 2, respectively, were higher than those for primary research articles, 2.35 and 1, respectively (P < .001). A statistically significant correlation was found between citation rate and JIF (r = 0.394, P = 0.028).
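The skew statistics quoted here (share of articles cited below the JIF, share with at most one citation, mean vs. median, and the journal-level correlation) are straightforward to reproduce on any citation dataset. The sketch below shows the calculations on invented numbers; none of the values are from the study.

```python
# Minimal sketch of the citation-skew checks described above, on toy numbers.
import numpy as np

jif = 2.9                                                              # hypothetical journal impact factor
article_citations = np.array([0, 0, 1, 1, 1, 2, 2, 3, 4, 6, 9, 27])   # toy 2020 citations per article

below_jif = 100.0 * np.mean(article_citations < jif)
zero_or_one = 100.0 * np.mean(article_citations <= 1)
print(f"{below_jif:.0f}% of articles cited below the JIF; {zero_or_one:.0f}% had 0-1 citations")
print(f"mean {article_citations.mean():.2f} vs. median {np.median(article_citations):.0f}")

# Across journals: correlation between mean article citation rate and JIF.
journal_jif = np.array([1.2, 2.0, 2.9, 4.5, 7.8])
journal_mean_citations = np.array([0.9, 1.4, 2.1, 2.6, 5.2])
print(f"Pearson r = {np.corrcoef(journal_jif, journal_mean_citations)[0, 1]:.2f}")
```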

Conclusions

The current results demonstrate a citation skew among OHNS journals, which is in keeping with findings from other surgical subspecialties. The majority of publications did not achieve citation rates equal to the JIF. Thus, the JIF should not be used to measure the quality of individual articles. Otolaryngologists should instead assess the quality of research by other means, such as evaluating the soundness of the scientific methodology and the relevance of the articles.

Uses of the Journal Impact Factor in national journal rankings in China and Europe – Kulczycki – Journal of the Association for Information Science and Technology – Wiley Online Library

Abstract:  This paper investigates different uses of the Journal Impact Factor (JIF) in national journal rankings and discusses the merits of supplementing metrics with expert assessment. Our focus is national journal rankings used as evidence to support decisions about the distribution of institutional funding or career advancement. The seven countries under comparison are China, Denmark, Finland, Italy, Norway, Poland, and Turkey—and the region of Flanders in Belgium. With the exception of Italy, the top-tier journals used in national rankings are those classified at the highest level, tier, or point score implemented. A total of 3,565 (75.8%) out of 4,701 unique top-tier journals were identified as having a JIF, with 55.7% belonging to the first Journal Impact Factor quartile. Journal rankings in China, Flanders, Poland, and Turkey classify journals with a JIF as top-tier only when they are in the first quartile of the Average Journal Impact Factor Percentile. Journal rankings that result from expert assessment in Denmark, Finland, and Norway regularly classify journals as top-tier outside the first quartile, particularly in the social sciences and humanities. We conclude that experts, when tasked with metric-informed journal rankings, take into account quality dimensions that are not covered by JIFs.
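As a concrete illustration of the quartile rule described for China, Flanders, Poland, and Turkey, the sketch below flags a journal as top-tier only when its Average JIF Percentile falls in the first quartile. The function name, the threshold encoding (percentile >= 75), and the journal values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: classify a journal as 'top tier' only if its Average JIF
# Percentile places it in the first quartile (assumed here to mean >= 75).
def top_tier_by_jif_percentile(avg_jif_percentile):
    return avg_jif_percentile is not None and avg_jif_percentile >= 75.0

journals = {"Journal A": 91.3, "Journal B": 62.0, "Journal C": None}   # None = no JIF
for name, pct in journals.items():
    verdict = "top tier" if top_tier_by_jif_percentile(pct) else "not top tier (under this rule)"
    print(name, "->", verdict)
```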

 

Who games metrics and rankings? Institutional niches and journal impact factor inflation – ScienceDirect

Abstract:  Ratings and rankings are omnipresent and influential in contemporary society. Individuals and organizations strategically respond to incentives set by rating systems. We use academic publishing as a case study to examine organizational variation in responses to influential metrics. The Journal Impact Factor (JIF) is a prominent metric linked to the value of academic journals, as well as career prospects of researchers. Since scholars, institutions, and publishers alike all have strong interests in affiliating with high JIF journals, strategic behaviors to ‘game’ the JIF metric are prevalent. Strategic self-citation is a common tactic employed to inflate JIF values. Based on empirical analyses of academic journals indexed in the Web of Science, we examine institutional characteristics conducive to strategic self-citation for JIF inflation. Journals disseminated by for-profit publishers, with lower JIFs, published in academically peripheral countries and with more recent founding dates were more likely to exhibit JIF-inflating self-citation patterns. Findings reveal the importance of status and institutional logics in influencing metrics gaming behaviors, as well as how metrics can affect work outcomes in different types of institutions. While quantitative rating systems affect many who are being evaluated, certain types of people and organizations are more prone to being influenced by rating systems than others.
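One way to see why JIF-inflating self-citation is attractive, as examined in this study, is to compare a journal’s impact factor with and without its own citations. The sketch below does that arithmetic on invented numbers; the figures and the function name are hypothetical, not results from the paper.

```python
# Illustrative arithmetic (toy numbers): how much of a journal's JIF-counting
# citations are self-citations, and the JIF recomputed without them.
def jif_with_and_without_self_citations(total_citations, self_citations, citable_items):
    jif = total_citations / citable_items
    jif_ex_self = (total_citations - self_citations) / citable_items
    self_share = 100.0 * self_citations / total_citations
    return jif, jif_ex_self, self_share

jif, jif_ex, share = jif_with_and_without_self_citations(
    total_citations=900, self_citations=300, citable_items=300)
print(f"JIF {jif:.1f} -> {jif_ex:.1f} without self-citations ({share:.0f}% self-cited)")
```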

The End of Journal Impact Factor Purgatory (and Numbers to the Thousandths) – The Scholarly Kitchen

“Clarivate Analytics announced today that they are granting all journals in the Web of Science Core Collection an Impact Factor with the 2023 release….

In 2015, Clarivate launched the ESCI. It was initially described as an index of journals that are up-and-coming — meaning new journals, or established journals in niche areas that are growing in impact. At the time of launch, publishers were told that a journal selected for ESCI will likely get an Impact Factor within a few years.

The model for ESCI seemed to shift a few years later and there are many journals in ESCI that have been there since 2015 that still don’t have Impact Factors. In fact, Clarivate includes content for indexed journals back to 2005 so there clearly were journals older than 10 years in the database when it launched.

Clarivate reports that ESCI has over 7800 journals with 3 million records. A little over a third of those records are open access records.

The inclusion criteria for all four indices include 24 quality measures and four “impact” measures. Those journals that meet all 28 criteria are included in SCIE, SSCI, and AHCI. Those that only meet the 24 quality measures were relegated to the ESCI….

This change announced today indicates that the four impact measures are no longer required in order to get an Impact Factor….

The second big announcement today is that with the 2023 release, Clarivate will “display” Impact Factors with only one decimal place instead of three! …”

Gardner et al. (2022) Implementing the Declaration on Research Assessment: a publisher case study

Gardner, Victoria, Mark Robinson, and Elisabetta O’Connell. 2022. “Implementing the Declaration on Research Assessment: A Publisher Case Study”. Insights 35: 7. DOI: http://doi.org/10.1629/uksg.573

Abstract

There has been much debate around the role of metrics in scholarly communication, with particular focus on the misapplication of journal metrics, such as the impact factor, in the assessment of research and researchers. Various initiatives have advocated for a change in this culture, including the Declaration on Research Assessment (DORA), which invites stakeholders throughout the scholarly communication ecosystem to sign up and show their support for practices designed to address the misuse of metrics. This case study provides an overview of the process undertaken by a large academic publisher (Taylor & Francis Group) in signing up to DORA and implementing some of its key practices, in the hope that it will provide some guidance to others considering becoming a signatory. Our experience suggests that research, consultation and flexibility are crucial components of the process. Additionally, approaching signing with a project mindset, rather than a ‘sign and forget’ mentality, can help organizations to understand the practical implications of signing, to anticipate and mitigate potential obstacles, and to support cultural change.