Abstract: We seek a unified and distinctive citation description of both journals and individuals. The journal impact factor has a restrictive definition that constrains its extension to individuals, whereas the h-index for individuals can easily be applied to journals. Going beyond any single parameter, the shape of each negative-slope Hirsch curve of citations vs. rank index is distinctive. This shape can be described through five minimal parameters or ‘flags’: the h-index itself on the curve; the average citation of each segment on either side of h; and the two axis endpoints. We obtain the five flags from real data for two journals and 10 individual faculty, showing they provide unique citation fingerprints, enabling detailed comparative assessments. A computer code is provided that takes citation data as input and produces the five flags as output. Since papers (citations) can form nodes (links) of a network, Hirsch curves and five flags could carry over to describe local degree sequences of general networks.
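The five flags described in this abstract can be sketched directly from a list of per-paper citation counts. The sketch below is one plausible reading, not the authors' published code: it assumes the "segments on either side of h" are the papers ranked above and below the h-th paper, and that the two axis endpoints are the maximum citation count (y-axis) and the number of cited papers (x-axis).

```python
def five_flags(citations):
    """Return (h, mean_above, mean_below, c_max, n_papers) for a citation list.

    Assumed reading of the abstract's five flags:
      1. h       - the h-index on the Hirsch curve
      2. mean_above - mean citations of the papers ranked 1..h (left segment)
      3. mean_below - mean citations of the papers ranked h+1.. (right segment)
      4. c_max   - y-axis endpoint: citations of the top-ranked paper
      5. n_papers - x-axis endpoint: number of papers with at least one citation
    """
    c = sorted((x for x in citations if x > 0), reverse=True)
    if not c:
        return (0, 0.0, 0.0, 0, 0)
    # h-index: the largest rank r such that the r-th ranked paper has >= r citations
    h = sum(1 for rank, cites in enumerate(c, start=1) if cites >= rank)
    above, below = c[:h], c[h:]
    mean_above = sum(above) / len(above) if above else 0.0
    mean_below = sum(below) / len(below) if below else 0.0
    return (h, mean_above, mean_below, c[0], len(c))

# Example: 6 papers with citations [10, 8, 5, 4, 3, 0] -> h = 4
print(five_flags([10, 8, 5, 4, 3, 0]))   # (4, 6.75, 3.0, 10, 5)
```

Whether the h-th paper belongs to the left or right segment, and whether zero-cited papers count toward the x-axis endpoint, are conventions the paper itself would settle; the sketch picks one consistently.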
“Universities, scientific academies, funding institutions and other organizations around the world will have the option to sign a document that would oblige signatories to change how they assess researchers for jobs, promotions and grants.
Signatories would commit to moving away from standard metrics such as impact factors, and adopting a system that rewards researchers for the quality of their work and their full contributions to science. “People are questioning the way they are being evaluated,” says Stephane Berghmans, director of research and innovation at the European University Association (EUA). The Brussels-based group helped to draft the agreement, which is known as the Agreement on Reforming Researcher Assessment. “This was the time.”
Universities and other endorsers will be able to sign the agreement from 28 September. The European Commission (EC) announced plans last November for putting together the agreement; it proposed that assessment criteria reward ethics and integrity, teamwork and a variety of outputs, along with ‘research quality’ and impact. In January, the commission began to draft the agreement with the EUA and others….”
“Open Access usage is a complex topic. In this webinar, we’ll look at what metrics can be collected, and whether we should look at the data globally, or at an institutional level, possibly to evaluate affiliated institutions’ APC payments or open access agreements.
We will discuss the topic both from a publisher and a library perspective, with panelists sharing their experiences and opinion on the feasibility of conducting a usage-based analysis of open access articles to determine their value to institutions and libraries….”
Abstract: The increase in the availability of data relevant to research performance evaluation over the past ten years has been transformational. Despite this, and the parallel increase in the power of computational tools, the indicators actually used in practice remain stubbornly limited to citation counts and simple derivatives such as impact factors, h-indices and field-normalised counts. The lack of diversity in indicators, along with a lack of diversity in data sources, has aligned with a lack of diversity in the academy to strengthen and perpetuate a status quo in which high-prestige researchers at high-prestige institutions gain greater resources, leading to more outputs in which they cite each other, increasing citation counts and propelling the whole cycle forward.
This talk will propose some simple, yet radical, shifts in how we think about research performance indicators. Using open data and transparent analysis, it will imagine a world in which we stop asking how to count more beans and instead ask how different they are.
“The Open Science Observatory (https://osobservatory.openaire.eu) is an OpenAIRE platform showcasing a collection of indicators and visualisations that help policy makers and research administrators better understand the Open Science landscape in Europe, across and within countries.
The broader context: As the number of Open Science mandates has been increasing across countries and scientific fields, so has the need to track Open Science practices and uptake in a timely and granular manner. The Open Science Observatory assists the monitoring, and consequently the enhancing, of open science policy uptake across different dimensions of interest, revealing weak spots and hidden potential. Its release comes in a timely fashion, in order to support UNESCO’s global initiative for Open Science and the European Open Science Cloud (the current development and enhancement is co-funded by the EOSC Future H2020 project and will appear in the EOSC Portal). …
How does it work: Based on the OpenAIRE Research Graph, following open science principles and an evidence-based approach, the Open Science Observatory provides simple metrics and more advanced composite indicators which cover various aspects of open science uptake, such as
- different openness metrics
- Plan S compatibility & transformative agreements
as well as measures related to the outcomes of Open Access research output as they relate to
- network & collaborations
- usage statistics and citations
- Sustainable Development Goals …”
Abstract: New methods of judging the impact of academic articles now include alternative metrics, and the goal of this study was to provide insight into the journals and papers with top Altmetric Attention Scores (AAS) in the field of journalism. Scopus and Dimensions were used as the primary data sources. Fifteen journalism journals were identified from Scopus, and papers from these journals with an Altmetric Attention Score of over 100 were collected from Dimensions as the study’s sample, which comprised 87 papers. Most of the papers with high AAS were published after 2017, and five were published in 2022. The sample included a larger number of closed access articles (n = 50) than open access (n = 37), although analysis revealed that open access articles had a higher median number of Tweets than closed access articles. Articles on journalism practice were more likely to receive attention from news outlets. None of the papers with high AAS is highly cited, which may be due to the limited time to accumulate citations. The journal with the highest impact factor (Digital Journalism) did not have the greatest number of papers with high AAS, but had far higher Twitter engagement scores than the other journals. The results show no correlation between impact factors or citation metrics and social metrics.
Abstract: Leading open access publishing advocate and pioneer Professor Martin Paul Eve considers several topics in an interview with WPCC special issue editor Andrew Lockett. These include the merits of considering publishing in the context of commons theory and communing, digital platforms as creative and homogenous spaces, cosmolocalism, the work of intermediaries or boundary organisations and the differing needs of library communities. Eve is also asked to reflect on research culture, the academic prestige economy, the challenges facing the humanities, digital models in trade literature markets and current influences in terms of work in scholarly communications and recent academic literature. Central concerns that arise in the discussion are the importance of values and value for money in an environment shaped by increasing demands for policies determined by crude data monitoring that are less than fully thought through in terms of their impact and their implications for academics and their careers.
While the term “usage data” most often refers to webpage views and downloads associated with a given book or book chapter, scholarly communications stakeholders have identified a near future where linked open access (OA) scholarship usage data analytics could directly inform publishing, discovery, and collections development in addition to impact reporting.
In the 2020-2022 Exploring Open Access Ebook Usage research project supported by the Mellon Foundation, publisher and library representatives expressed their interests in using OA eBook Usage (OAeBU) data analytics to inform overall OA program investment, strategy and fundraising. A report summarizing a year of virtual focus groups noted multiple operational use cases for OA book usage analytics, spanning book marketing, sales, and editorial strategy; collections development and hosting; institutional OA program strategy, reporting, and investment; and OA impact reporting for institutions and authors to support reporting to their funding agencies, donors, and policy-makers.
“ALLEA welcomes the adoption of the Conclusions on Research Assessment and Implementation of Open Science by the Council of the European Union on 10 June.
The Conclusions are in agreement with points that ALLEA has made over the years, in particular on the necessity of appropriately implementing and rewarding open science practices and the development of research assessment criteria that follow principles of excellence, research integrity and trustworthy science.
At the same time, ALLEA continues to stress that it matters how we open knowledge, as the push for Open Access publishing has also paved the way for various unethical publishing practices. The inappropriate use of journal- and publication-based metrics in funding, hiring and promotion decisions has been one of the obstacles in the transition to a more open science, and furthermore fails to recognize and reward the diverse set of competencies, activities, and outputs needed for our research ecosystem to flourish….”
Citation skew is a phenomenon that refers to the unequal citation distribution of articles in a journal. The objective of this study was to establish whether citation skew exists in Otolaryngology—Head and Neck Surgery (OHNS) journals and to elucidate whether journal impact factor (JIF) was an accurate indicator of citation rate of individual articles.
Journals in the field of OHNS were identified using Journal Citation Reports. After extraction of the number of citations in 2020 for all primary research articles and review articles published in 2018 and 2019, a detailed citation analysis was performed to determine citation distribution. The main outcome of this study was to establish whether citation skew exists within OHNS literature and whether JIF was an accurate predictor of individual article citation rate.
Thirty-one OHNS journals were identified. Citation skew was prevalent across OHNS literature, with 65% of publications achieving citation rates below the JIF. Furthermore, 48% of publications gathered either zero or one citation. The mean and median citations for review articles, 3.66 and 2, respectively, were higher than the mean and median number of citations for primary research articles, 2.35 and 1, respectively (P < .001). A statistically significant correlation was found between citation rate and JIF (r = 0.394, P = 0.028).
The current results demonstrate a citation skew among OHNS journals, which is in keeping with findings from other surgical subspecialties. The majority of publications did not achieve citation rates equal to the JIF. Thus, the JIF should not be used to measure the quality of individual articles. Otolaryngologists should assess the quality of research through the use of other metrics, such as the evaluation of sound scientific methodology, and the relevance of the articles.
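The study's headline finding, that most articles are cited less often than their journal's JIF, is a direct consequence of skewed citation distributions: a few highly cited papers pull the mean above the typical article. A minimal sketch, assuming a hypothetical list of per-article citation counts, shows how a JIF-style mean and the share of articles falling below it can be computed:

```python
def skew_summary(citations_per_article):
    """For a list of per-article citation counts, return the JIF-style mean
    (total citations / citable items) and the fraction of articles cited
    less often than that mean."""
    n = len(citations_per_article)
    jif_like = sum(citations_per_article) / n
    below = sum(1 for c in citations_per_article if c < jif_like)
    return jif_like, below / n

# Hypothetical journal: two highly cited papers inflate the mean,
# so most articles fall below it -- the citation skew the study reports.
cites = [60, 25, 10, 4, 2, 1, 1, 0, 0, 0]
mean, frac_below = skew_summary(cites)
print(round(mean, 1), frac_below)   # mean 10.3; 80% of articles below it
```

The numbers here are illustrative, not drawn from the OHNS dataset; the real JIF also restricts the citation window to the two preceding years, which the sketch leaves out.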
Abstract: Scholars and university administrators have a vested interest in building equitable valuation systems of academic work for both practical (e.g., resource distribution) and more lofty purposes (e.g., what constitutes “good” research). Well-established inequalities in science pose a difficult challenge to those interested in constructing a parsimonious and fair method for valuation as stratification occurs within academic disciplines, but also between them. Despite warnings against the practice, the popular h-index has been formally used as one such metric of valuation. In this article, we use the case of the h-index to examine how within and between discipline inequalities extend from the reliance of metrics, an illustration of the risk involved in the so-called “tyranny of metrics.” Using data from over 42,000 high performing scientists across 120 disciplines, we construct multilevel models predicting the h-index. Results suggest significant within-discipline variation in several forms, including a female penalty, as well as significant between discipline variation. Conclusions include recommendations to avoid using the h-index or similar metrics for valuation purposes.
Abstract: This paper investigates different uses of the Journal Impact Factor (JIF) in national journal rankings and discusses the merits of supplementing metrics with expert assessment. Our focus is national journal rankings used as evidence to support decisions about the distribution of institutional funding or career advancement. The seven countries under comparison are China, Denmark, Finland, Italy, Norway, Poland, and Turkey—and the region of Flanders in Belgium. With the exception of Italy, top-tier journals in national rankings are those classified at the highest level of whichever tier or points system is implemented. A total of 3,565 (75.8%) out of 4,701 unique top-tier journals were identified as having a JIF, with 55.7% belonging to the first Journal Impact Factor quartile. Journal rankings in China, Flanders, Poland, and Turkey classify journals with a JIF as being top-tier, but only when they are in the first quartile of the Average Journal Impact Factor Percentile. Journal rankings that result from expert assessment in Denmark, Finland, and Norway regularly classify journals as top-tier outside the first quartile, particularly in the social sciences and humanities. We conclude that experts, when tasked with metric-informed journal rankings, take into account quality dimensions that are not covered by JIFs.
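The quartile classification the study leans on can be sketched as a percentile rank within a subject category: journals in the top 25% by JIF are Q1, the next 25% Q2, and so on. This is a simplified illustration of the general scheme, with made-up JIF values, not Clarivate's exact procedure (which, for example, has its own tie-handling rules):

```python
def jif_quartile(jif, category_jifs):
    """Assign a quartile by ranking a journal's JIF within its subject
    category: percentile rank <= 0.25 is Q1, <= 0.50 is Q2, etc.
    Assumes `jif` appears in `category_jifs`."""
    ranked = sorted(category_jifs, reverse=True)
    rank = ranked.index(jif) + 1          # 1 = highest JIF in the category
    percentile = rank / len(ranked)
    if percentile <= 0.25:
        return "Q1"
    if percentile <= 0.50:
        return "Q2"
    if percentile <= 0.75:
        return "Q3"
    return "Q4"

category = [9.1, 6.4, 4.2, 3.3, 2.8, 2.1, 1.5, 0.9]   # hypothetical category
print(jif_quartile(6.4, category))   # rank 2 of 8 -> Q1
print(jif_quartile(2.8, category))   # rank 5 of 8 -> Q3
```

This makes the study's point concrete: a rule like "top-tier = Q1 only" is a single threshold on this percentile, whereas the expert panels in Denmark, Finland, and Norway regularly place journals above that line that the threshold alone would exclude.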
Abstract: Background: Given the increasing interest and potential use of social media for the promotion of orthopedic literature, there is a need to better understand Altmetrics. Purposes: We sought to determine the relationship between the Altmetric Attention Score (AAS) and the number of citations for articles on total joint arthroplasty (TJA) published in orthopedics journals. We also sought to determine the predictors of greater social media attention for these articles. Methods: Articles on TJA published in Bone and Joint Journal (BJJ), Journal of Bone and Joint Surgery (JBJS), Clinical Orthopedics and Related Research (CORR), Journal of Arthroplasty, Journal of Knee Surgery, Hip International, and Acta Orthopaedica in 2016 were extracted (n = 498). One-way analysis of variance with Bonferroni corrections was used to compare AAS and citations across journals. Multivariate regressions were used to determine predictors of social media attention and number of citations. Results: The mean AAS and number of citations were 7.5 (range: 0–289) and 16.7 (range: 0–156), respectively. Significant between-group effects were observed according to journal for AAS and number of citations. Publishing an article in JBJS was the strongest predictor of a higher number of citations. Publishing an article in BJJ was the only independent predictor of higher AAS, while publishing an article in JBJS or CORR trended toward statistical significance. A higher AAS was a significant predictor of a higher number of citations. Number of citations and number of study references were positive predictors of greater social media attention on Twitter and Facebook. Conclusions: In articles on TJA published in 7 journals in 2016, a higher AAS was associated with a higher number of citations. Various bibliometric characteristics were found to be significantly associated with greater social media attention; the most common influences were number of citations and number of references.
Researchers in orthopedics can use this information when considering how to assess the impact of their work.
What are the most influential articles in reproductive biology journals from 1980 to 2019 according to Altmetric Attention Score (AAS), number of citations and Relative Citation Ratio (RCR)?
Cross-sectional study of reproductive biology articles indexed in the National Institutes of Health Open Citation Collection from 1980 to 2019. Data were downloaded on 20 May 2021. The 100 articles with highest AAS, RCR and number of citations were analysed.
Twenty-one reproductive biology journals were identified, including 120,069 articles published from 1980 to 2019. In total 227 reproductive biology classics were identified due to some overlap between the three lists. Compared with the 100 articles with the highest AAS (after excluding articles featured on both lists), the 100 top-cited articles were older (2014 versus 2001, mean difference [95% confidence interval] 13.5 [11.5, 15.5]), less likely to be open access (64% versus 85%), more likely to be reviews (42% versus 12%) and less likely to be observational studies (9% versus 51%) and randomized clinical trials (0% versus 5%). These same trends were observed in analyses comparing the 100 articles with highest AAS to the 100 articles with highest RCR. The most common topic was assisted reproduction, but prominent topics included infertility for top AAS articles, reproductive technology in animals for top-cited articles, and polycystic ovary syndrome for top RCR articles.
Formerly, influential articles in reproductive biology journals were evaluated by absolute citation rates and subject to limitations of conventional bibliometric analysis. This is the first comprehensive study to use altmetrics and citation-based metrics to identify reproductive biology classics.
Abstract: This case study examines the outcomes of an altmetric analysis of open access (OA) and non-open access (non-OA) publications from the Rutgers Business School, Rutgers University, Newark and New Brunswick. It explains the magnitude of the 2014–2020 business faculty OA and non-OA publications and their relative scholarly impact and metrics. The continued increase in the volume of OA articles suggests that professors are gradually accepting these article types, and that altmetric and CiteScore journal ranking metrics data may strengthen strategic initiatives for business librarians to assist faculties and university libraries in collective decision-making processes.