How Frequent is the Use of Misleading Metrics? A Case Study of Business Journals – The Serials Librarian

Abstract:  There are many misleading scientific metrics that are not known to the scientific community, particularly to novice researchers. There is limited research in the area of misleading metrics, particularly in relation to business journals. Therefore, this research aims to examine the use of misleading metrics by business journals, the most popular misleading metrics, and the countries contributing to the website traffic for such metrics. We used the Scimago ranking of business journals and examined each journal’s website for the use of misleading metrics. Further, we used a domain-based approach, gathering data from Search Engine Optimization websites (i.e., Alexa and Ubersuggest). Only a few Scopus-indexed, low-quality business journals used misleading metrics on their websites. The most common misleading metrics were International Scientific Institute, Open Academic Journals Index, CiteFactor, IndexCopernicus, and International Scientific Indexing. In addition, Indian authors were the most frequent visitors to the websites of misleading metrics.

 

Market forces influence editorial decisions – ScienceDirect

“In this issue of Cortex, Huber et al. recount their experience in attempting to update the scientific record through an independent replication of a published study (Huber, Potter, & Huszar, 2019). In general, publishers resist issuing retractions, refutations or corrections to their stories or papers for fear of losing public trust, diminishing their brand and possibly ceding their market share (Sullivan, 2018). Unfortunately, this is just one way that market logic – retaining a competitive advantage among peers – explicitly or implicitly influences editorial priorities and decisions more broadly….

There’s the well-known tautology that news is what newsrooms decide to cover and what’s “newsworthy” is influenced by market logic. That news organizations, charged with relating truth and facts, are subject to market-based decisions is a major source of contention among the discerning public. It should be even more contentious that the stewards of scientific knowledge, academic publishers, are also beholden to it….

Although top journals are loath to admit they ‘chase cites’ (Editorial, 2018), market forces make this unavoidable. One example is a strategy akin to product cost cross-subsidization, such as when, in journalism, profitable traffic-driving, click-bait articles subsidize more costly, in-depth, long-form investigative reporting. In order to attract the ‘best’ science, top journals must maintain a competitive impact factor. If the impact factor strays too far from the nearest competitor, then the journal will have trouble publishing the science it deems most important, because of the worth coveted researchers place on perceived impact….

Although publishers tout the value of replications and pay lip service to other reformative practices, their policies in this regard are often vague and non-committal….

Most professional editors are committed to advancing strong science, but however well-intentioned and sought in good faith reforms are, they are necessarily hamstrung by market forces. This includes restrained requirements for more rigorous and responsible research conduct. Journals do not want to put in place policies that are seemingly so onerous that authors decide to instead publish in competing but less demanding journals. Researchers need incentives for and enforcement of more rigorous research practices, but they want easier paths to publication. The result is that new policies at top journals allow publishers to maintain a patina of progressiveness in the absence of real accountability….

The reforms suggested by Huber et al. are welcome short-term fixes, but the community should demand longer-term solutions that break up the monopoly of academic publishers and divorce the processes of evaluation, publication and curation (Eisen and Polka, 2018). Only then may we wrest the power of science’s stewardship from the heavy hand of the market.”

New to the SCN: Publishing Values-based Scholarly Communication | OER + ScholComm

This is the latest post in a series announcing resources created for the Scholarly Communication Notebook, or SCN. The SCN is a hub of open teaching and learning content on scholcomm topics that is both a complement to an open book-level introduction to scholarly communication librarianship and a disciplinary and course community for inclusively sharing models and practices. IMLS funded the SCN in 2019, permitting us to pay creators for their labor while building a solid initial collection. These works are the result of one of three calls for proposals (our first CFP was issued in fall 2020; the second in late spring ‘21, and the third in late fall 2021).

 

Research assessment reform: From rhetoric to reality | Science|Business

“At the same time, there is a growing consensus – both in Europe and elsewhere in the world – that the current assessment system needs to be rethought for an age of open science, big data, digitalisation and the demand for cross-disciplinary methods and skills. There are calls to improve the use of metrics, better balance quantitative and qualitative factors, and broaden the scope of assessment to reflect the full diversity of inputs, outputs and practices in 21st century science. The ultimate goal? To move away from inappropriate use of journal- and publication-based metrics in research assessment, towards a combination of metrics and narratives that reflect the value of research outputs and (researchers’ activities) in a more nuanced way….”

The Effect of Open Access on Scholarly and Societal Metrics of Impact in the ASHA Journals | Journal of Speech, Language, and Hearing Research

Abstract:  Purpose:

 This study examined the effect of open access (OA) status on scholarly and societal metrics of impact (citation counts and altmetric scores, respectively) across manuscripts published in the American Speech-Language-Hearing Association (ASHA) Journals.

Method:

 

Three thousand four hundred nineteen manuscripts published in four active ASHA Journals were grouped across three access statuses based on their availability to the public: Gold OA, Green OA, and Closed Access. Two linear mixed-effects models tested the effects of OA status on citation counts and altmetric scores of the manuscripts.

Results: 

Green OA and Gold OA significantly predicted increases of 2.70 and 5.21 citations, respectively, compared with Closed Access manuscripts (p < .001). Gold OA was estimated to predict a significant 25.7-point increase in altmetric scores (p < .001), but Green OA was only marginally significant (p = .068) in predicting a 1.44-point increase in altmetric scores relative to Closed Access manuscripts.

Discussion:

 

Communication sciences and disorders (CSD) research that is fully open receives more online attention and, overall, more scientific attention than research that is paywalled or available through Green OA methods. Additional research is needed to understand secondary variables affecting these and other scholarly and societal metrics of impact across studies in CSD. Ongoing support and incentives to reduce the inequities of OA publishing are critical for continued scientific advancement.
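As a rough illustration of the kind of analysis the ASHA study describes, the sketch below fits linear mixed-effects models of citation counts and altmetric scores on OA status in Python with statsmodels. The file name, column names, category labels, and the use of journal as the random-effect grouping factor are assumptions made for illustration, not the authors’ actual code or data.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per manuscript, with journal, OA status
# ('Closed', 'Green', 'Gold'), citation count, and altmetric score.
df = pd.read_csv("asha_manuscripts.csv")

# Linear mixed-effects models: fixed effect of OA status (Closed Access as
# the reference level), random intercept per journal (an assumed grouping).
m_cites = smf.mixedlm("citations ~ C(oa_status, Treatment('Closed'))",
                      data=df, groups=df["journal"]).fit()
m_altm = smf.mixedlm("altmetric_score ~ C(oa_status, Treatment('Closed'))",
                     data=df, groups=df["journal"]).fit()

# Coefficients estimate the increase relative to Closed Access manuscripts.
print(m_cites.summary())
print(m_altm.summary())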

Thoughts on the Many Different Paths to Achieving Open Access: Keynote with Dr. Ross Mounce – Library Events Calendar – LJMU Library

“Professor George Talbot, Pro-Vice Chancellor (Research) and Dean of Arts and Sciences, Edge Hill University will begin Open Research Week 2023 and welcome our keynote speaker, Dr. Ross Mounce. 

In this talk, Ross will reflect on how progress towards providing open access to all academic research is going; the good, the bad, and the ugly. 

The good is: we’re starting to realise that a lot of the problem boils down to copyright issues. The emergence and normalisation of rights retention is undoubtedly healthy. The bad news is: there are significant problems in the way that money is being spent to enable open access e.g. “transformative agreements” (sic). Transformative for whom? The ugly: Journal Impact Factor™ is statistically illiterate, negotiable, and irreproducible, but some researchers are still making decisions using it. The real question now is not can we get universal open access to research, but how.”

Research on the relationships between discourse leading indicators and citations: perspectives from altmetrics indicators of international multidisciplinary academic journals | Emerald Insight

Abstract:  Purpose

This paper aims to analyze the relationships between discourse leading indicators and citations from the perspective of integrating altmetrics indicators, and tries to provide a reference for understanding the quantitative indicators of scientific communication in the era of open science, for constructing an evaluation indicator system of discourse leading for academic journals, and for improving the discourse leading of academic journals.

Design/methodology/approach

Based on the theory of communication and the new pattern of scientific communication, this paper explores the formation process of academic journals’ discourse leading. This paper obtains 874,119 citations and 6,378,843 altmetrics indicators data from 65 international multidisciplinary academic journals. The relationships between indicators of discourse leading (altmetrics) and citations are studied by using descriptive statistical analysis, correlation analysis, principal component analysis, negative binomial regression analysis and marginal effects analysis. Meanwhile, the connotation and essential characteristics of the indicators, the strength and influence of the relationships are further analyzed and explored. It is proposed that academic journals’ discourse leading is composed of news discourse leading, social media discourse leading, peer review discourse leading, encyclopedic discourse leading, video discourse leading and policy discourse leading.

Findings

It is discovered that the 15 altmetrics indicators have a low degree of concentration around the center and a high degree of polarization and dispersion overall; their distributions do not follow normal distributions and have the characteristics of long-tailed, right-peaked curves. Overall, the 15 indicators show positive correlations, and wide gaps exist in the number of mentions and in coverage. Academic journals’ discourse leading significantly affects total citations. When altmetrics indicators of international mainstream academic and social media platforms are used to explore the connotation and characteristics of academic journals’ discourse leading, the influence or contribution of social media discourse, news discourse, video discourse, policy discourse, peer review discourse and encyclopedia discourse on citations decreases in that order.

Originality/value

This study is innovative in analyzing, at the academic journal level, the deep relationships between altmetrics indicators and citations from the perspective of correlation. First, this paper explores the formation process of academic journals’ discourse leading. Second, it integrates altmetrics indicators to study the correlation between discourse leading indicators and citations. This study will help to enrich and improve basic theoretical issues and indicator composition, provide theoretical support for the construction of a discourse leading evaluation system for academic journals, and provide ideas for evaluation practice.
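To make the methodology above more concrete, here is a minimal sketch of a negative binomial regression of citations on altmetrics indicators, with average marginal effects, in Python with statsmodels. The file and column names are assumptions for illustration; the authors’ actual indicator set and modelling choices are more extensive.

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per journal, with total citations and
# altmetrics indicator counts (column names are assumed).
df = pd.read_csv("journal_discourse_indicators.csv")
predictors = ["news", "social_media", "peer_review", "encyclopedia", "video", "policy"]
X = sm.add_constant(df[predictors])
y = df["total_cites"]

# Negative binomial regression suits over-dispersed count data such as citations.
nb = sm.NegativeBinomial(y, X).fit()
print(nb.summary())

# Average marginal effects of each indicator on the expected citation count.
print(nb.get_margeff().summary())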

Ranking the openness of criminology units: An attempt to incentivize the use of librarians, institutional repositories, and unit-dedicated subpages to increase scholarly impact and justice · CrimRxiv

Abstract:  In this article, I describe and explain a way for criminologists—as individuals, groups and, especially, as university units (e.g., colleges, departments, schools)—to increase the quantity and quality of open criminology: ask university librarians to make their outputs open access on their “unit repositories” (URs), which are unit-dedicated subpages on universities’ institutional repositories (IR). I try to advance this practice by devising and employing a metric, the “URscore,” to document, analyze, and rank criminology units’ contributions to open criminology, as prescribed. To illustrate the metric’s use, I did a study of 45 PhD-granting criminology units in the United States (US). I find almost all of them (98%) have access to an IR; less than two-thirds (62%) have a UR; less than one-third (29%) have used it this decade (up to August 11, 2022); their URs have a total of 190 open outputs from the 2020s, with 78% emanating from the top-three “most open”—per my ranking—PhD-granting criminology units in the US: those of the University of California, Irvine (with 72 open outputs), the John Jay College of Criminal Justice (with 47 such outputs), and the University of Nebraska, Omaha (with 30 such outputs). Each URscore reflects a criminology unit’s scholarly productivity and scholarly justice. I hope they see the ranking as a reward or opportunity for improvement. Toward that end, I conclude with a discussion of critical issues, instructions, and futures.

H index, journal citation indicator, and other impact factors in neurosurgical publications – Is there a ‘cost factor’ that determines the quality? – ScienceDirect

Abstract:  Objective

There has been an increase in the number of neurosurgical publications, including open access publications, over recent years. We aim to compare journals’ performance and its relationship to the fees incurred in publication. We have performed an in-depth analysis of various neurosurgical journals’ performance in terms of bibliometrics and have attempted to determine whether the cost incurred has any impact on the quality of a journal’s output.

Methods

We identified 53 journals publishing neurosurgery-related work. Quantitative analysis using various search engines involved obtaining H indices, journal citation indicators (JCI), and other journal metrics such as the immediacy index and 5-year impact factor, utilising Journal Citation Reports from Clarivate. Open access fees, coloured print costs, and individual subscription fees were collected. Correlations were assessed using Spearman’s rho (ρ), with significance set at p < 0.05.

Results

The median H index for the 53 journals is 54 (range: 0-292), with a median JCI of 0.785 (range: 0-2.45). The median immediacy index is 0.797 (range: 0-4.076), and the median 5-year impact factor is 2.76 (range: 0-12.704). There are very strong positive correlations between the JCI and the immediacy index, the JCI and the 5-year impact factor, and the 5-year impact factor and the immediacy index (ρ > 0.7, p < .05). There is a moderate positive correlation between the H index and the JCI (ρ = 0.399, p = 0.004). It is unclear whether there is any correlation between the indices and open access fees or personal subscription costs (p > 0.05).

Conclusions

Our analysis indicates that larger open access fees and personal subscription costs are not clearly reflected in the journals’ performance, as quantified using various indices. There appears to be a strong association among the journals’ metrics. It would be beneficial to include learning about the impact of bibliometric indices on research publications in medical education and training, to maximise the quality of the scientific work produced and increase the visibility of the information produced. A potential full movement to exclusively OA journals would form a significant barrier for junior researchers, small institutions, or full-time trainee doctors with limited funding available. This study suggests the need for a robust measurement of journals’ output and the quality of the work produced.
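For readers who want to reproduce this kind of analysis, the sketch below computes Spearman’s rho between pairs of journal metrics, and between metrics and fees, in Python with scipy. The file and column names are assumptions for illustration, not the authors’ data.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical data: one row per journal with bibliometric indices and costs.
df = pd.read_csv("neurosurgery_journals.csv")

pairs = [("jci", "immediacy_index"),
         ("jci", "if_5yr"),
         ("h_index", "jci"),
         ("jci", "oa_fee"),
         ("if_5yr", "subscription_fee")]
for a, b in pairs:
    rho, p = spearmanr(df[a], df[b], nan_policy="omit")
    print(f"{a} vs {b}: rho = {rho:.3f}, p = {p:.4f}")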

[2212.07811] Do altmetric scores reflect article quality? Evidence from the UK Research Excellence Framework 2021

Abstract:  Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with journal article quality. It exploits expert norm-referenced peer review scores from the UK Research Excellence Framework 2021 for 67,030+ journal articles in all fields 2014-17/18, split into 34 Units of Assessment (UoAs). The results show that altmetrics are better indicators of research quality than previously thought, although not as good as raw and field normalised Scopus citation counts. Surprisingly, field normalising citation counts can reduce their strength as a quality indicator for articles in a single field. For most UoAs, Mendeley reader counts are the best, tweet counts are also a relatively strong indicator in many fields, and Facebook, blog and news citations are moderately strong indicators in some UoAs, at least in the UK. In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities. The Altmetric Attention Score, although hybrid, is almost as good as Mendeley reader counts as a quality indicator and reflects more non-scholarly impacts.

 

[2212.05416] In which fields are citations indicators of research quality?

Abstract:  Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the extent to which citation counts reflect research quality is not well understood. We report the largest-scale evaluation of the relationship between research quality and citation counts, correlating them for 87,739 journal articles in 34 field-based Units of Assessment (UoAs) from the UK. We show that the two correlate positively in all academic fields examined, from very weak (0.1) to strong (0.5). The highest correlations are in health, life sciences and physical sciences and the lowest are in the arts and humanities. The patterns are similar for the field classification schemes of Scopus and this http URL. We also show that there is no citation threshold in any field beyond which all articles are excellent quality, so lists of top cited articles are not definitive collections of excellence. Moreover, log transformed citation counts have a close to linear relationship with UK research quality ranked scores that is shallow in some fields but steep in others. In conclusion, whilst appropriately field normalised citations associate positively with research quality in all fields, they never perfectly reflect it, even at very high values.
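A minimal sketch of the per-field analysis described above: correlating citation counts with peer-review quality scores within each Unit of Assessment, and regressing quality scores on log-transformed citations to show the roughly linear relationship. The file and column names are assumed for illustration, not the authors’ data.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical data: one row per article with its UoA, citation count,
# and peer-review quality score (column names are assumed).
df = pd.read_csv("ref2021_articles.csv")

for uoa, grp in df.groupby("uoa"):
    rho, _ = spearmanr(grp["citations"], grp["quality_score"])
    # log1p keeps zero-citation articles defined; the slope shows how steep
    # the approximately linear quality-vs-log-citations relationship is.
    slope, intercept = np.polyfit(np.log1p(grp["citations"]), grp["quality_score"], 1)
    print(f"UoA {uoa}: rho = {rho:.2f}, slope = {slope:.2f}")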

 

Responsible Research Assessment I: Implementing DORA for hiring and promotion in psychology | PsychArchives

Abstract:  The use of journal impact factors and other metric indicators of research productivity, such as the h-index, has been heavily criticized for being invalid for the assessment of individual researchers and for fueling a detrimental “publish or perish” culture. Multiple initiatives call for developing alternatives to existing metrics that better reflect quality (instead of quantity) in research assessment. This report, written by a task force established by the German Psychological Society, proposes how responsible research assessment could be done in the field of psychology. We present four principles of responsible research assessment in hiring and promotion and suggest a two-step assessment procedure that combines the objectivity and efficiency of indicators with a qualitative, discursive assessment of shortlisted candidates. The main aspects of our proposal are (a) to broaden the range of relevant research contributions to include published data sets and research software, along with research papers, and (b) to place greater emphasis on quality and rigor in research evaluation.

 

Open Science: Emergency Response or the New Normal? | Acta Médica Portuguesa

From Google’s English:  “To align with open science, the assessment of research and researchers has to be broader, valuing all contributions and results (and not just publications), and adopting an essentially qualitative perspective, based on the review by peers, with limited and responsible use of quantitative indicators. There has also been slow progress in this domain, but it is hoped that the recently presented Agreement on Reforming Research Assessment and the Coalition for Advancing Research Assessment will speed up and give greater breadth to the transformation of the assessment process. If the three conditions mentioned above are met in the coming years, open science will no longer be just the science of emergencies. And open and collaborative research practices, with rapid dissemination of results, could become dominant, being considered the correct way of doing science, without the need to designate them as open science.”

Guest Post – How Do We Measure Success for Open Science? – The Scholarly Kitchen

“If the success of an innovation relates to the practice of Open Science – which at PLOS is about much more than reputation; it’s central to our mission – then what does success look like? And how do you measure it at the publisher scale? Indeed, to make progress towards any goal, good data are needed, including a view of your current and desired future states. Unfortunately, as recently as last year, there were no tools or services that could tell us everything we wanted to know, at PLOS, about Open Science practices. Benefits of Open Science – economic, societal, research impact and for researcher careers – are often highlighted, and to deliver these long-term benefits, measurably increasing adoption of Open Science practices is a prerequisite goal….”