Distortion of journal impact factors in the era of paper mills: Molecular Therapy

Abstract:  Academia’s obsession with the journal impact factor has been a subject of debate for some time. Most would probably agree that it is useful as a crude measure of a journal’s prestige, quality, and general influence on a scientific or medical field but should not be overinterpreted. Nonetheless, some institutions go as far as disregarding a student’s or faculty member’s publications in journals with impact factors less than a certain number (often the magic number is 5) when it comes to performance evaluation, promotion, graduation, or hiring. Such overemphasis ignores that one journal with a lower impact factor may actually have more rigorous standards for acceptance of a paper than another with a higher impact factor. This situation may be observed for a variety of reasons, such as the degree of specialization of a journal or the ratio of review articles vs. original research papers. Another more nefarious contributor to a journal’s impact factor, manipulated citations, is also growing and threatening to expose the deepening cracks in the foundation of academia’s favorite metric.
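
For context, the metric at issue is the standard two-year journal impact factor, which for a given year Y is computed (in simplified form) as:

```latex
% Simplified statement of the two-year journal impact factor for year Y.
\mathrm{JIF}_{Y} =
  \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}
       {\text{citable items (research articles and reviews) published in } Y-1 \text{ and } Y-2}
```

Because the denominator counts only citable items while the numerator counts citations to anything the journal published, a journal's mix of review articles versus original research can shift the ratio independently of editorial rigor, and manipulated citations inflate the numerator directly.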

 

Article-level metrics: A new approach to quantify reach and impact of published research – ScienceDirect

Abstract:  A spectrum of measuring tools is available to evaluate the impact of published literature and the journals in which it appears. Journal-Level Metrics (JLM) such as the Journal Impact Factor (JIF) or CiteScore assess the reputation of peer-reviewed journals on the basis of citation analysis, whereas Article-Level Metrics (ALM) quantify the importance, reach, and impact of a particular article. Traditionally, JLM have served as a proxy for an individual publication’s significance; however, contemporary and alternative metrics that measure the digital or societal influence of a particular article have gained popularity in recent times. These metrics support the rapid dissemination of research, the development of newer research strategies, and individual academic progress. We highlight the characteristics and importance of currently available ALM, as well as newer metrics influenced by social media, digital media, and Open Access publishing models.


Altmetrics analysis of selected articles in the field of social sciences | Emerald Insight

Abstract:  Purpose

This study aims to measure the impact of the selected papers in the field of social sciences indexed in Scopus using altmetrics tools.

Design/methodology/approach

The research population consists of articles by Iranian researchers in the field of social sciences indexed in the Scopus database from 2014 to 2018. Several of the most important altmetric service providers were used to assess the presence of these research outputs on social media and to measure their impact. The relationships between altmetric activity and variables such as researchers’ scientific collaboration, open access journals, and the quality of research journals were also investigated through appropriate correlation tests.

Findings

The findings indicated that the most important social media platforms on which Iranian articles are shared are Mendeley, Twitter, and Facebook. The correlation tests showed a weak but statistically significant positive relationship between researchers’ scientific collaboration and their altmetric activity. There is also a weak but statistically significant relationship between journal openness and altmetric scores. The findings further suggest that articles published in journals with higher quality indicators have higher altmetric scores and are more likely to be present on social media.

Research implications

In this study, social network indicators are introduced as a way to examine the effectiveness of research activities on social media. These indicators can be used to evaluate the impact and usefulness of articles and other scientific outputs, complementing traditional scientometric indicators and compensating for their shortcomings. What distinguishes altmetric criteria from other scientometric measures is their speed, ease, and transparency: publications can be evaluated regardless of their formal format and in the shortest possible time, and the social impact of works is measured alongside their scientific impact.

Originality/value

The results of this study show that using altmetric service providers not only reflects the social impact of publications by authors in different subject areas but also helps libraries, universities, research organizations, and policymakers in planning, budgeting, and allocating resources.

Altmetrics and their relationship with citation counts: a case of journal articles in physics | Emerald Insight

Abstract:  Purpose

The first purpose of the present study is to investigate the coverage of journal articles in physics in various sources of altmetrics. Second, the study investigates the relationship between altmetrics and citations. Finally, it examines whether the relationship between citations and altmetrics is stronger or weaker for articles that have been mentioned at least once in the sources of altmetrics.

Design/methodology/approach

Journal articles in physics with at least one author from an Indian institution, published during 2014–2018, were investigated for their coverage in sources of altmetrics. Altmetric.com was used to collect the altmetrics data. Spearman’s rank correlation coefficient (ρ) was used, as the data were found to be skewed.
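
As a rough illustration of the statistical approach named here (not the authors’ actual code), Spearman’s ρ between altmetric mentions and citations can be computed as in the following sketch; the counts are made-up placeholders.

```python
# Illustrative sketch of a Spearman rank correlation between altmetric
# mentions and citation counts, as used in studies of this kind.
# The data below are hypothetical placeholders, not the study's data.
from scipy.stats import spearmanr

# Hypothetical per-article counts (e.g. exported from Altmetric.com
# and a citation database), aligned by article.
tweets    = [0, 3, 1, 12, 0, 5, 2, 0, 7, 1]
citations = [2, 4, 1, 20, 0, 6, 3, 1, 9, 2]

# Spearman's rho is rank-based, so it is robust to the heavy skew
# typical of both altmetric and citation distributions.
rho, p_value = spearmanr(tweets, citations)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```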

Findings

The highest coverage was found on Twitter (22.68%), followed by Facebook (3.62%) and blogs (2.18%); coverage in the remaining sources was less than 1%. The average number of Twitter mentions for journal articles tweeted at least once was approximately 4 (3.99), and the average number of Facebook mentions was 1.48. Correlations between Twitter mentions and citations and between Facebook mentions and citations were statistically significant but weakly positive.

Research limitations/implications

The study concludes that, due to the low coverage of journal articles, altmetrics should be used cautiously for research evaluation, keeping disciplinary differences in mind. The study also suggests that altmetrics can function as a complement to citation-based metrics.

Originality/value

The study is one of the first large-scale altmetrics studies dealing with research in physics. Indian research has also received little attention in the altmetrics literature, and the present study fills that void.

Research assessment exercises are necessary — but we need to learn to do them better

“Research evaluation at the Australian Research Council (ARC), one of the country’s main funding bodies, is set to get a makeover. Last month, an independent expert review recommended that the ARC scrap its 13-year-old research-scoring system, known as Excellence in Research for Australia (ERA), and its companion exercise, the Engagement and Impact assessment, which grades the real-world benefits of institutions’ work. Both had been on hold since last August, awaiting the findings of the review.

This is a rare case in which an evaluation system can be rewritten from scratch. The ARC should take this opportunity to improve how it measures and communicates the value of Australia’s research workforce, on the basis of not just lessons learnt from the ERA’s deficiencies, but also principles that have been developed and implemented elsewhere in the world. In doing so, it will help to create a research culture that reflects the best possible values that research should represent….”

Harnessing the Metric Tide: indicators, infrastructures & priorities for UK responsible research assessment

“This review was commissioned by the joint UK higher education (HE) funding bodies as part of the Future Research Assessment Programme (FRAP). It revisits the findings of the 2015 review The Metric Tide to take a fresh look at the use of indicators in research management and assessment. 

While this review feeds into the larger FRAP process, the authors have taken full advantage of their independence and sought to stimulate informed and robust discussion about the options and opportunities of future REF exercises. The report should be read in that spirit: as an input to ongoing FRAP deliberations, rather than a reflection of their likely or eventual conclusions. 

The report is written in three sections. Section 1 plots the development of the responsible research assessment agenda since 2015 with a focus on the impact of The Metric Tide review and progress against its recommendations. Section 2 revisits the potential use of metrics and indicators in any future REF exercise, and proposes an increased uptake of ‘data for good’. Section 3 considers opportunities to further support the roll-out of responsible research assessment policies and practices across the UK HE sector. Appendices include an overview of progress against the recommendations of The Metric Tide and a literature review. 

We make ten recommendations targeted at different actors in the UK research system, summarised as: 

1: Put principles into practice. 

2: Evaluate with the evaluated. 

3: Redefine responsible metrics. 

4: Revitalise the UK Forum. 

5: Avoid all-metric approaches to REF. 

6: Reform the REF over two cycles. 

7: Simplify the purposes of REF. 

8: Enhance environment statements. 

9: Use data for good. 

10: Rethink university rankings….”

What Should Impact Assessment Look Like for Social Science? — Sage

“A decade ago, the San Francisco Declaration on Research Assessment, or DORA, tackled the pressing need to improve how funders, institutions, policy makers and others evaluated scientific research and its outputs. Existing measures, centered on scholarly citation, tended to use where the outputs were published as a proxy for the research’s quality, utility, and impact, measuring all disciplines with the same yardstick.

In the 10 years since, various efforts to improve assessment and measure societal impact have launched that downplay or even eliminate literature-based measurements. Ideas for these new measures focus on impact in the real world, address disciplinary differences such as those between social science and physical science, and offer useful tools for researchers and end-users alike.

This panel will engage representatives of various social and behavioral science disciplines, as well as publishers, to discuss:

What does impact assessment look like from their perch?

What should it look like?

How have their perspectives on impact changed over the last decade?

What changes would they like to see 10 years from now?

What necessary next steps should be taken – whether immediately practical or aspirational?…”

Saudi universities entice top scientists to switch affiliations — sometimes with cash

“Research institutions in Saudi Arabia are gaming global university rankings by encouraging top researchers to change their main affiliations, sometimes in exchange for cash, and often with little obligation to do meaningful work. That’s the conclusion of a report that shows how, over the past decade, dozens of the world’s most highly cited researchers have switched their primary affiliations to universities in the country. That, in turn, has boosted the standing of Saudi Arabian institutions in university ranking tables, which consider the citation impacts of an institution’s researchers….”

How can altmetrics improve the Public Communication of Science and Technology? An analysis on universities and altmetrics

Abstract:  In current research evaluation models, monitoring and impact evaluation extend beyond peer-reviewed articles to include Public Communication of Science and Technology (PCST) activities. Through an online survey, we analyzed perceptions of the relevance and degree of application of altmetric indicators for PCST at 51 sampled Brazilian federal universities. Among the 26 indicators surveyed, perceptions of the relevance and application of altmetrics proved to be an outlier: 66.7% of respondents said they did not know the relevance of altmetrics for PCST or considered it not applicable to the field. Regarding perceived relevance, the indicator “Mentions tracked by altmetrics” received high relevance scores (7 and 9) from 21.5% of respondents. The indicator was also the least applied, with only one university (1.9%) using it. In addition, 45% of respondents reported having no intention of applying it, 41.1% intend to apply it in the long term, and 11.7% in the short term.

How to Improve the Reach of Your Open Access Research | ACS Environmental Au

“Researchers at universities and other organizations are increasingly expected to demonstrate not only the scholarly impact of their research but also to show that the research has a broader reach and societal impact. Various metrics measure the impact of a research article. Many researchers are accustomed to assessing the impact of their articles by counting the number of citations after publication using online databases. While the number of citations provides one measure of the scholarly impact of an article, it does not necessarily provide information on whether the article is reaching a wider audience.

An additional metric available in ACS Environmental Au and all ACS journals is the Altmetric score. The web page for articles in ACS Environmental Au displays the number of “Article Views,” which is the total number of full-text article downloads (both PDF and HTML) across all institutions and individuals, the Altmetric score, and the number of citations since the publication of the article. The full-text article download number itself is a key indicator of the growing influence of an article. The Altmetric score records the attention an article has received online by measuring the number of times an article is reported in news outlets and articles, commented on in blogs, posted on social media (generally Twitter and Reddit), saved in reference managers such as Mendeley, or listed in an online encyclopedia (Wikipedia). An overall score is attributed to each article based on these measures. The makeup of the score is revealed by clicking on the Altmetric score or “doughnut” on the article web page….”
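
As a minimal sketch of how such an aggregate attention score can be assembled, assuming illustrative per-source weights rather than Altmetric’s actual proprietary weighting, one might compute a weighted sum of mention counts per source:

```python
# Minimal sketch of a weighted "attention score" aggregated across
# online sources, in the spirit of the score described above.
# The weights are illustrative assumptions, NOT Altmetric's real ones.
SOURCE_WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "wikipedia": 3.0,
    "twitter": 1.0,
    "reddit": 0.25,
}

def attention_score(mentions: dict[str, int]) -> float:
    """Weighted sum of mention counts over the sources tracked above."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Hypothetical article with a few mentions in each source.
print(attention_score({"news": 2, "twitter": 15, "blogs": 1}))  # 36.0
```

The point of the sketch is only that different sources contribute with different weights to a single headline number, which is why the breakdown behind the score (the “doughnut”) matters more than the number itself.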

Indian PhDs, professors are paying to publish in real-sounding, fake journals. It’s a racket

This newspaper article describes the publishing behavior of a large section of Indian researchers who publish their research in predatory journals. Pressure to publish, lack of awareness, and career progression are some of the reasons.

[2304.05157] The Many Publics of Science: Using Altmetrics to Identify Common Communication Channels by Scientific field

Abstract:  Altmetrics have led to new quantitative studies of science through social media interactions. However, there are no models of science communication that account for the multiplicity of non-academic channels. Using the 3653 authors with the highest volume of altmetric mentions of their publications (2016-2020) across the main channels (Twitter, news, Facebook, Wikipedia, blogs, policy documents, and peer reviews), we analyzed where the audiences of each discipline are located. The results reveal the generalities and specificities of these new communication models and the differences between areas. These findings are useful for the development of science communication policies and strategies.

 

Indicators of Open Research: UKRN call for priorities

“Today UKRN is launching a call for members of the research community to help us prioritise which aspects of open research are most important for us to monitor. Our particular focus is on helping institutions to monitor those aspects of openness and transparency in research that are most relevant to their development as organisations, rather than to assess individual researchers or research teams (although there may not be a clear line between those two purposes in practice). The relevant aspects of openness and transparency will be different for different kinds of research and for different kinds of institution. Our aim is to develop a palette of potential indicators that can be the basis for working with a group of UKRN institutions and a group of solutions providers, so that we can plan pilots (where that is feasible) and explore longer term options (where pilots are not yet feasible).

You can read the call for priorities here, and respond here. Responses are particularly sought from staff and research students at UK institutions, and are welcome before the end of April….”

Anchoring effects in the assessment of papers: An empirical survey of citing authors | PLOS ONE

Abstract:  In our study, we have empirically studied the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in whether the assessment of a paper can be influenced by numerical information that acts as an anchor (e.g. citation impact). We undertook a survey of corresponding authors with an available email address in the Web of Science database. The authors were asked to assess the quality of papers that they had cited in previous papers. Some authors were assigned to one of three treatment groups that received further information alongside the cited paper: citation impact information, information on the publishing journal (journal impact factor), or a numerical access code to enter the survey. The control group did not receive any further numerical information. We are interested in whether possible adjustments in the assessments can be produced not only by quality-related information (citation impact or journal impact) but also by numbers that are not related to quality, i.e. the access code. Our results show that the quality assessments of papers seem to depend on the citation impact information for single papers. The other information (anchors), such as an arbitrary number (an access code) and journal impact information, did not play an (important) role in the assessments of papers. The results point to a possible anchoring bias caused by insufficient adjustment: it seems that the respondents assessed cited papers differently when they observed paper impact values in the survey. We conclude that initiatives aiming at reducing the use of journal impact information in research evaluation either were already successful or overestimated the influence of this information.