[2309.15884] The strain on scientific publishing

Abstract:  Scientists are increasingly overwhelmed by the volume of articles being published. Total articles indexed in Scopus and Web of Science have grown exponentially in recent years; the 2022 total was 47% higher than in 2016, far outpacing the limited growth, if any, in the number of practising scientists. Publication workload per scientist (writing, reviewing, editing) has therefore increased dramatically. We define this problem as the strain on scientific publishing. To analyse this strain, we present five data-driven metrics showing publisher growth, processing times, and citation behaviours. We draw these data from web scrapes, requests for data from publishers, and material that is freely available through publisher websites. Our findings are based on millions of papers produced by leading academic publishers. We find that specific groups have grown disproportionately in articles published per year, contributing to this strain. Some publishers enabled this growth by adopting a strategy of hosting special issues, which publish articles with reduced turnaround times. Given the publish-or-perish pressure on researchers competing for funding, these offers to publish more articles likely amplified the strain. We also observed widespread year-over-year inflation of journal impact factors coinciding with this strain, which risks confusing quality signals. Such exponential growth cannot be sustained. The metrics we define here should enable this evolving conversation to reach actionable solutions to address the strain on scientific publishing.
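As a quick back-of-the-envelope check (mine, not the paper's), the reported 47% rise over the six years from 2016 to 2022 implies a compound annual growth rate of roughly 6.6%:

```python
import math

# The abstract reports that 2022 article totals were 47% above 2016 levels.
growth_ratio = 1.47
years = 2022 - 2016  # a 6-year span

# Compound annual growth rate implied by that six-year rise.
cagr = growth_ratio ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~6.6% per year

# At a constant rate, output doubles every ln(2)/ln(1 + r) years.
doubling_time = math.log(2) / math.log(1 + cagr)
print(f"Implied doubling time: {doubling_time:.0f} years")  # ~11 years
```

A doubling time of roughly a decade is what makes the abstract's "cannot be sustained" claim concrete.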

 

 

You do not receive enough recognition for your influential science | bioRxiv

Abstract:  During career advancement and funding allocation decisions in biomedicine, reviewers have traditionally depended on journal-level measures of scientific influence like the impact factor. Prestigious journals are thought to pursue a reputation of exclusivity by rejecting large quantities of papers, many of which may be meritorious. This process could create a system whereby some influential articles are prospectively identified and recognized by journal brands while most influential articles are overlooked. Here, we measure the degree to which journal prestige hierarchies capture or overlook influential science. We quantify the fraction of scientists’ articles that would receive recognition because they (a) are published in journals above a chosen impact factor threshold, or (b) are at least as well cited as articles appearing in such journals. We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues. At the investigator level, this phenomenon extends across gender, racial, and career stage groupings of scientists. We also find that approximately half of researchers never publish in a venue with an impact factor above 15, which under journal-level evaluation regimes may exclude them from consideration for opportunities. Many of these researchers nonetheless publish equally influential work, raising the possibility that the journal-level measures routinely considered under decision-making norms, policy, or law may recognize as little as 10-20% of the work that warrants recognition.
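A minimal sketch of the comparison the abstract describes, assuming per-paper citation counts and each venue's impact factor are available. The data and column names are invented, and using the high-IF subset's median citations as the benchmark is my illustrative assumption, not necessarily the authors' exact method:

```python
import pandas as pd

# Illustrative data: per-paper citation counts and the impact factor of
# the journal each paper appeared in (values and names are hypothetical).
papers = pd.DataFrame({
    "citations":  [3, 45, 120, 8, 60, 15, 200, 5],
    "journal_if": [2.1, 31.0, 4.5, 1.2, 16.8, 3.3, 2.9, 42.0],
})

THRESHOLD = 15  # the impact factor cutoff discussed in the abstract

# (a) papers recognized because their venue clears the threshold
in_high_if = papers["journal_if"] >= THRESHOLD

# (b) papers cited at least as well as a typical paper in such venues,
#     here proxied by the median citations of the high-IF subset
benchmark = papers.loc[in_high_if, "citations"].median()
as_well_cited = papers["citations"] >= benchmark

print(f"Published above IF {THRESHOLD}: {in_high_if.sum()} papers")
print(f"Cited at least as well:      {as_well_cited.sum()} papers")
```

Even in this toy table, more papers clear the citation benchmark than appear in high-IF venues, which is the asymmetry the study reports at scale.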

 

Why article-level metrics are better than JIF if you value talent over privilege – The Ideophone

“The enormous difference in sheer volume means that an OA megajournal is likely to have quite a few papers with more cites than the Nature median — high-impact work that we would miss entirely if we focused only on the JIF. The flip side is where we find the halo effect: there are, in any given year, hundreds of Nature papers that underperform quite a bit relative to the IF (indeed half of them underperform relative to the median). This (the skewed distributions for both the megajournal and the glamour journal) shows why it is a bad idea to ascribe properties to individual papers based on how other papers published under the same flag have been cited….”
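The arithmetic behind this point is easy to reproduce with simulated, log-normally skewed citation counts; the distributions and journal sizes below are invented for illustration, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented, log-normally skewed citation distributions: a small "glamour"
# journal with a high typical paper, and a megajournal with a lower typical
# paper but vastly more of them.
glamour = rng.lognormal(mean=4.0, sigma=1.0, size=800)
megajournal = rng.lognormal(mean=2.0, sigma=1.3, size=20_000)

glamour_median = np.median(glamour)

# Sheer volume: the megajournal's tail alone can exceed the glamour
# journal's entire output above its own median.
mega_above = (megajournal >= glamour_median).sum()
glamour_above = (glamour >= glamour_median).sum()  # half, by definition

print(f"Glamour median citations:    {glamour_median:.0f}")
print(f"Megajournal papers above it: {mega_above}")
print(f"Glamour papers above it:     {glamour_above}")
```

With these made-up parameters the megajournal's tail contains several times as many papers above the glamour journal's median as the glamour journal itself, despite a far lower typical paper.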

Influence of Publication Capacity on Journal Impact Factor for International Open Access Journals from China: Insights from Microeconomic Analysis

Abstract:  The evolving landscape of open access (OA) journal publishing holds significant importance for policymakers and stakeholders who seek to make informed decisions and develop strategies that foster sustainable growth and advancements in open access initiatives within China. This study addressed the shortcomings of the current journal evaluation system and recognized the necessity of researching the elasticity of annual publication capacity (PUB) in relation to the Journal Impact Factor (JIF). By constructing an economic model of elasticity, a comparative analysis of the characteristics and dynamics of international OA journals from China and overseas was conducted. The analysis categorized OA journals based on their respective elasticity values and provided specific recommendations tailored to each category. These recommendations offer valuable insights into the development and growth potential of both OA journals from China and overseas. Moreover, the findings underscore the importance of strategic decision-making to strike a balance between quantity and quality in OA journal management. By comprehending the dynamic nature of elasticity, China can enhance its OA journal landscape, effectively meet the academic demand from domestic researchers, minimize the outflow of OA publications to overseas markets, and fortify its position within the global scholarly community.
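The abstract does not reproduce its model, but the standard microeconomic definition of the elasticity it invokes (the percentage change in JIF per one percent change in publication capacity) would take this form; this is my reconstruction, not necessarily the paper's exact specification:

```latex
\varepsilon_{\mathrm{PUB}}
  = \frac{\mathrm{d}\,\mathrm{JIF}/\mathrm{JIF}}{\mathrm{d}\,\mathrm{PUB}/\mathrm{PUB}}
  = \frac{\mathrm{d}\ln \mathrm{JIF}}{\mathrm{d}\ln \mathrm{PUB}}
  \approx \frac{\Delta \ln \mathrm{JIF}}{\Delta \ln \mathrm{PUB}}
```

Read this way, |ε| > 1 would flag journals whose impact factor is highly sensitive to expanding output, while |ε| < 1 would flag journals that can scale publication volume with little effect on JIF; this is presumably the axis along which the study categorizes journals.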

 

Relationship between journal impact factor and the thoroughness and helpfulness of peer reviews | PLOS Biology

Abstract:  The Journal Impact Factor is often used as a proxy measure for journal quality, but the empirical evidence is scarce. In particular, it is unclear how peer review characteristics for a journal relate to its impact factor. We analysed 10,000 peer review reports submitted to 1,644 biomedical journals with impact factors ranging from 0.21 to 74.7. Two researchers hand-coded sentences using categories of content related to the thoroughness of the review (Materials and Methods, Presentation and Reporting, Results and Discussion, Importance and Relevance) and helpfulness (Suggestion and Solution, Examples, Praise, Criticism). We fine-tuned and validated transformer machine learning language models to classify sentences. We then examined the association between the number and percentage of sentences addressing different content categories and 10 groups defined by the Journal Impact Factor. The median length of reviews increased with higher impact factor, from 185 words (group 1) to 387 words (group 10). The percentage of sentences addressing Materials and Methods was greater in the highest Journal Impact Factor journals than in the lowest Journal Impact Factor group. The results for Presentation and Reporting went in the opposite direction, with the highest Journal Impact Factor journals giving less emphasis to such content. For helpfulness, reviews for higher impact factor journals devoted relatively less attention to Suggestion and Solution than lower impact factor journals. In conclusion, peer review in journals with higher impact factors tends to be more thorough, particularly in addressing study methods while giving relatively less emphasis to presentation or suggesting solutions. Differences were modest and variability high, indicating that the Journal Impact Factor is a bad predictor of the quality of peer review of an individual manuscript.
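A minimal sketch of the aggregation step the study describes, after the transformer has labelled each review sentence (the fine-tuning itself is omitted here); the rows and column names are invented:

```python
import pandas as pd

# Invented example: one row per review sentence, already labelled with a
# content category (as the study's fine-tuned transformer would produce)
# and the impact factor group (1 = lowest, 10 = highest) of the journal.
sentences = pd.DataFrame({
    "jif_group": [1, 1, 1, 10, 10, 10, 10],
    "category":  ["Presentation and Reporting", "Suggestion and Solution",
                  "Materials and Methods", "Materials and Methods",
                  "Materials and Methods", "Praise",
                  "Presentation and Reporting"],
})

# Percentage of sentences addressing each category, per JIF group --
# the quantity the study associates with the impact factor.
pct = (sentences.groupby("jif_group")["category"]
                .value_counts(normalize=True)
                .mul(100)
                .rename("percent")
                .reset_index())
print(pct)
```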

 

Flukt fra tidsskrift: Redaktører og flertall i redaksjonsråd trekker seg [Flight from a journal: editors and a majority of the editorial board resign]

From Google’s English:  “This is not a problematic journal, it is not a rogue journal, but a journal that is itself about publishing cannot afford to be suspected of doing anything wrong.

The words come from university librarian Jan Erik Frantsvåg at UiT The Arctic University of Norway.

They come after key people have resigned from the journal Publications, which is published by MDPI, the world’s largest publisher of open-access journals.

The university librarian is one of those who has resigned as a member of the journal’s editorial board. Senior advisor Craig Aaen-Stockdale at BI and Professor Oscar Westlund at OsloMet have done the same.

They are not alone.

“When seemingly insurmountable conflicts arise between publishers and academics over the direction of a journal, withdrawing support is often the only course of action we are left with,” says a letter from 23 of those who have resigned from the editorial board, including the three Norwegians. …

So what’s behind it?

According to Frantsvåg, the editors felt they were not heard when they raised problems on behalf of the editorial board. In an article in Khrono today, the three Norwegians on the editorial board write that the dispute was, among other things, about ensuring that the journal’s reputation should not depend on what happens in other MDPI journals.

They further write that the editors found themselves measured by “simple measures of success, such as the Journal Impact Factor and other bibliometric measures”. The editorial board reportedly stated repeatedly that the use of such measures runs contrary to the DORA declaration, which already ten years ago warned against quantitative measures, not least the Journal Impact Factor….”

An Index, A Publisher and An Unequal Global Research Economy | CGHE

“This is the story of how a publisher and a citation index turned the science communication system into a highly profitable global industry. Over the course of seventy years, academic journal articles have become commodities, and their meta-data a further source of revenue. It begins in Washington at the end of the Second World War, when the US Government agrees a massive increase in funding for research, after Vannevar Bush champions basic research as the ‘pacemaker of technological progress’. The resulting post-war growth in scientific publishing creates opportunities for information scientists and publishers alike. During the 1950s, two men – Robert Maxwell and Eugene Garfield – begin to experiment with their blueprint for the research economy. Maxwell created an ‘international’ publisher – Pergamon Press – charming the editors of elite, not-for-profit society journals into signing commercial contracts. Garfield invented the science citation index to help librarians manage this growing flow of knowledge. Over time, the index gradually became commercially viable as universities and publishers used it to measure the ‘impact’ of their researchers and journals.

Sixty years later, the global science system has become a citation economy, with academic credibility mediated by the currency produced by the two dominant commercial citation indexes: Elsevier’s Scopus and Clarivate’s Web of Science. The reach of these citation indexes and their data analytics is amplified by digitisation, computing power and financial investment. Scholarly reputation is now increasingly measured by journal rankings, ‘impact factors’ and ‘h-indexes’. Non-Anglophone journals are disproportionately excluded from these indexes, reinforcing the stratification of academic credibility geographies and endangering long-established knowledge ecosystems. Researchers in the majority world are left marginalised and have no choice but to go ever faster, resorting to research productivism to keep up. The result is an integrity-technology ‘arms race’. Responding to media stories about a crisis of scientific fraud, publishers and indexes turn to AI tools to deal with what is seen as an epidemic of academic ‘gaming’ and manipulation.

Do the unfettered growth in publishing ‘outputs’, moral panics over research integrity and widening global divides signal a science system in crisis? And is the ‘Open Science’ vision under threat, as the ‘author-pays’ publishing business model becomes dominant? With the scientific commons now largely reliant on citations as its currency, the future of science communication is far from certain.”

Open Access Advantages as a Function of the Discipline: Mixed-methods Study – ScienceDirect

Abstract:  Purpose

This mixed-methods study integrates bibliometric and altmetric investigation with a qualitative method in order to assess the prevalence and societal impact of Open Access (OA) publications, and to reveal the considerations behind researchers’ decisions to publish articles in closed and open access.

Design/methodology/approach

The bibliometric-altmetric study analyzed 584 OA and closed publications published between 2014 and 2019 by 40 Israeli researchers: 20 from STEM (Science, Technology, Engineering, Math) disciplines and 20 from SSH (Social Sciences and Humanities) disciplines. We used a multistage cluster sampling method to select a representative sample for the STEM disciplines group (engineering, computer science, biology, mathematics, and physics) and for the SSH disciplines group (sociology, economics, psychology, political science, and history). Required data were extracted from the Scopus and Unpaywall databases and the PlumX platform. Among the 40 researchers selected for the bibliometric-altmetric study, 20 agreed to be interviewed.
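A minimal sketch of a multistage cluster sample along these lines, with an invented population: disciplines are sampled first as clusters, then researchers within each selected discipline (the counts mirror the study's five disciplines and four researchers per group, but the IDs are made up):

```python
import random

random.seed(42)

# Invented population: researcher IDs keyed by discipline (the clusters).
population = {
    "engineering":      [f"eng_{i}" for i in range(50)],
    "computer science": [f"cs_{i}" for i in range(50)],
    "biology":          [f"bio_{i}" for i in range(50)],
    "mathematics":      [f"math_{i}" for i in range(50)],
    "physics":          [f"phy_{i}" for i in range(50)],
}

# Stage 1: sample the clusters (disciplines).
disciplines = random.sample(list(population), k=5)

# Stage 2: sample researchers within each selected cluster
# (4 per discipline gives the study's 20 researchers per group).
sample = {d: random.sample(population[d], k=4) for d in disciplines}

for discipline, researchers in sample.items():
    print(discipline, researchers)
```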

Findings

Comparing bibliometrics and altmetrics for the publications overall did not reveal any significant differences between OA and closed publications; differences emerged only when comparing OA and closed publications across disciplines. STEM researchers published 59% of their publications in OA, compared to just 29% among those in SSH, and significantly more bibliometric and altmetric citations accrued to SSH OA publications and to those researchers’ own closed-access publications. The altmetrics findings indicate that researchers are well acquainted with and active on social media. However, according to the interviewees, sharing research findings on social media brings no academic credit; it is viewed as a “public service”. Researchers’ primary consideration for publishing in closed or open access was the journal impact factor.

Research limitations/implications

Our findings contribute to the growing body of research that addresses OA citation and societal-impact advantages. The findings suggest the need to adopt an OA policy after a thorough assessment of the consequences for SSH disciplines.

Journal impact factors and the future of open access publishing – Richardson – Journal of Applied Clinical Medical Physics – Wiley Online Library

“Publishers and scientific journals face many challenges from the widespread use of the internet and the development of open access. It is not a perfect system, and many criticisms are valid. Reviews take a long time and are subject to bias. Reviewers are unrewarded for their efforts. Journal impact factors are becoming archaic, but no metric is perfect. New tools are being developed, but editors are not yet sure how to incorporate them into the process. All of these challenges will be faced by the JACMP and other open access journals. On an aspirational note, in 2018 the European Commission and European Research Council launched “cOAlition S,” an initiative (Plan S) that supports worldwide open access for research funded by public grants. Among others, the World Health Organization and the Bill & Melinda Gates Foundation are funders of Plan S. If enough entities agree that this is the correct path forward, we may see all journal platforms become open access, solving some of the financial problems therein.”

A Review and Assessment of Open Access Surgery Journals – Journal of Surgical Research

Abstract:  Introduction

Open access publishing has exhibited rapid growth in recent years. However, there is uncertainty surrounding the quality of open access journals and their ability to reach target audiences. This study reviews and characterizes open access surgical journals.

Materials and methods

The Directory of Open Access Journals (DOAJ) was used to search for open access surgical journals. PubMed indexing status, impact factor, article processing charge (APC), initial year of open access publishing, average weeks from manuscript submission to publication, publisher, and peer-review processes were evaluated.

Results

Ninety-two open access surgical journals were identified. Most (n = 49, 53.3%) were indexed in PubMed. Journals established for more than 10 years were more likely to be indexed in PubMed than journals established for fewer than 5 years (28 of 41 [68.3%] versus 4 of 20 [20%], P < 0.001). Forty-four journals (47.8%) used a double-blind review method, and 49 (53.2%) received an impact factor for 2021, ranging from <0.1 to 10.2 (median 1.4). The median APC was $362 USD (interquartile range $0 to $1,802); 35 journals (38%) did not charge a processing fee. There was a significant positive correlation between APC and impact factor (r = 0.61, P < 0.001). If accepted, the median time from manuscript submission to publication was 12 weeks.

Conclusions

Open access surgical journals are largely indexed in PubMed, have transparent review processes, employ variable APCs (including no publication fees), and proceed efficiently from submission to publication. These results should increase readers’ confidence in the quality of surgical literature published in open access journals.
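The summary statistics the abstract reports (a median APC with interquartile range, and an APC-impact factor correlation) are straightforward to reproduce. The nine data points below are invented stand-ins for the study's 92 journals, and the use of Pearson's r is my assumption based on the reported "r = 0.61":

```python
import numpy as np
from scipy import stats

# Invented APC (USD) and impact factor pairs for illustration only.
apc = np.array([0, 0, 250, 362, 500, 1200, 1802, 2500, 3000])
impact = np.array([0.3, 0.8, 1.0, 1.4, 1.6, 2.8, 3.5, 6.0, 10.2])

median_apc = np.median(apc)
q1, q3 = np.percentile(apc, [25, 75])
r, p = stats.pearsonr(apc, impact)  # assumed Pearson, per the reported r

print(f"Median APC: ${median_apc:.0f} (IQR ${q1:.0f} to ${q3:.0f})")
print(f"APC vs impact factor: r = {r:.2f}, p = {p:.3g}")
```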

Distortion of journal impact factors in the era of paper mills: Molecular Therapy

Abstract:  Academia’s obsession with the journal impact factor has been a subject of debate for some time. Most would probably agree that it is useful as a crude measure of a journal’s prestige, quality, and general influence on a scientific or medical field but should not be overinterpreted. Nonetheless, some institutions go as far as disregarding a student’s or faculty member’s publications in journals with impact factors less than a certain number (often the magic number is 5) when it comes to performance evaluation, promotion, graduation, or hiring. Such overemphasis ignores that one journal with a lower impact factor may actually have more rigorous standards for acceptance of a paper than another with a higher impact factor. This situation may be observed for a variety of reasons, such as the degree of specialization of a journal or the ratio of review articles vs. original research papers. Another more nefarious contributor to a journal’s impact factor, manipulated citations, is also growing and threatening to expose the deepening cracks in the foundation of academia’s favorite metric.
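For context, the metric at issue is the standard two-year ratio (the usual Clarivate definition, stated here for reference rather than drawn from the editorial):

```latex
\mathrm{JIF}_{Y} =
  \frac{\text{citations in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}
       {\text{citable items published in } Y-1 \text{ and } Y-2}
```

Both distortions the editorial describes act directly on this ratio: a higher share of heavily cited review articles lifts the numerator, and manipulated citations inflate it outright, without any change in the underlying quality of the research.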

 

Citation pattern of open access and toll-based research articles in the field of biological and physical sciences: a comparative study | Emerald Insight

Purpose

The purpose of this paper is to determine the relationship between the access mode of research articles [Open Access (OA) and Toll-Access (TA)] and their subsequent citation counts in Biological and Physical Sciences in three Impact factor zones (High, Medium and Low).

Design/methodology/approach

Three subjects each from Biological Sciences (Biochemistry, Cell Biology and Genetics) and Physical Sciences (Astronomy, Oceanography and Optics) were selected for the study. A comprehensive list of journals (TA and OA) in the selected subjects was prepared by consulting Journal Citation Reports’ Master Journal List (for the compilation of both the Open Access and Toll Access journal lists) and the Directory of Open Access Journals (for the compilation of the Open Access journal list). For each journal, essential details such as content language, format, year of publication, and access mode (Open Access or Toll Access) were obtained from Ulrich’s Periodical Directory. Web of Science (WoS) was used as the citation indexing tool; the data set was run against WoS to collect the citation data.

Findings

The results of the study indicate that open access is not a prerequisite for a citation boost: in the majority of cases in this study, TA articles garnered more citations than OA articles across the different impact factor zones in Biological and Physical Sciences.

Originality/value

A novel approach has been adopted to understand and compare the research impact of open access (OA) and toll access (TA) journal articles in Biological and Physical Sciences at three impact factor zone levels, examining citation metrics along three parameters: citedness, average citation count, and year-wise distribution of citations in the selected subjects.

Anchoring effects in the assessment of papers: An empirical survey of citing authors | PLOS ONE

Abstract:  In our study, we have empirically studied the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in whether the assessment of a paper can be influenced by numerical information that acts as an anchor (e.g. citation impact). We undertook a survey of corresponding authors with an available email address in the Web of Science database. The authors were asked to assess the quality of papers that they had cited in previous papers. Some authors were assigned to one of three treatment groups that received further information alongside the cited paper: citation impact information, information on the publishing journal (journal impact factor), or a numerical access code to enter the survey. The control group did not receive any further numerical information. We are interested in whether adjustments in the assessments can be produced not only by quality-related information (citation impact or journal impact) but also by numbers unrelated to quality, i.e. the access code. Our results show that the quality assessments of papers seem to depend on the citation impact information for single papers. The other anchors, an arbitrary number (the access code) and journal impact information, did not play an important role in the assessments. The results point to a possible anchoring bias caused by insufficient adjustment: respondents appear to have assessed cited papers differently when they saw paper-level impact values in the survey. We conclude that initiatives aiming to reduce the use of journal impact information in research evaluation either have already succeeded or overestimated the influence of this information.
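The abstract does not state which test was used; below is a minimal sketch of one conventional way to compare quality ratings across the control group and the three anchor groups, with invented 1-5 ratings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented 1-5 quality ratings for the four survey arms described:
# control (no number), citation-impact anchor, journal-impact anchor,
# and an arbitrary-number (access code) anchor.
control      = rng.integers(1, 6, size=100)
paper_impact = np.minimum(rng.integers(2, 7, size=100), 5)  # nudged upward
journal_if   = rng.integers(1, 6, size=100)
access_code  = rng.integers(1, 6, size=100)

# One-way ANOVA across arms; a significant result would suggest the
# anchors shifted assessments, as the study reports for paper impact.
f, p = stats.f_oneway(control, paper_impact, journal_if, access_code)
print(f"F = {f:.2f}, p = {p:.3g}")
```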

 

Fast-growing open-access journals stripped of coveted impact factors | Science | AAAS

“Nearly two dozen journals from two of the fastest growing open-access publishers, including one of the world’s largest journals by volume, will no longer receive a key scholarly imprimatur. On 20 March, the Web of Science database said it delisted the journals along with dozens of others, stripping them of an impact factor, the citation-based measure of quality that, although controversial, carries weight with authors and institutions. The move highlights continuing debate about a business model marked by high volumes of articles, ostensibly chosen for scientific soundness rather than novelty, and the practice by some open-access publishers of recruiting large numbers of articles for guest-edited special issues.

The Web of Science Master Journal List, run by the analytics company Clarivate, lists journals based on 24 measures of quality, including effective peer review and adherence to ethical publishing practices, and periodically checks that listed journals meet the standards. Clarivate calculates impact factors for a select subset of journals on the list. The company expanded quality checks this year because of “increasing threats to the integrity of the scholarly record,” Web of Science’s Editor-in-Chief Nandita Quaderi says. The company removed 50 journals from the list, an unusually large number for a single year, and Clarivate said it is continuing to review 450 more, assisted by an artificial intelligence (AI) tool….”
