Fast-growing open-access journals stripped of coveted impact factors | Science | AAAS

“Nearly two dozen journals from two of the fastest growing open-access publishers, including one of the world’s largest journals by volume, will no longer receive a key scholarly imprimatur. On 20 March, the Web of Science database said it delisted the journals along with dozens of others, stripping them of an impact factor, the citation-based measure of quality that, although controversial, carries weight with authors and institutions. The move highlights continuing debate about a business model marked by high volumes of articles, ostensibly chosen for scientific soundness rather than novelty, and the practice by some open-access publishers of recruiting large numbers of articles for guest-edited special issues.

The Web of Science Master Journal List, run by the analytics company Clarivate, lists journals based on 24 measures of quality, including effective peer review and adherence to ethical publishing practices, and periodically checks that listed journals meet the standards. Clarivate calculates impact factors for a select subset of journals on the list. The company expanded quality checks this year because of “increasing threats to the integrity of the scholarly record,” Web of Science’s Editor-in-Chief Nandita Quaderi says. The company removed 50 journals from the list, an unusually large number for a single year, and Clarivate said it is continuing to review 450 more, assisted by an artificial intelligence (AI) tool….”

Nearly 20 Hindawi journals delisted from leading index amid concerns of papermill activity – Retraction Watch

“Nineteen journals from the open-access publisher Hindawi were removed from Clarivate’s Web of Science Monday when the indexer refreshed its Master Journal List. 

The delistings follow a disclosure by Wiley, which bought Hindawi in 2021, that the company suspended publishing special issues for three months because of “compromised articles.” That lost the company $9 million in revenue….

Delisting 50 journals at once is more than usual for Clarivate, and may be the beginning of a larger culling. Quaderi wrote that the company developed an AI tool “to help us identify outlier characteristics that indicate that a journal may no longer meet our quality criteria.” The tool flagged more than 500 journals at the beginning of this year, according to her blog post, and Web of Science’s editors continue to investigate them….”

Changes in the absolute numbers and proportions of open access articles from 2000 to 2021 based on the Web of Science Core Collection: a bibliometric study

Purpose:
The ultimate goal of current open access (OA) initiatives is for library services to use OA resources. This study aimed to assess the infrastructure for OA scholarly information services by tabulating the number and proportion of OA articles in a literature database.
Method:
We measured the absolute numbers and proportions of OA articles at different time points across various disciplines based on the Web of Science (WoS) database.
Results:
The number (proportion) of available OA articles between 2000 and 2021 in the WoS database was 12 million (32.4%). The number (proportion) of indexed OA articles in 1 year was 0.15 million (14.6%) in 2000 and 1.5 million (48.0%) in 2021. The proportion of OA by subject categories in the cumulative data was the highest in the multidisciplinary category (2000–2021, 79%; 2021, 89%), high in natural sciences (2000–2021, 21%–46%; 2021, 41%–62%) and health and medicine (2000–2021, 37%–40%; 2021, 52%–60%), and low in social sciences and others (2000–2021, 23%–32%; 2021, 36%–44%), engineering (2000–2021, 17%–33%; 2021, 31%–39%) and humanities and arts (2000–2021, 11%–22%; 2021, 28%–38%).
Conclusion:
Our study confirmed that increasingly many OA research papers have been published in the last 20 years, and the recent data show considerable promise for better services in the future. The proportions of OA articles differed among scholarly disciplines, and designing library services necessitates several considerations with regard to the customers’ demands, available OA resources, and strategic approaches to encourage the use of scholarly OA articles.

Escaping ‘bibliometric coloniality’, ‘epistemic inequality’

“Africa’s scholarly journals compete on an unequal playing field because of a lack of funding and the struggle to sustain academic credibility.

“These inequalities are exacerbated by the growing influence of the major citation indexes, leading to what we have called bibliometric coloniality,” say the authors of the book, Who Counts? Ghanaian academic publishing and global science, published by African Minds at the start of 2023.

“The rules of the game continue to be defined outside the continent. We hope that, in some small way, this book contributes to the renaissance and renewal of African-centred research and publishing infrastructures,” the authors say….”

Recalibrating the Scope of Scholarly Publishing: A Modest Step in a Vast Decolonization Process | Quantitative Science Studies | MIT Press

Abstract:  By analyzing 25,671 journals largely absent from common journal counts, as well as from Web of Science and Scopus, this study demonstrates that scholarly communication is more of a global endeavor than is commonly credited. These journals, employing the open source publishing platform Open Journal Systems (OJS), have published 5.8 million items; they are published in 136 countries, with 79.9% in the Global South and 84.2% following the OA diamond model (charging neither reader nor author). A substantial proportion of journals operate in more than one language (48.3%), with research published in a total of 60 languages (led by English, Indonesian, Spanish, and Portuguese). The journals are distributed across the social sciences (45.9%), STEM (40.3%), and the humanities (13.8%). For all their geographic, linguistic, and disciplinary diversity, only 1.2% are indexed in the Web of Science and 5.7% in Scopus. On the other hand, 1.0% are found in Cabells Predatory Reports, while 1.4% show up in Beall’s questionable list. This paper seeks both to document and to historically situate the expanded scale and diversity of scholarly publishing, in the hope that this recognition may assist humankind in taking full advantage of what is increasingly a global research enterprise.

 

Analyzing Your Institution’s Publishing Output

Abstract:  Understanding institutional publishing output is crucial to scholarly communications work. This class will equip participants to analyze article publishing by authors at an institution.

After completing the course, participants will be able to

Gain an understanding of their institution’s publishing output, such as number of publications per year, open access status of the publications, major funders of the research, and estimates of how much funding might be spent toward article processing charges (APCs).
Think critically about institutional publishing data to make sustainable and values-driven scholarly communications decisions.

This course will build on open infrastructure, including Unpaywall and OpenRefine. We will provide examples of how to do analyses in both OpenRefine and Microsoft Excel. 
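The record-level work the course describes can be sketched in a few lines. Unpaywall returns one JSON record per DOI, and its `is_oa` and `oa_status` fields are what an OA-status tally rests on. The sample records and DOIs below are invented for illustration; in practice they would come from the Unpaywall API or a CSV export loaded into OpenRefine or Excel.

```python
# Sketch of an OA-status tally over Unpaywall-shaped records
# (fields: doi, is_oa, oa_status). Sample data is invented.
from collections import Counter

def tally_oa(records):
    """Return (overall OA share, counts by oa_status) for a record list."""
    status_counts = Counter(r.get("oa_status", "closed") for r in records)
    oa = sum(1 for r in records if r.get("is_oa"))
    share = oa / len(records) if records else 0.0
    return share, status_counts

sample = [
    {"doi": "10.1234/a", "is_oa": True,  "oa_status": "gold"},
    {"doi": "10.1234/b", "is_oa": True,  "oa_status": "green"},
    {"doi": "10.1234/c", "is_oa": False, "oa_status": "closed"},
    {"doi": "10.1234/d", "is_oa": True,  "oa_status": "hybrid"},
]

share, by_status = tally_oa(sample)
print(f"OA share: {share:.0%}")  # OA share: 75%
print(dict(by_status))
```

The same grouping, extended with funder or journal fields, answers the per-year and per-funder questions the course lists.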

The course will consist of two parts. In the first, participants will learn how to build a dataset. We will provide lessons about downloading data from different sources: Web of Science, Scopus, and The Lens. (Web of Science and Scopus are subscription databases; The Lens is freely available.) 

In the second part of the course, participants will learn data analysis methods that can help answer questions such as:

Should you cancel or renew a subscription?
Who is funding your institution’s researchers?
Are your institution’s authors using an institutional repository?
Should you accept a publisher’s open access publishing offer?

Library agreements with publishers are at a crucial turning point, as they increasingly include OA publishing. By learning to do these analyses for themselves, participants will be better prepared to enter negotiations with a publisher. The expertise developed through this course can make the uneven playing field of library-publisher negotiations slightly more even.

Course materials will be openly available. This will be a facilitated course taught by the authors.

The effect of data sources on the measurement of open access: A comparison of Dimensions and the Web of Science

Abstract:  With the growing number of open access (OA) mandates, the accurate measurement of OA publishing is an important policy issue. Existing studies have provided estimates of the prevalence of OA publications ranging from 27.9% to 53.7%, depending on the data source and period of investigation. This paper aims at providing a comparison of the proportion of OA publishing as represented in two major bibliometric databases, Web of Science (WoS) and Dimensions, and assesses how the choice of database affects the measurement of OA across different countries. Results show that a higher proportion of publications indexed in Dimensions are OA than those indexed by WoS, and that this is particularly true for publications originating from outside North America and Europe. The paper concludes with a discussion of the cause and consequences of these differences, motivating the use of more inclusive databases when examining OA, especially for publications originating beyond North America and Europe.
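The country-level comparison the paper performs reduces to grouping publications by database and country and computing the OA share in each cell. A minimal sketch with invented counts (the real figures would come from WoS and Dimensions exports, not these toy records):

```python
# Minimal illustration of a database-by-country OA comparison.
# The counts below are invented, not the paper's data.
def oa_share(pubs, database, country):
    """OA proportion among pubs from one country as indexed in one database."""
    subset = [p for p in pubs if p["db"] == database and p["country"] == country]
    if not subset:
        return None
    return sum(p["is_oa"] for p in subset) / len(subset)

pubs = (
    [{"db": "Dimensions", "country": "BR", "is_oa": True}] * 6
    + [{"db": "Dimensions", "country": "BR", "is_oa": False}] * 4
    + [{"db": "WoS", "country": "BR", "is_oa": True}] * 3
    + [{"db": "WoS", "country": "BR", "is_oa": False}] * 7
)

print(oa_share(pubs, "Dimensions", "BR"))  # 0.6
print(oa_share(pubs, "WoS", "BR"))         # 0.3
```

A gap like this between the two cells for the same country is exactly the database-induced difference the paper reports, since the two indexes cover different journal sets.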

 

Data sources and their effects on the measurement of open access. Comparing Dimensions with the Web of Science

With the number of open access (OA) mandates at the funder and institution level growing, the accurate measurement of OA publishing is an important policy question. Existing studies have provided estimates of the prevalence of OA publications ranging from 27.9% to 53.7%, depending on the data source and period of investigation. This paper aims at providing a comparison of the proportion of OA publishing as represented in two bibliometric databases, Web of Science (WoS) and Dimensions, and assesses how the choice of database affects the measurement of OA across different countries. Results show that publications indexed in Dimensions have a higher percentage of OA than those indexed by the WoS, especially for publications from outside North America and Europe. The paper concludes with a discussion of the cause and consequences of these differences, motivating the use of more inclusive databases when examining OA, especially for publications beyond North America and Europe.

Open search tools need sustainable funding – Research Professional News

“The Covid-19 pandemic has triggered an explosion of knowledge, with more than 200,000 papers published to date. At one point last year, scientific output on the topic was doubling every 20 days. This huge growth poses big challenges for researchers, many of whom have pivoted to coronavirus research without experience or preparation.

Mainstream academic search engines are not built for such a situation. Tools such as Google Scholar, Scopus and Web of Science provide long, unstructured lists of results with little context.

These work well if you know what you are looking for. But for anyone diving into an unknown field, it can take weeks, even months, to identify the most important topics, publication venues and authors. This is far too long in a public health emergency.

The result has been delays, duplicated work, and problems with identifying reliable findings. This lack of tools to provide a quick overview of research results and evaluate them correctly has created a crisis in discoverability itself. …

Building on these, meta-aggregators such as Base, Core and OpenAIRE have begun to rival and in some cases outperform the proprietary search engines. …”

Google Scholar, Web of Science, and Scopus: Which is best for me? | Impact of Social Sciences

“Being able to find, assess and place new research within a field of knowledge, is integral to any research project. For social scientists this process is increasingly likely to take place on Google Scholar, closely followed by traditional scholarly databases. In this post, Alberto Martín-Martín, Enrique Orduna-Malea, Mike Thelwall, and Emilio Delgado-López-Cózar analyse the relative coverage of the three main research databases, Google Scholar, Web of Science and Scopus, finding significant divergences in the social sciences and humanities and suggest that researchers face a trade-off when using different databases: between more comprehensive, but disorderly systems and orderly, but limited systems….”

The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis | SpringerLink

Abstract:  Traditionally, Web of Science and Scopus have been the two most widely used databases for bibliometric analyses. However, during the last few years some new scholarly databases, such as Dimensions, have come up. Several previous studies have compared different databases, either through a direct comparison of article coverage or by comparing the citations across the databases. This article aims to present a comparative analysis of the journal coverage of the three databases (Web of Science, Scopus and Dimensions), with the objective of describing, understanding and visualizing the differences between them. The most recent master journal lists of the three databases are used for analysis. The results indicate that the databases have significantly different journal coverage, with Web of Science being the most selective and Dimensions the most exhaustive. About 99.11% and 96.61% of the journals indexed in Web of Science are also indexed in Scopus and Dimensions, respectively. Scopus has 96.42% of its indexed journals also covered by Dimensions. The Dimensions database has the most exhaustive journal coverage, with 82.22% more journals than Web of Science and 48.17% more journals than Scopus. This article also analysed the research outputs for 20 selected countries for the 2010–2018 period, as indexed in the three databases, and identified database-induced variations in research output volume, rank, global share and subject area composition for different countries. It is found that there are clearly visible variations in the research output from different countries in the three databases, along with differential coverage of different subject areas by the three databases. The analytical study provides an informative and practically useful picture of the journal coverage of the Web of Science, Scopus and Dimensions databases.