Jamali, Wakeling & Abbasi (2022) Why do journals discontinue? A study of Australian ceased journals | Learned Publishing – Wiley Online Library

Jamali, H.R., Wakeling, S. and Abbasi, A. (2022), Why do journals discontinue? A study of Australian ceased journals. Learned Publishing, 35: 219-228. https://doi.org/10.1002/leap.1448

 

Abstract: Little is known about why journals discontinue, despite the significant implications of discontinuation. We present an analysis of 140 Australian journals that ceased between 2011 and mid-2021, together with the results of a survey of the editors of 53 of them. The average age of journals at cessation was 19.7 years (median = 16), with 57% being 10 years or older. About 54% of them belonged to educational institutions and 34% to non-profit organizations. In terms of subject, 75% of the journals belonged to the social sciences, humanities and arts. The survey showed that funding was an important reason for discontinuation, and a lack of quality submissions and a lack of support from the owners of the journal also played a role. Too much reliance on voluntary work appeared to be an issue for editorial processes. The dominant metric culture in the research environment and pressure for journals to perform well in journal rankings negatively affect local journals in attracting quality submissions. A fifth of journals indicated that they did not have a plan for the preservation of articles at the time of publication, and the current availability of the content of ceased journals appeared to be sub-optimal in many cases, with reliance on the websites of ceased journals or web-archive platforms.

 

 

Key points

 

One hundred and forty Australian journals ceased publishing between 2011 and 2020, with an average age of 19 years on cessation.
The majority of Australian journals that ceased publication in 2011–2020 were in the social sciences, humanities and arts, where local journals play an important role.
Funding was found to be a key reason for journal discontinuation, followed by a lack of support, a lack of quality submissions, and over-reliance on voluntary work.
A metric-driven culture and journal rankings adversely impact local journals and can lead to discontinuation.
Many journals have neither sustainable business models (or funding), nor a preservation plan, both of which jeopardize journal continuation and long-term access to archive content.

 

Rankings could undermine research-evaluation reforms – Research Professional News

“These European-level efforts will add to the gathering momentum for more rigorous and fairer ways to evaluate research. Thanks to initiatives such as the San Francisco Declaration on Research Assessment, more and more institutions are turning away from simplistic indicators such as journal impact factors as a measure of the quality of research.

The omission of rankings from the debate is worrying, given that the current design of rankings risks hindering efforts to improve research evaluation.

University rankings are meant to provide a comparison between institutions based on indicators such as citation counts and student-to-staff ratios. 

In reality, they have a tremendous impact on the public perception of the quality of institutions and their research. …

Rankings may not be directly linked to institutional funding and resources, but they affect these indirectly by swaying the choices of students, researchers, staff, institutions’ leaders, companies and global partners. 

Similar to journal impact factors, however, rankings provide an at-best-incomplete picture of a university’s quality. Simple changes in the way their data—mostly quantitative indicators and methods—are populated, collected and analysed can affect the final results, as shown by institutions’ differing positions in different rankings.

Nuances and caveats are easily lost. To the public and the academic community, the visible thing is that a given university is “number one”. What this actually means often goes unquestioned—the only thing that matters is where one’s institution stands and how to climb higher.

In this quest for performance, leaders and managers in universities look at the different indicators used and how they might improve them within their institutions. One such indicator is the number of publications in high-impact journals….”

COIs Informational Sessions, 12 Jan. 2022 | Catalog of Open Infrastructure Services | Invest in Open Infrastructure

“Catalog of Open Infrastructure Services (COIs) Informational Sessions

Join us for one of our upcoming informational sessions, where we’ll share more about our process in developing COIs and solicit your thoughts. Participation is open; registration is required.

January 12, 2022 | 16.00 UTC / 24.00 UTC | Online

We recently announced the launch of our Catalog of Open Infrastructure Services (COIs). This resource is the culmination of research, interviews, and analysis of a sampling of open infrastructure projects serving the research community….”

The emergence of university rankings: a historical-sociological account | SpringerLink

Wilbers, S., Brankovic, J. The emergence of university rankings: a historical-sociological account. High Educ (2021). https://doi.org/10.1007/s10734-021-00776-7

Abstract

Nowadays, university rankings are a familiar phenomenon in higher education all over the world. But how did rankings achieve this status? To address this question, we bring in a historical-sociological perspective and conceptualize rankings as a phenomenon in history. We focus on the United States and identify the emergence of a specific understanding of organizational performance in the postwar decades. We argue that the advent of this understanding constituted a discursive shift, which was made possible—most notably but not solely—by the rise of functionalism to the status of a dominant intellectual paradigm. The shift crystallized in the rankings of graduate departments, which were commissioned by the National Science Foundation and produced by the American Council on Education (ACE) in 1966 and 1970. Throughout the 1970s, social scientists became increasingly more interested in the methods and merits of ranking higher education institutions, in which they would explicitly refer to the ACE rankings. This was accompanied by a growing recognition, already in the 1970s, that rankings had a place and purpose in the higher education system—a trend that has continued into the present day.

What’s Your Tier? Introducing Library Partnership (LP) Certification for Journal Publishers · Series 1.3: Global Transition to Open

“Four categories and an open-ended response are the heart of LP [Library Partnership] certification for journal publishers: 

Access examines when and how the public can view an article and what barriers exist to author participation in publishing. There is substantial nuance in this category, in part because publishers often approach access very differently. In broad strokes, publishers that provide full and immediate open access (OA) across all journals earn more points than those that simply allow author-led open archiving. Publishers with no or low APCs earn points, as do publishers offering APC waivers for any of their journals (not forcing waiver-eligible authors to publish only in particular journals, that is, fully OA journals). 16 possible points. 

Rights focuses on author rights and reuse rights. Publishers that allow authors to retain all rights, or use Creative Commons licenses, earn points. 11 possible points.

Community considers ethical and business aspects of publishing. Points are awarded to nonprofit and society publishers. Legal actions against libraries or lobbying against OA do not earn points for a publisher, while evidence of transparency and responsible handling of user data do earn points. Membership in COPE is a plus. 12 possible points. 

Discoverability deals with the technical side of publishing. Publishers earn points through accessibility, ORCiD integration, participation in preservation organizations, and similar practices. 15 possible points.

The Open-Ended Response allows publishers to describe other actions they take to support equitable and open science/scholarship. 3 possible points. …”
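As described in the excerpt, the rubric is a simple additive score: 16 + 11 + 12 + 15 + 3 = 57 possible points across the five categories. The sketch below is illustrative only, assuming hypothetical category scores for an imaginary publisher; the excerpt does not say how totals map to tiers, so no cut-offs are assumed.

```python
# Illustrative sketch only: the category maxima come from the excerpt above;
# the function name, the example scores, and the absence of tier cut-offs are
# assumptions, not part of the LP certification documentation.

CATEGORY_MAXIMA = {
    "Access": 16,
    "Rights": 11,
    "Community": 12,
    "Discoverability": 15,
    "Open-Ended Response": 3,
}


def total_lp_score(scores: dict[str, int]) -> int:
    """Sum a publisher's category scores, clamping each to its stated maximum."""
    total = 0
    for category, maximum in CATEGORY_MAXIMA.items():
        awarded = scores.get(category, 0)
        total += max(0, min(awarded, maximum))  # keep each score within 0..maximum
    return total


# Hypothetical publisher: 12/16 Access, 11/11 Rights, 8/12 Community,
# 10/15 Discoverability, 2/3 Open-Ended Response.
example = {
    "Access": 12,
    "Rights": 11,
    "Community": 8,
    "Discoverability": 10,
    "Open-Ended Response": 2,
}
print(total_lp_score(example), "of", sum(CATEGORY_MAXIMA.values()))  # 43 of 57
```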

Gadd (2021) Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up | Frontiers in Research Metrics and Analytics

Gadd, E. (2021). Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up. Frontiers in Research Metrics and Analytics. https://doi.org/10.3389/frma.2021.680023

Abstract: Draws parallels between the problematic use of GDP to evaluate economic success with the use of global university rankings to evaluate university success. Inspired by Kate Raworth’s Doughnut Economics, this perspective argues that the pursuit of growth as measured by such indicators creates universities that ‘grow’ up the rankings rather than those which ‘thrive’ or ‘mature.’ Such growth creates academic wealth divides within and between countries, despite the direction of growth as inspired by the rankings not truly reflecting universities’ critical purpose or contribution. Highlights the incompatibility between universities’ alignment with socially responsible practices and continued engagement with socially irresponsible ranking practices. Proposes four possible ways of engendering change in the university rankings space. Concludes by calling on leaders of ‘world-leading’ universities to join together to ‘lead the world’ in challenging global university rankings, and to set their own standards for thriving and maturing universities.

TRANSPARENT RANKING: All Repositories (August 2021) | Ranking Web of Repositories

During the last months, we realized that the indexing of records of several open access repositories by Google Scholar is not as complete as previously, without a clear reason. From the experience of a few cases, it looks like GS penalizes errors in the metadata descriptions, so it is important for the affected repositories to check their level of indexing and to try to identify potential problems. Please consider the following GS indexing guidelines (https://scholar.google.com/intl/en/scholar/inclusion.html and https://www.or2015.net/wp-content/uploads/2015/06/or-2015-anurag-google-scholar.pdf) and the following material: “Exposing Repository Content to Google Scholar”, “A few suggestions for improving the web visibility of the contents of your institutional OA repository”, and “Altmetrics of the Open Access Institutional Repositories: A Webometrics Approach”. As a service for the OA community, we are providing five lists of repositories (all (institutional + subject), institutional, portals, data, and CRIS) with the raw numbers of records in GS for their web domains (site:xxx.yyy.zz, excluding citations and patents), ranked by decreasing number of items as collected during the second week of August 2021. The list is still incomplete as we are still adding new repositories.

Open Science rankings: yes, no, or not this way? A debate on developing and implementing transparency metrics. – JOTE | Journal of Trial and Error

“The Journal of Trial and Error is proud to present an exciting and timely event: a three-way debate on the topic of Open Science metrics, specifically, transparency metrics. Should we develop these metrics? What purposes do they fulfil? How should Open Science practices be encouraged? Are (transparency) rankings the best solution? These questions and more will be addressed in a dynamic and interactive debate with three researchers of different backgrounds: Etienne LeBel (Independent Meta-Scientist and founder of ERC-funded project ‘Curate Science’), Sarah de Rijcke (Professor of Science and Evaluation Studies and director of the Centre for Science and Technology Studies at Leiden University), and Juliëtte Schaafsma (Professor of Cultural Psychology at Tilburg University and fierce critic of rankings and audits). This is an event organized by the Journal of Trial and Error, and supported by the Open Science Community Tilburg, the Centre for Science and Technology Studies (CWTS, Leiden University), and the Open Science Community Utrecht.”

 

Ural federal university: The University’s Open Archive has Risen Again in Repository Rankings – India Education | Latest Education News | Global Educational News | Recent Educational News

“In the ranking of institutional repositories, the university’s archive has risen two positions and is ranked 26th in the world out of more than 3,100 other resources. Moreover, the Ural Federal University archive continues to hold first place in Russia among institutional archives….”

University Rankings and Governance by Metrics and Algorithms | Zenodo

Abstract: This paper looks closely at how data analytic providers leverage rankings as part of their strategies to further extract rent and assets from the university beyond their traditional roles as publishers and citation data providers. Multinational publishers such as Elsevier, which has over 2,500 journals in its portfolio, have transitioned to become data analytic firms. Rankings expand their ability to further monetize their existing journal holdings, as there is a strong association between publication in high-impact journals and improvement in rankings. The global academic publishing industry has become highly oligopolistic, and a small handful of legacy multinational firms now publish the majority of the world’s research output (see Larivière et al., 2015; Fyfe et al., 2017; Posada & Chen, 2018). It is therefore crucial that their roles and enormous market power in influencing university rankings be more closely scrutinized. We suggest that, due to a combination of a lack of transparency regarding, for example, Elsevier’s data services and products and their self-positioning as a key intermediary in the commercial rankings business, they have managed to evade the social responsibilities and scrutiny that come with occupying such a critical public function in university evaluation. As the quest for ever-higher rankings often works in conflict with universities’ public missions, it is critical to raise questions about the governance of such private digital platforms and the compatibility between their private interests and the maintenance of universities’ public values.

 

Indonesia is number one in the world for open access journal publishing: what this means for the local research ecosystem

“With the largest number of OA journals in the world, the knowledge produced by Indonesian researchers should be able to reach the public freely.

The government has started to realize this.

This is evidenced by the recent Law on the National Science and Technology System (UU Sisnas Science and Technology), which has begun requiring the application of this open access system for research publications to ensure that research results can be enjoyed by the public.

Through this obligation, the government hopes to encourage not only the transparency of the research process, but also innovations and new findings that benefit society….

According to our records, the research publication system in Indonesia has operated on a non-profit principle since the 1970s. At that time, research publications were sold for a subscription fee that was usually calculated from the cost of printing alone. This system is different from that found in developed countries, where publishing is dominated by commercial publishing companies.

This is where Indonesia triumphs over any other research ecosystem.

Some that can match it are the SciELO research ecosystem in Brazil, the African Journals Online (AJOL) scientific publishing ecosystem, and AfricArXiv from the African continent…”

Gaming the Metrics | The MIT Press

“The traditional academic imperative to “publish or perish” is increasingly coupled with the newer necessity of “impact or perish”—the requirement that a publication have “impact,” as measured by a variety of metrics, including citations, views, and downloads. Gaming the Metrics examines how the increasing reliance on metrics to evaluate scholarly publications has produced radically new forms of academic fraud and misconduct. The contributors show that the metrics-based “audit culture” has changed the ecology of research, fostering the gaming and manipulation of quantitative indicators, which lead to the invention of such novel forms of misconduct as citation rings and variously rigged peer reviews. The chapters, written by both scholars and those in the trenches of academic publication, provide a map of academic fraud and misconduct today. They consider such topics as the shortcomings of metrics, the gaming of impact factors, the emergence of so-called predatory journals, the “salami slicing” of scientific findings, the rigging of global university rankings, and the creation of new watchdogs and forensic practices.”

OA Monitoring: why do we get different results? – Digital Scholarship Leiden

“The differing percentages of OA can be explained by several factors: different stakeholders use different definitions of OA, different data sources, and different inclusion and exclusion criteria. But the precise nature of these differences is not always obvious to the casual reader.

In the next paragraphs we will look into the reports produced by three different monitors of institutional OA, namely, CWTS Leiden Ranking, the national monitoring in The Netherlands, and Leiden University Libraries’ own monitoring.

The EU Open Science Monitor also tracks trends in open access to publications, but because it does so only at the country level and not at the level of individual institutions, we have not included it in our comparison. However, the EU Monitor’s methodological note (including the annexes) explains its choice of sources.

We will end this blog post with a conclusion and our principles and recommendations….”