Wenaas (2022) Choices of immediate open access and the relationship to journal ranking and publish-and-read deals | Frontiers

Wenaas L (2022) Choices of immediate open access and the relationship to journal ranking and publish-and-read deals. Front. Res. Metr. Anal. 7:943932. doi: 10.3389/frma.2022.943932

The role of academic journals is significant in the reward system of science, which makes their rank important for researchers deciding where to submit. The study asks how choices of immediate gold and hybrid open access are related to journal ranking and how the uptake of immediate open access is affected by transformative publish-and-read deals, pushed by recent science policy. The data consist of 186,621 articles published with a Norwegian affiliation in the period 2013–2021, all of which appeared in journals ranked in a Norway-specific national ranking on one of two levels according to their importance, prestige, and perceived quality within a discipline. The results show that researchers chose to have their articles published as hybrid twice as often in journals on the most prestigious level as in journals on the normal level. The opposite effect was found with gold open access, which was chosen three times more often on the normal level than on the high level. This can be explained by the absence of highly ranked gold open access journals in many disciplines. With the introduction of publish-and-read deals, hybrid open access has been boosted and has become a popular choice, enabling researchers to publish open access in legacy journals. The results confirm the position of journals in the reward system of science and should inform policymakers about the effects of transformative arrangements and their costs relative to the overall level of open access.

 

Uses of the Journal Impact Factor in national journal rankings in China and Europe – Kulczycki – Journal of the Association for Information Science and Technology – Wiley Online Library

Abstract: This paper investigates different uses of the Journal Impact Factor (JIF) in national journal rankings and discusses the merits of supplementing metrics with expert assessment. Our focus is national journal rankings used as evidence to support decisions about the distribution of institutional funding or career advancement. The seven countries under comparison are China, Denmark, Finland, Italy, Norway, Poland, and Turkey—and the region of Flanders in Belgium. With the exception of Italy, the top tier in each national ranking comprises the journals classified at the highest level according to the tiers or points implemented. A total of 3,565 (75.8%) out of 4,701 unique top-tier journals were identified as having a JIF, with 55.7% belonging to the first Journal Impact Factor quartile. Journal rankings in China, Flanders, Poland, and Turkey classify journals with a JIF as top-tier, but only when they are in the first quartile of the Average Journal Impact Factor Percentile. Journal rankings that result from expert assessment in Denmark, Finland, and Norway regularly classify journals as top-tier outside the first quartile, particularly in the social sciences and humanities. We conclude that experts, when tasked with metric-informed journal rankings, take into account quality dimensions that are not covered by JIFs.

 

Barnett & Gadd (2022) University league tables have no legs to stand on | Significance

Barnett, A. and Gadd, E. (2022), University league tables have no legs to stand on. Significance, 19: 4-7. https://doi.org/10.1111/1740-9713.01663

What really makes one higher education institution “better” than another? The ranking of the world’s universities is big business built on a flimsy statistical approach, say Adrian Barnett and Elizabeth Gadd

 

Who games metrics and rankings? Institutional niches and journal impact factor inflation – ScienceDirect

Abstract:  Ratings and rankings are omnipresent and influential in contemporary society. Individuals and organizations strategically respond to incentives set by rating systems. We use academic publishing as a case study to examine organizational variation in responses to influential metrics. The Journal Impact Factor (JIF) is a prominent metric linked to the value of academic journals, as well as career prospects of researchers. Since scholars, institutions, and publishers alike all have strong interests in affiliating with high JIF journals, strategic behaviors to ‘game’ the JIF metric are prevalent. Strategic self-citation is a common tactic employed to inflate JIF values. Based on empirical analyses of academic journals indexed in the Web of Science, we examine institutional characteristics conducive to strategic self-citation for JIF inflation. Journals disseminated by for-profit publishers, with lower JIFs, published in academically peripheral countries and with more recent founding dates were more likely to exhibit JIF-inflating self-citation patterns. Findings reveal the importance of status and institutional logics in influencing metrics gaming behaviors, as well as how metrics can affect work outcomes in different types of institutions. While quantitative rating systems affect many who are being evaluated, certain types of people and organizations are more prone to being influenced by rating systems than others.

Successful Implementation of Open Access Strategies at Universities of Science & Technology – Strathprints

Abstract:  While the CWTS Leiden ranking has been available since 2011/2012, it is only in 2019 that a first attempt was made at ranking institutions by Open Access-related indicators. This was due to the arrival of Unpaywall as a tool to measure openly available institutional research outputs – either via the Green or the Gold OA routes – for a specific institution. The CWTS Leiden ranking by percentage of the institutional research output published Open Access effectively meant the first opportunity for institutions worldwide to be ranked by the depth of their Open Access implementation strategies brushing aside aspects like their size. This provided an interesting way to map the progress of CESAER Member institutions that were part of the Task Force Open Science 2020-2021 Open Access Working Group (OAWG) towards the objective stated by Plan S of achieving 100% Open Access of research outputs. The OAWG then set out to map the situation of the Member institutions represented in it on this Open Access ranking and to track their evolution on subsequent editions of this ranking. The idea behind this analysis was not so much to introduce an element of competition across institutions but to explore whether progress was taking place in the percentage of openly available institutional research outputs year on year. The results of this analysis – shown in figures within this paper for the 2019, 2020 and 2021 editions – show strong differences across Member institutions that are part of the OAWG. From internal discussions within the group, it became evident that these differences could be explained through a number of factors that contributed to a successful Open Access implementation at an institutional level. This provided the basis for this work. The document identifies four key factors that contribute to a successful OA implementation at institutions, and hence to achieving a good position on the CWTS Leiden ranking for Open Access.

 

Agreement on Reforming Research Assessment

“As signatories of this Agreement, we agree on the need to reform research assessment practices. Our vision is that the assessment of research, researchers and research organisations recognises the diverse outputs, practices and activities that maximise the quality and impact of research. This requires basing assessment primarily on qualitative judgement, for which peer review is central, supported by responsible use of quantitative indicators. Among other purposes, this is fundamental for: deciding which researchers to recruit, promote or reward, selecting which research proposals to fund, and identifying which research units and organisations to support….”

 

Mills (2022) Decolonial perspectives on global higher education: Disassembling data infrastructures, reassembling the field

David Mills (2022) Decolonial perspectives on global higher education: Disassembling data infrastructures, reassembling the field, Oxford Review of Education, DOI: 10.1080/03054985.2022.2072285

Abstract: The expansion of university systems across the planet over the last fifty years has led to the emergence of a new policy assemblage – ‘global higher education’ – that depends on the collection, curation and representation of quantitative data. In this paper I explore the use of data by higher education policy actors to sustain ‘epistemic coloniality’. Building on a rich genealogy of anticolonial, postcolonial and feminist scholarship, I show how decolonial theory can be used to critique dominant global higher education imaginaries and the data infrastructures they depend on. Tracing the history of these infrastructures, I begin with OECD’s creation of decontextualised educational ‘indicators’. I go on to track the policy impact of global university league tables owned by commercial organisations. They assemble and commensurate institutional data into rankings that become taken-for-granted ‘global’ policy knowledge. I end by exploring the policy challenge of building alternative socio-technical infrastructures, and finding new ways to value higher education.

 

Jamali, Wakeling & Abbasi (2022) Why do journals discontinue? A study of Australian ceased journals – Learned Publishing – Wiley Online Library

Jamali, H.R., Wakeling, S. and Abbasi, A. (2022), Why do journals discontinue? A study of Australian ceased journals. Learned Publishing, 35: 219-228. https://doi.org/10.1002/leap.1448

 

Abstract: Little is known about why journals discontinue, despite the significant implications. We present an analysis of 140 Australian journals that ceased from 2011 to mid-2021 and present the results of a survey of editors of 53 of them. The mean age of the journals at discontinuation was 19.7 years (median = 16), with 57% being 10 years or older. About 54% of them belonged to educational institutions and 34% to non-profit organizations. In terms of subject, 75% of the journals belonged to the social sciences, humanities and arts. The survey showed that funding was an important reason for discontinuation, and a lack of quality submissions and a lack of support from the owners of the journal also played a role. Over-reliance on voluntary work appeared to be an issue for editorial processes. The dominant metric culture in the research environment and pressure for journals to perform well in journal rankings negatively affect local journals in attracting quality submissions. A fifth of the journals indicated that they did not have a plan for the preservation of articles at the time of publication, and the current availability of the content of ceased journals appeared to be sub-optimal in many cases, relying on the websites of ceased journals or web-archive platforms.

 

 

Key points

 

One hundred and forty Australian journals ceased publishing between 2011 and 2020, with an average age of 19 years on cessation.
The majority of Australian journals that ceased publication 2011–2020 were in the social sciences, humanities and arts where local journals play an important role.
Funding was found to be a key reason for journal discontinuation, followed by lack of support, lack of quality submissions, and over-reliance on voluntary work.
Metric-driven culture and journal rankings adversely impact local journals and can lead to discontinuation.
Many journals have neither sustainable business models (or funding), nor a preservation plan, both of which jeopardize journal continuation and long-term access to archive content.

 

Rankings could undermine research-evaluation reforms – Research Professional News

“These European-level efforts will add to the gathering momentum for more rigorous and fairer ways to evaluate research. Thanks to initiatives such as the San Francisco Declaration on Research Assessment, more and more institutions are turning away from simplistic indicators such as journal impact factors as a measure of the quality of research.

The omission of rankings from the debate is worrying, given that the current design of rankings risks hindering efforts to improve research evaluation.

University rankings are meant to provide a comparison between institutions based on indicators such as citation counts and student-to-staff ratios. 

In reality, they have a tremendous impact on the public perception of the quality of institutions and their research. …

Rankings may not be directly linked to institutional funding and resources, but they affect these indirectly by swaying the choices of students, researchers, staff, institutions’ leaders, companies and global partners. 

Similar to journal impact factors, however, rankings provide an at-best-incomplete picture of a university’s quality. Simple changes in the way their data—mostly quantitative indicators and methods—are populated, collected and analysed can affect the final results, as shown by institutions’ differing positions in different rankings.

Nuances and caveats are easily lost. To the public and the academic community, the visible thing is that a given university is “number one”. What this actually means often goes unquestioned—the only thing that matters is where one’s institution stands and how to climb higher.

In this quest for performance, leaders and managers in universities look at the different indicators used and how they might improve them within their institutions. One such indicator is the number of publications in high-impact journals….”

COIs Informational Sessions, 12 Jan. 2022 | Catalog of Open Infrastructure Services | Invest in Open Infrastructure

“Catalog of Open Infrastructure Services (COIs) Informational Sessions

Join us for one of our upcoming informational sessions where we’ll share more about our process in developing COIs and solicit your thoughts. Participation is open; registration is required.

January 12, 2022 | 16.00 UTC / 24.00 UTC | Online

We recently announced the launch of our Catalog of Open Infrastructure Services (COIs). This resource is the culmination of research, interviews, and analysis of a sampling of open infrastructure projects serving the research community….”

The emergence of university rankings: a historical-sociological account | SpringerLink

Wilbers, S., Brankovic, J. The emergence of university rankings: a historical-sociological account. High Educ (2021). https://doi.org/10.1007/s10734-021-00776-7

Abstract

Nowadays, university rankings are a familiar phenomenon in higher education all over the world. But how did rankings achieve this status? To address this question, we bring in a historical-sociological perspective and conceptualize rankings as a phenomenon in history. We focus on the United States and identify the emergence of a specific understanding of organizational performance in the postwar decades. We argue that the advent of this understanding constituted a discursive shift, which was made possible—most notably but not solely—by the rise of functionalism to the status of a dominant intellectual paradigm. The shift crystallized in the rankings of graduate departments, which were commissioned by the National Science Foundation and produced by the American Council on Education (ACE) in 1966 and 1970. Throughout the 1970s, social scientists became increasingly interested in the methods and merits of ranking higher education institutions, explicitly referring to the ACE rankings. This was accompanied by a growing recognition, already in the 1970s, that rankings had a place and purpose in the higher education system—a trend that has continued into the present day.

What’s Your Tier? Introducing Library Partnership (LP) Certification for Journal Publishers · Series 1.3: Global Transition to Open

“Four categories and an open-ended response are the heart of LP [Library Partnership] certification for journal publishers: 

Access examines when and how the public can view an article and what barriers exist to author participation in publishing. There is substantial nuance in this category, in part because publishers often approach access very differently. In broad strokes, publishers that provide full and immediate open access (OA) across all journals earn more points than those that simply allow author-led open archiving. Publishers with no or low APCs earn points, as do publishers offering APC waivers for any of their journals (not forcing waiver-eligible authors to publish only in particular journals, that is, fully OA journals). 16 possible points. 

Rights focuses on author rights and reuse rights. Publishers that allow authors to retain all rights, or use Creative Commons licenses, earn points. 11 possible points.

Community considers ethical and business aspects of publishing. Points are awarded to nonprofit and society publishers. Legal actions against libraries or lobbying against OA do not earn points for a publisher, while evidence of transparency and responsible handling of user data do earn points. Membership in COPE is a plus. 12 possible points. 

Discoverability deals with the technical side of publishing. Publishers earn points through accessibility, ORCiD integration, participation in preservation organizations, and similar practices. 15 possible points.

The Open-Ended Response allows publishers to describe other actions they take to support equitable and open science/scholarship. 3 possible points. …”

Gadd (2021) Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up | Frontiers in Research Metrics and Analytics

Gadd, E. (2021) Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up. Frontiers in Research Metrics and Analytics. https://doi.org/10.3389/frma.2021.680023

Abstract: Draws parallels between the problematic use of GDP to evaluate economic success with the use of global university rankings to evaluate university success. Inspired by Kate Raworth’s Doughnut Economics, this perspective argues that the pursuit of growth as measured by such indicators creates universities that ‘grow’ up the rankings rather than those which ‘thrive’ or ‘mature.’ Such growth creates academic wealth divides within and between countries, despite the direction of growth as inspired by the rankings not truly reflecting universities’ critical purpose or contribution. Highlights the incompatibility between universities’ alignment with socially responsible practices and continued engagement with socially irresponsible ranking practices. Proposes four possible ways of engendering change in the university rankings space. Concludes by calling on leaders of ‘world-leading’ universities to join together to ‘lead the world’ in challenging global university rankings, and to set their own standards for thriving and maturing universities.

TRANSPARENT RANKING: All Repositories (August 2021) | Ranking Web of Repositories

During the last months, we realized the indexing of records of several open access repositories by Google Scholar is not as complete as previously without a clear reason. From the experience of a few cases, it looks that GS penalizes error in the metadata descriptions, so it is important to the affected repositories to check their level of indexing and to try to identify potential problems. Please, consider the following Indexing GS guidelines https://scholar.google.com/intl/en/scholar/inclusion.html https://www.or2015.net/wp-content/uploads/2015/06/or-2015-anurag-google-scholar.pdf and the following material: Exposing Repository Content to Google Scholar A few suggestions for improving the web visibility of the contents of your institutional OA repository “Altmetrics of the Open Access Institutional Repositories: A Webometrics Approach” As a service for the OA community we are providing five lists of repositories (all (institutional+subject), institutional, portals, data, and CRIS) with the raw numbers of records in GS for their web domains (site:xxx.yyy.zz excluding citations and patents) ranked by decreasing number of items as collected during the second week of AUGUST 2021. The list is still incomplete as we are still adding new repositories.