Assessing Open Access Friendliness of National Institutes of Technology (NITs): A Data Carpentry Approach | DESIDOC Journal of Library & Information Technology, 2022-10

“Abstract: This research study aims to measure the Open Access (OA) friendliness of National Institutes of Technology (NITs) of India that are listed in the overall category of NIRF (National Institutional Ranking Framework), 2021 by taking into consideration four important OA parameters – i) OA publication share; ii) OA licensing scenario; iii) citation impact of OA publications; and iv) altmetric scores of OA publications. It deals with 64,485 publications of the selected 11 NITs during the period from 2012 to 2021 (10 years), citations received by these publications (5,42,638 citations), and altmetric attention scores of the documents (5,213 publications) during the period under study. A data carpentry tool, namely OpenRefine, and open access bibliographic/citation data sources such as Unpaywall, Dimensions, and Altmetric.com have been deployed to accomplish this large-scale study for ranking NITs by their Open Access Friendliness (OAF). The OAF indicator, as applied in this study, is a distributed weightage based 100-point scale built on top of the aforesaid OA parameters. The ranking framework shows that Sardar Vallabhbhai National Institute of Technology, Surat (est. in 1961) has achieved the top position with a score of 52.12 (out of 100), but in totality only 3 NITs (out of the selected 11 NITs) crossed the 50 per cent mark in the adapted OAF scale.”

Roy, A., & Mukhopadhyay, P. (2022). Assessing Open Access Friendliness of National Institutes of Technology (NITs): A Data Carpentry Approach. DESIDOC Journal of Library & Information Technology, 42(5), 331-338. https://doi.org/10.14429/djlit.42.5.18263
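
The OAF indicator described above is a distributed-weightage score on a 100-point scale built from four OA parameters. As a purely illustrative sketch of how such a score can be combined, the snippet below weights four proportions; the weights, parameter names, and example figures are placeholders and are not the weighting scheme published by Roy & Mukhopadhyay (2022).

```python
# Illustrative sketch of a distributed-weightage Open Access Friendliness (OAF) score.
# The weights and example values below are assumptions for demonstration only;
# the actual weighting scheme is defined in Roy & Mukhopadhyay (2022).

def oaf_score(oa_share, licensed_share, oa_citation_share, oa_altmetric_share,
              weights=(40, 20, 20, 20)):
    """Return a 0-100 OAF score from four proportions (0-1) using distributed weights."""
    params = (oa_share, licensed_share, oa_citation_share, oa_altmetric_share)
    if not all(0.0 <= p <= 1.0 for p in params):
        raise ValueError("all parameters must be proportions between 0 and 1")
    return sum(w * p for w, p in zip(weights, params))

# Example: an institution with 45% OA output, 30% openly licensed output,
# 55% of citations accruing to OA papers, and 40% of altmetric attention on OA papers.
print(round(oaf_score(0.45, 0.30, 0.55, 0.40), 2))  # -> 43.0
```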

Indonesian research access: quantity over quality?

“Prior to the open access movement and the proliferation of the internet, almost all Indonesian higher education institutions made thesis and dissertation collections closed, accessible only with certain permissions….

The lack of selection process and quality control for the scholarly resources uploaded to the institutional repositories had led to some unhelpful material making its way into them: documents with supervisor’s comments still visible; documents that were compressed or password protected; documents that were uploaded as multiple image files; documents that were available only partially; and so on….

When quantity trumps quality, the repositories become less effective as a means of disseminating scholarly works….”

Chirikov (2022) Does conflict of interest distort global university rankings? | Higher Education, SpringerLink

Chirikov, I. Does conflict of interest distort global university rankings? High Educ (2022). https://doi.org/10.1007/s10734-022-00942-5

Abstract:

Global university rankings influence students’ choices and higher education policies throughout the world. When rankers not only evaluate universities but also provide them with consulting, analytics, or advertising services, rankers are vulnerable to conflicts of interest that may potentially distort their rankings. The paper assesses the impact of contracting with rankers on university ranking outcomes using a difference-in-difference research design. The study matches data on the positions of 28 Russian universities in QS World University Rankings between 2016 and 2021 with information on contracts these universities had for services from QS—the company that produces these rankings. The study compares the fluctuations in QS rankings with data obtained from the Times Higher Education rankings and data recorded by national statistics. The results suggest that the universities with frequent QS-related contracts had an increase of 0.75 standard deviations (~140 positions) in QS World University Rankings and an increase of 0.9 standard deviations in reported QS faculty-student ratio scores over 5 years, regardless of changes in the institutional characteristics. The observed distortions could be explained by university rankers’ self-serving bias that benefits both rankers and prestige-seeking universities and reinforces the persistence of rankings in higher education.
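
Chirikov’s identification strategy is a difference-in-differences comparison: ranking outcomes of universities with QS contracts versus those without, before and after contracting. The sketch below shows the general shape of such an estimate on synthetic data; the variable names, effect sizes, and cut-off year are hypothetical and do not reproduce the paper’s dataset or model specification.

```python
# Difference-in-differences sketch on synthetic data; all numbers and variable
# names are hypothetical and do not reproduce Chirikov's (2022) analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for univ in range(28):
    treated = int(univ < 14)                 # universities with frequent QS contracts
    for year in range(2016, 2022):
        post = int(year >= 2018)             # period after contracting begins (assumed)
        score = 50 + 2 * treated + 1 * post + 6 * treated * post + rng.normal(0, 3)
        rows.append({"univ": univ, "year": year, "treated": treated,
                     "post": post, "rank_score": score})
df = pd.DataFrame(rows)

# The coefficient on treated:post is the difference-in-differences estimate,
# with standard errors clustered by university.
model = smf.ols("rank_score ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["univ"]})
print(model.params["treated:post"], model.bse["treated:post"])
```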


The Curtin Open Knowledge Initiative | LIBER Quarterly: The Journal of the Association of European Research Libraries

Abstract: In the current era of worldwide competition in higher education, universities are caught up in market processes that encourage compliance with the measurement systems applied by world university rankings. Despite questions about the rankings’ methodologies and data sources, universities continue to adopt assessment and evaluation practices that require academic researchers to publish in sources indexed by the major commercial bibliographic databases used by world rankings. Building on a critique of the limited bibliometric measures and underlying assumptions of rankings, the Curtin Open Knowledge Initiative interdisciplinary research project aggregates and analyses scholarly research data including open access output from multiple open sources for more than 20,000 institutions worldwide. To understand who is creating knowledge and how diversity is enacted through the transmission of knowledge we analyse workforce demographic data. In this article, we discuss the project’s rationale, methodologies and examples of data analysis that can enable universities to make independent assessments, ask questions about rankings, and contribute to open knowledge-making and sharing. Expanding on our presentation to the LIBER Online 2021 Conference, we discuss collaboration with academic libraries and other scholarly communication stakeholders to develop and extend the open knowledge project.

Wenaas (2022) Choices of immediate open access and the relationship to journal ranking and publish-and-read deals | Frontiers

Wenaas L (2022) Choices of immediate open access and the relationship to journal ranking and publish-and-read deals. Front. Res. Metr. Anal. 7:943932. doi: 10.3389/frma.2022.943932

The role of academic journals is significant in the reward system of science, which makes their rank important for the researcher’s choice in deciding where to submit. The study asks how choices of immediate gold and hybrid open access are related to journal ranking and how the uptake of immediate open access is affected by transformative publish-and-read deals, pushed by recent science policy. Data consists of 186,621 articles published with a Norwegian affiliation in the period 2013–2021, all of which were published in journals ranked in a nation-specific ranking, on one of two levels according to their importance, prestige, and perceived quality within a discipline. The results are that researchers chose to have their articles published as hybrid twice as often in journals on the most prestigious level compared with journals on the normal level. The opposite effect was found with gold open access, where publishing on the normal level was chosen three times more often than on the high level. This can be explained by the absence of highly ranked gold open access journals in many disciplines. With the introduction of publish-and-read deals, the uptake of hybrid open access has been boosted, and it has become a popular choice, enabling researchers to publish open access in legacy journals. The results confirm the position of journals in the reward system of science and should inform policymakers about the effects of transformative arrangements and their costs against the overall level of open access.

Uses of the Journal Impact Factor in national journal rankings in China and Europe – Kulczycki – Journal of the Association for Information Science and Technology – Wiley Online Library

Abstract: This paper investigates different uses of the Journal Impact Factor (JIF) in national journal rankings and discusses the merits of supplementing metrics with expert assessment. Our focus is national journal rankings used as evidence to support decisions about the distribution of institutional funding or career advancement. The seven countries under comparison are China, Denmark, Finland, Italy, Norway, Poland, and Turkey—and the region of Flanders in Belgium. With the exception of Italy, top-tier journals used in national rankings include those classified at the highest level according to the tiers or points implemented. A total of 3,565 (75.8%) out of 4,701 unique top-tier journals were identified as having a JIF, with 55.7% belonging to the first Journal Impact Factor quartile. Journal rankings in China, Flanders, Poland, and Turkey classify journals with a JIF as being top-tier, but only when they are in the first quartile of the Average Journal Impact Factor Percentile. Journal rankings that result from expert assessment in Denmark, Finland, and Norway regularly classify journals as top-tier outside the first quartile, particularly in the social sciences and humanities. We conclude that experts, when tasked with metric-informed journal rankings, take into account quality dimensions that are not covered by JIFs.

Barnett & Gadd (2022) University league tables have no legs to stand on | Significance

Barnett, A. and Gadd, E. (2022), University league tables have no legs to stand on. Significance, 19: 4-7. https://doi.org/10.1111/1740-9713.01663

What really makes one higher education institution “better” than another? The ranking of the world’s universities is big business built on a flimsy statistical approach, say Adrian Barnett and Elizabeth Gadd

Who games metrics and rankings? Institutional niches and journal impact factor inflation – ScienceDirect

Abstract:  Ratings and rankings are omnipresent and influential in contemporary society. Individuals and organizations strategically respond to incentives set by rating systems. We use academic publishing as a case study to examine organizational variation in responses to influential metrics. The Journal Impact Factor (JIF) is a prominent metric linked to the value of academic journals, as well as career prospects of researchers. Since scholars, institutions, and publishers alike all have strong interests in affiliating with high JIF journals, strategic behaviors to ‘game’ the JIF metric are prevalent. Strategic self-citation is a common tactic employed to inflate JIF values. Based on empirical analyses of academic journals indexed in the Web of Science, we examine institutional characteristics conducive to strategic self-citation for JIF inflation. Journals disseminated by for-profit publishers, with lower JIFs, published in academically peripheral countries and with more recent founding dates were more likely to exhibit JIF-inflating self-citation patterns. Findings reveal the importance of status and institutional logics in influencing metrics gaming behaviors, as well as how metrics can affect work outcomes in different types of institutions. While quantitative rating systems affect many who are being evaluated, certain types of people and organizations are more prone to being influenced by rating systems than others.

Successful Implementation of Open Access Strategies at Universities of Science & Technology – Strathprints

Abstract: While the CWTS Leiden ranking has been available since 2011/2012, it is only in 2019 that a first attempt was made at ranking institutions by Open Access-related indicators. This was due to the arrival of Unpaywall as a tool to measure openly available institutional research outputs – either via the Green or the Gold OA routes – for a specific institution. The CWTS Leiden ranking by percentage of the institutional research output published Open Access effectively meant the first opportunity for institutions worldwide to be ranked by the depth of their Open Access implementation strategies, brushing aside aspects like their size. This provided an interesting way to map the progress of CESAER Member institutions that were part of the Task Force Open Science 2020-2021 Open Access Working Group (OAWG) towards the objective stated by Plan S of achieving 100% Open Access of research outputs. The OAWG then set out to map the situation of the Member institutions represented in it on this Open Access ranking and to track their evolution on subsequent editions of this ranking. The idea behind this analysis was not so much to introduce an element of competition across institutions but to explore whether progress was taking place in the percentage of openly available institutional research outputs year on year. The results of this analysis – shown in figures within this paper for the 2019, 2020 and 2021 editions – show strong differences across Member institutions that are part of the OAWG. From internal discussions within the group, it became evident that these differences could be explained through a number of factors that contributed to a successful Open Access implementation at an institutional level. This provided the basis for this work. The document identifies four key factors that contribute to a successful OA implementation at institutions, and hence to achieving a good position on the CWTS Leiden ranking for Open Access.
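
The Leiden Open Access indicators described above rest on DOI-level lookups against Unpaywall. As a rough illustration of the underlying measurement, the sketch below estimates the OA share of a small publication set via the public Unpaywall REST API; the DOIs and contact e-mail are placeholders, and rate limiting and error handling are deliberately simplified.

```python
# Sketch: estimate the open access share of a publication set with the Unpaywall API
# (GET https://api.unpaywall.org/v2/{doi}?email=...). The DOIs and e-mail address
# below are placeholders, not real institutional data.
import requests

EMAIL = "openaccess@example.edu"      # Unpaywall requires a contact e-mail parameter
DOIS = [
    "10.1038/nature12373",            # example DOIs; replace with an institutional list
    "10.14429/djlit.42.5.18263",
]

def is_open_access(doi):
    """Return Unpaywall's is_oa flag for a DOI, or None if the lookup fails."""
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": EMAIL}, timeout=30)
    if resp.status_code != 200:
        return None
    return bool(resp.json().get("is_oa"))

results = [is_open_access(doi) for doi in DOIS]
known = [r for r in results if r is not None]
if known:
    print(f"OA share: {sum(known) / len(known):.1%} of {len(known)} resolvable DOIs")
```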

Agreement on Reforming Research Assessment

“As signatories of this Agreement, we agree on the need to reform research assessment practices. Our vision is that the assessment of research, researchers and research organisations recognises the diverse outputs, practices and activities that maximise the quality and impact of research. This requires basing assessment primarily on qualitative judgement, for which peer review is central, supported by responsible use of quantitative indicators. Among other purposes, this is fundamental for: deciding which researchers to recruit, promote or reward, selecting which research proposals to fund, and identifying which research units and organisations to support….”

Mills (2022) Decolonial perspectives on global higher education: Disassembling data infrastructures, reassembling the field

David Mills (2022) Decolonial perspectives on global higher education: Disassembling data infrastructures, reassembling the field, Oxford Review of Education, DOI: 10.1080/03054985.2022.2072285

Abstract: The expansion of university systems across the planet over the last fifty years has led to the emergence of a new policy assemblage – ‘global higher education’ – that depends on the collection, curation and representation of quantitative data. In this paper I explore the use of data by higher education policy actors to sustain ‘epistemic coloniality’. Building on a rich genealogy of anticolonial, postcolonial and feminist scholarship, I show how decolonial theory can be used to critique dominant global higher education imaginaries and the data infrastructures they depend on. Tracing the history of these infrastructures, I begin with OECD’s creation of decontextualised educational ‘indicators’. I go on to track the policy impact of global university league tables owned by commercial organisations. They assemble and commensurate institutional data into rankings that become taken-for-granted ‘global’ policy knowledge. I end by exploring the policy challenge of building alternative socio-technical infrastructures, and finding new ways to value higher education.

Jamali, Wakeling & Abbasi (2022) Why do journals discontinue? A study of Australian ceased journals – Learned Publishing – Wiley Online Library

Jamali, H.R., Wakeling, S. and Abbasi, A. (2022), Why do journals discontinue? A study of Australian ceased journals. Learned Publishing, 35: 219-228. https://doi.org/10.1002/leap.1448

Abstract: Little is known about why journals discontinue despite the significant implications. We present an analysis of 140 Australian journals that ceased from 2011 to mid-2021 and present the results of a survey of editors of 53 of them. The average death age of journals was 19.7 years (median = 16), with 57% being 10 years or older. About 54% of them belonged to educational institutions and 34% to non-profit organizations. In terms of subject, 75% of the journals belonged to social sciences, humanities and arts. The survey showed that funding was an important reason for discontinuation, and lack of quality submissions and lack of support from the owners of the journal also played a role. Too much reliance on voluntary work appeared to be an issue for editorial processes. The dominant metric culture in the research environment and pressure for journals to perform well in journal rankings negatively affect local journals in attracting quality submissions. A fifth of journals indicated that they did not have a plan for the preservation of articles at the time of publication and the current availability of the content of ceased journals appeared to be sub-optimal in many cases with reliance on the website of ceased journals or web-archive platforms.

Key points

One hundred and forty Australian journals ceased publishing between 2011 and 2020, with an average age of 19 years on cessation.
The majority of Australian journals that ceased publication 2011–2020 were in the social sciences, humanities and arts where local journals play an important role.
Funding was found to be a key reason for journal discontinuation followed by lack of support and quality submissions and over-reliance on voluntary work.
Metric driven culture and journal rankings adversely impact local journals and can lead to discontinuation.
Many journals have neither sustainable business models (or funding), nor a preservation plan, both of which jeopardize journal continuation and long-term access to archive content.

Rankings could undermine research-evaluation reforms – Research Professional News

“These European-level efforts will add to the gathering momentum for more rigorous and fairer ways to evaluate research. Thanks to initiatives such as the San Francisco Declaration on Research Assessment, more and more institutions are turning away from simplistic indicators such as journal impact factors as a measure of the quality of research.

The omission of rankings from the debate is worrying, given that the current design of rankings risks hindering efforts to improve research evaluation.

University rankings are meant to provide a comparison between institutions based on indicators such as citation counts and student-to-staff ratios. 

In reality, they have a tremendous impact on the public perception of the quality of institutions and their research. …

Rankings may not be directly linked to institutional funding and resources, but they affect these indirectly by swaying the choices of students, researchers, staff, institutions’ leaders, companies and global partners. 

Similar to journal impact factors, however, rankings provide an at-best-incomplete picture of a university’s quality. Simple changes in the way their data—mostly quantitative indicators and methods—are populated, collected and analysed can affect the final results, as shown by institutions’ differing positions in different rankings.

Nuances and caveats are easily lost. To the public and the academic community, the visible thing is that a given university is “number one”. What this actually means often goes unquestioned—the only thing that matters is where one’s institution stands and how to climb higher.

In this quest for performance, leaders and managers in universities look at the different indicators used and how they might improve them within their institutions. One such indicator is the number of publications in high-impact journals….”

Gadd (2022) Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up | Frontiers – Research Metrics and Analytics

by Elizabeth Gadd

Draws parallels between the problematic use of GDP to evaluate economic success and the use of global university rankings to evaluate university success. Inspired by Kate Raworth’s Doughnut Economics, this perspective argues that the pursuit of growth as measured by such indicators creates universities that ‘grow’ up the rankings rather than those which ‘thrive’ or ‘mature.’ Such growth creates academic wealth divides within and between countries, despite the direction of growth as inspired by the rankings not truly reflecting universities’ critical purpose or contribution. Highlights the incompatibility between universities’ alignment with socially responsible practices and continued engagement with socially irresponsible ranking practices. Proposes four possible ways of engendering change in the university rankings space. Concludes by calling on leaders of ‘world-leading’ universities to join together to ‘lead the world’ in challenging global university rankings, and to set their own standards for thriving and maturing universities.