Abstract: In this paper we present the “superior identification index” (SII), a metric that quantifies the capability of academic journals to recognize top papers within a specific time window and field of study. Intuitively, SII is the percentage of a journal’s papers that fall within the top p% of papers in the field. SII provides a flexible framework for making trade-offs between journal quality and quantity: as p rises, it puts more weight on quantity and less on quality. Concerns about the selection of p are discussed, and extensions of SII, including the superior identification efficiency (SIE) and the paper rank percentile (PRP), are proposed to sketch other dimensions of journal performance. Based on bibliometric data from the field of ecology, we find that as p increases, the correlation between SIE and the journal impact factor (JIF) first rises and then drops, indicating that the JIF most likely reflects “how well a journal identifies the top 26~34% of papers in the field”. We hope that the newly proposed SII metric and its extensions will promote quality awareness and provide flexible tools for research evaluation.
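The intuition behind SII can be sketched in a few lines of code. This is a minimal illustration, not the authors’ implementation: the function name, the toy citation counts, and the use of citations as the paper-quality ranking are all assumptions made for the example.

```python
# Minimal sketch of the Superior Identification Index (SII): the share of a
# journal's papers that fall within the top p% of the field. Papers are ranked
# here by citation count (an assumption; any quality ranking would work).

def sii(field_citations, journal_citations, p):
    """field_citations: citation counts of all papers in the field.
    journal_citations: citation counts of the journal's papers (a subset).
    p: percentage (0-100) defining the field's top stratum."""
    ranked = sorted(field_citations, reverse=True)
    k = max(1, round(len(ranked) * p / 100))   # size of the top p% stratum
    threshold = ranked[k - 1]                  # lowest count still in the top p%
    top = sum(1 for c in journal_citations if c >= threshold)
    return 100 * top / len(journal_citations)

field = [120, 95, 80, 60, 50, 40, 30, 20, 10, 5]   # toy field of 10 papers
journal = [120, 60, 10]                            # toy journal with 3 papers
print(sii(field, journal, 20))   # top 20% of the field = 2 papers; 1 of the
                                 # journal's 3 papers qualifies, so SII ≈ 33.3
```

Raising p lowers the threshold, so more of the journal’s papers qualify: this is the quality-versus-quantity trade-off the abstract describes.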
Abstract: In order to assess the progress of Open Science in France, the French Ministry of Higher Education, Research and Innovation published the French Open Science Monitor in 2019. Although this tool has a bias, in that only publications with a DOI can be considered (which favours article-dominant research communities), its indicators are trustworthy and reliable. The University of Lorraine was the very first institution to reuse the National Monitor, creating a new version at the scale of a single university in 2020. Since its release, the Lorraine Open Science Monitor has been reused by many other institutions. In 2022, the French Open Science Monitor evolved further, enabling new insights into open science. The Lorraine Open Science Monitor has also evolved since it began. This paper details how the initial code for the Lorraine Open Science Monitor was developed and disseminated. It then outlines plans for development over the next few years.
Abstract: Citation metrics have value because they aim to make scientific assessment a level playing field, but urgent transparency-based adjustments are necessary to ensure that measurements yield the most accurate picture of impact and excellence. One problematic area is the handling of self-citations, which are either excluded or inappropriately accounted for when using bibliometric indicators for research evaluation. In this talk, arguing in favour of openly tracking self-citations, I report on a study of self-referencing behaviour across academic disciplines as captured by the curated bibliometric database Web of Science. Specifically, I examine the behaviour of thousands of authors grouped into 15 subject areas such as Biology, Chemistry, Science and Technology, Engineering, and Physics. I focus on the methodological set-up of the study and discuss data-science-related problems such as author name disambiguation and bibliometric indicator modelling. This talk is based on the following publication: Kacem, A., Flatt, J. W., & Mayr, P. (2020). Tracking self-citations in academic publishing. Scientometrics, 123(2), 1157–1165. https://doi.org/10.1007/s11192-020-03413-9
Abstract: Reference publication year spectroscopy (RPYS) is a bibliometric method originally introduced to reveal the historical roots of research topics or fields. RPYS does not identify the most highly cited papers of the publication set being studied (as is usually done by bibliometric analyses in research evaluation); instead, it indicates the most frequently referenced publications, each within a specific reference publication year. In this study, we propose using the method to identify important researchers, institutions and countries in the context of breakthrough research. To demonstrate our approach, we focus on research on physical modeling of Earth’s climate and the prediction of global warming as an example. Klaus Hasselmann and Syukuro Manabe were both honored with the Nobel Prize in 2021 for their fundamental contributions to this research. Our results reveal that RPYS is able to identify the most important researchers, institutions, and countries. For example, all the relevant authors’ institutions are located in the USA. These institutions are either research centers of two US federal agencies (NASA and NOAA) or universities: the University of Arizona, Princeton University, the Massachusetts Institute of Technology (MIT), and Stony Brook University.
This Handbook provides a comprehensive overview of current developments, issues and good practices regarding assessment in social science research. It pays particular attention to the challenges in evaluation policies in the social sciences, as well as to the specificities of publishing in the area. The Handbook discusses the current societal challenges facing researchers, from digital societies, to climate change and sustainability, to trust in democratic societies. Chapters provide ways to strengthen research assessment in the social sciences for the better, by offering a diverse range of experiences and views of experts from all continents. The Handbook also outlines major data sources that can be used to assess social sciences research, as well as looking at key dimensions of research quality in the social sciences including journal peer review, the issue of identifying research quality, and gender disparities in social science research. This book will be an essential read for scholars interested in research assessment in the social sciences. It will also be useful to policy makers looking to understand the key position of the social sciences in science and society and provide appropriate frameworks for key societal challenges.
Mikael Laakso, & Anna-Maija Multas. (2022). European scholarly journals from small- and mid-size publishers in times of Open Access: Mapping journals and public funding mechanisms (Version 1). Zenodo. https://doi.org/10.5281/zenodo.5909512
Open Access (OA) publishing omits reader-side fees, which means that the resources to sustain journals cannot originate from the reader side. Large international publishers have been able to monetize OA publishing through national consortia and institutions, but small- and mid-sized publishers have not succeeded to the same degree. There is currently a lack of information concerning to what degree small- and mid-sized publishers are present in European countries, to what degree their journals are already OA, and how the countries are supporting these journals financially or technically to publish their materials OA. The methods for this study include bibliometric analysis, document analysis of web information, inquiries to OA experts in European countries, and a web survey of a small sample of journals in each country. The study found that there are 16,387 journals from small- and mid-sized publishers being published in European countries (including transcontinental states), of which 36% are already publishing OA. The vast majority of journals published in Europe come from single-journal publishers (77% of all publishers publish only one journal). Journals from small- and mid-sized publishers were found to be multilingual or non-English to a higher degree than journals from large publishers (44% and 43% vs. 6% and 5%). According to our observations, there is large diversity in how (and whether) countries reserve and distribute funds to journals active in those countries, ranging from continuous inclusive subsidies to competitive grant funding or nothing at all. Funding information was often difficult to discover, and efforts to make such information more easily available would likely facilitate policy development in this area. We call for additions and corrections to journal funding instrument information in order to make the data as comprehensive and accurate as possible.
Introduction. The study investigates whether online attention, generated on social media or by video tutorials, affects the popularity of science mapping tools in the research community.
Method. We collected data from the Web of Science, Scopus, YouTube, Facebook, Twitter, and Instagram, using web-scraping tools. Bibliometrics, altmetrics and webometrics were applied to process the data and to analyse the tools Gephi, Sci2 Tool, VOSviewer, Pajek, CiteSpace and HistCite.
Analysis. Statistical and network analyses, and YouTube analytics, were used. The tools’ interfaces were assessed in the preliminary stage of the comparison. The results were plotted on charts and graphs, and compared.
Results. Social media and video tutorials had minimal influence on the popularity of different tools, as reflected by the number of papers within the Web of Science and Scopus where they featured. However, the small but constant growth of publications mentioning Gephi could be a result of Twitter promotion and a high number of video tutorials. The authors proposed four directions for further comparisons of science mapping software.
Conclusions. This work shows that biblio- and scientometricians are not influenced by social media visibility or accessibility of video tutorials. Future research on this topic could focus on evaluating the tools, their features and usability, or the availability of workshops.
Abstract: This essay develops the idea of surveillance publishing, with special attention to the example of Elsevier. A scholarly publisher can be defined as a surveillance publisher if it derives a substantial proportion of its revenue from prediction products, fueled by data extracted from researcher behavior. The essay begins by tracing the Google search engine’s roots in bibliometrics, alongside a history of the citation analysis company that became, in 2016, Clarivate. The point is to show the co-evolution of scholarly communication and the surveillance advertising economy. The essay then refines the idea of surveillance publishing by engaging with the work of Shoshana Zuboff, Jathan Sadowski, Mariano-Florentino Cuéllar, and Aziz Huq. The recent history of Elsevier is traced to describe the company’s research-lifecycle data-harvesting strategy, with the aim of developing and selling prediction products to universities and other customers. The essay concludes by considering some of the potential costs of surveillance publishing, as other big commercial publishers increasingly enter the predictive-analytics market. It is likely, I argue, that windfall subscription-and-APC profits in Elsevier’s “legacy” publishing business have financed its decade-long acquisition binge in analytics, with the implication that university customers are budgetary victims twice over. The products’ purpose, I stress, is to streamline the top-down assessment and evaluation practices that have taken hold in recent decades, in tandem with the view that the university’s main purpose is to grow regional and national economies. A final pair of concerns is that publishers’ prediction projects may camouflage and perpetuate existing biases in the system—and that scholars may internalize an analytics mindset, one already encouraged by citation counts and impact factors.
On a regular basis, we look at the download data of the OAPEN Library and where it comes from. While examining the data from January to August 2021, we focused on the usage originating from libraries and academic institutions. Happily, we found that more than 1,100 academic institutions and libraries have used the OAPEN Library.
Of course, we do not actively track individual users. Instead we use a more general approach: we look at the website from which the download from the OAPEN Library originated. How does that work? For instance, when someone in the library of the University of Leipzig clicks on the download link of a book in the OAPEN Library, two things happen: first, the book is directly available on the computer that person is working on, and second, the OAPEN server notes the ‘return address’: https://katalog.ub.uni-leipzig.de/. We have no way of knowing who the person is who started the download; we only know that the request originated from the Leipzig University Library. Furthermore, some organisations choose to suppress sending their ‘return address’, making them anonymous.
What is helpful to us is the fact that aggregators such as ExLibris, EBSCO or SerialSolutions use a specific return address. Examples are “west-sydney-primo.hosted.exlibrisgroup.com” – pointing to the library of Western Sydney University – or “sfx.unibo.it” – coming from the library of the Università di Bologna. In this way, many academic libraries can also be identified from their web address. Some academic institutions only display their ‘general’ address.
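The referrer-based attribution described above can be illustrated with a short sketch. This is not OAPEN’s actual pipeline: the log lines, the function name, and the domain-to-institution mapping are invented for the example, though the three domains are the ones mentioned in the text.

```python
# Sketch: attribute downloads to institutions from the HTTP 'return address'
# (Referer) domain. Sample referrers and the mapping are illustrative only.
from urllib.parse import urlparse
from collections import Counter

KNOWN_DOMAINS = {
    "katalog.ub.uni-leipzig.de": "Leipzig University Library",
    "sfx.unibo.it": "Università di Bologna",
    "west-sydney-primo.hosted.exlibrisgroup.com": "Western Sydney University",
}

def attribute(referrers):
    """Count downloads per institution; unknown or suppressed referrers
    are grouped as 'anonymous'."""
    counts = Counter()
    for ref in referrers:
        host = urlparse(ref).hostname  # None for an empty/suppressed referrer
        counts[KNOWN_DOMAINS.get(host, "anonymous")] += 1
    return counts

sample = [
    "https://katalog.ub.uni-leipzig.de/Record/123",
    "https://sfx.unibo.it/sfx_local?id=456",
    "",  # organisation suppresses its return address
]
print(attribute(sample))
```

Grouping unmatched hosts under “anonymous” mirrors the limitation noted above: institutions that suppress their return address simply cannot be counted.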
Abstract: Citation indexes are by now part of the research infrastructure in use by most scientists: a necessary tool in order to cope with the increasing amounts of scientific literature being published. Commercial citation indexes are designed for the sciences and have uneven coverage and unsatisfactory characteristics for humanities scholars, while no comprehensive citation index is published by a public organization. We argue that an open citation index for the humanities is desirable, for four reasons: it would greatly improve and accelerate the retrieval of sources, it would offer a way to interlink collections across repositories (such as archives and libraries), it would foster the adoption of metadata standards and best practices by all stakeholders (including publishers) and it would contribute research data to fields such as bibliometrics and science studies. We also suggest that the citation index should be informed by a set of requirements relevant to the humanities. We discuss four: source coverage must be comprehensive, including books and citations to primary sources; there needs to be chronological depth, as scholarship in the humanities remains relevant over time; the index should be collection-driven, leveraging the accumulated thematic collections of specialized research libraries; and it should be rich in context in order to allow for the qualification of each citation, for example by providing citation excerpts. We detail the fit-for-purpose research infrastructure which can make the humanities citation index a reality. Ultimately, we argue that a citation index for the humanities can be created by humanists, via a collaborative, distributed and open effort.
The purpose of this paper is to determine whether the author productivity pattern of library and information science (LIS) open access journals adheres to Lotka’s inverse square law of scientific productivity. Since the law was introduced, it has been tested in various fields of knowledge, with varying results. This study closely follows Lotka’s inverse square law in the field of LIS open access journals to establish a factual result and set a baseline for future studies on author productivity in LIS open access journals.
Publication data of ten selected LIS open access journals, pertaining to authorship and citations, were downloaded from the Scopus database and analysed using bibliometric indicators such as authorship pattern, collaborative index (CI), degree of collaboration (DC), collaborative coefficient (CC) and citation counts. The study applied Lotka’s inverse square law to assess the author productivity pattern of LIS open access journals, and the Kolmogorov-Smirnov (K-S) goodness-of-fit test was further applied to compare observed and expected author productivity data.
Inferences were drawn for the set objectives on authorship pattern, collaboration trend and author productivity pattern of the LIS open access journals covered in this study. The single authorship pattern is dominant in these journals. The CI, DC and CC are found to be 1.95, 0.47 and 0.29, respectively. The expected values as per Lotka’s law (n = 2) vary significantly from the observed values according to the chi-square test and the K-S goodness-of-fit test. Hence, this study does not adhere to Lotka’s inverse square law of scientific productivity.
Researchers may gain an idea of the author productivity patterns of LIS open access journals. This study used the K-S goodness-of-fit test and the chi-square test to validate the author productivity data. The inferences drawn from this study will serve as a baseline for future research on author productivity in LIS open access journals.
This study is significant in view of the growing research on open access journals in the field of LIS, and for identifying the authorship pattern, collaboration trend and author productivity pattern of such journals.
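The indicators reported above follow standard formulas (Lawani’s collaborative index, Subramanyam’s degree of collaboration, and the collaborative coefficient of Ajiferuke et al.), and Lotka’s inverse square law predicts that the number of authors producing k papers falls off as 1/k². A minimal sketch, with toy author counts rather than the study’s Scopus data:

```python
# Sketch of the collaboration indicators and Lotka's law named above.
# The authorship counts are invented toy data, not the study's dataset.

def collaboration_indicators(authors_per_paper):
    """authors_per_paper: number of authors on each paper."""
    n = len(authors_per_paper)
    ci = sum(authors_per_paper) / n                       # mean authors per paper
    dc = sum(1 for a in authors_per_paper if a > 1) / n   # share of multi-authored papers
    cc = 1 - sum(1 / a for a in authors_per_paper) / n    # collaborative coefficient
    return ci, dc, cc

def lotka_expected(single_paper_authors, k_max, exponent=2):
    """Expected number of authors with k papers under Lotka's law:
    authors(k) = authors(1) / k**exponent (inverse square law when exponent=2)."""
    return [single_paper_authors / k**exponent for k in range(1, k_max + 1)]

papers = [1, 1, 1, 2, 2, 3]      # toy authorship counts for six papers
print(collaboration_indicators(papers))
print(lotka_expected(100, 4))    # 100 single-paper authors, k = 1..4
```

Comparing such expected Lotka counts with observed counts is what the chi-square and K-S goodness-of-fit tests in the study formalise.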
LIS-Bibliometrics Committee members Barbara S. Lancho Barrantes, Hannelore Vanhaverbeke and Silvia Dobre discuss the new bibliometrics competencies model, why it was updated, and the changes made to it.
Our VOSviewer software enables visualizations of bibliometric networks to be explored interactively. Nevertheless, VOSviewer visualizations often end up as static images in blog posts, research articles, policy reports, and PowerPoint presentations. In this way the visualizations lose a lot of their value, and in the end they may indeed be “just nice to look at but not useful or helpful”.
To address this problem, we have developed VOSviewer Online, a web-based version of VOSviewer released today. Using VOSviewer Online, visualizations of bibliometric networks can be explored interactively in a web browser. This makes it much easier to share interactive visualizations, and it reduces the need to show static images.
The Göttingen State and University Library (SUB Göttingen) has been engaged for years in national and international projects to create infrastructures and services in the areas of scholarly publishing and the implementation of Open Access.
In this context, for the BMBF project „indi:oa – Verantwortungsbewusste Bewertung und Qualitätssicherung von Open-Access-Publikationen mittels bibliometrischer Indikatoren“ (responsible evaluation and quality assurance of open access publications using bibliometric indicators), a position at
pay grade 13 TV-L, part-time, fixed-term
is to be filled at the SUB Göttingen at the earliest possible date, part-time (75%, currently 29.85 hours per week). The position is fixed-term until the project ends at the end of July 2023.
The BMBF joint project „indi:oa – Verantwortungsbewusste Bewertung und Qualitätssicherung von Open-Access-Publikationen mittels bibliometrischer Indikatoren“ aims to counteract the uninformed use of bibliometric indicators in the context of the open access transformation at research institutions in Germany. Together with the German Centre for Higher Education Research and Science Studies (DZHW), the project combines bibliometric studies with awareness activities. The goal is to strengthen the willingness of research institutions to recognize innovative, quality-assured open access publishing venues as an alternative to the traditional journals of the large publishers.
Assessing the information needs of open access officers and research support offices with regard to open access and bibliometrics
Developing target-group-specific recommendations and training materials based on empirical evidence
Building and supporting an informal mentoring network
Coordinating the project consortium, including public relations
A university degree (Master’s or equivalent) in a social science or a library and information science subject (or comparable)
Interest in questions of quantitative science studies and/or experience in research management, including publication consulting
Strong problem-solving skills, independence and the ability to work in a team
Good written and spoken German and English
Knowledge of scholarly publishing, with a focus on open access and open science
Experience with qualitative or quantitative surveys
Experience in data analysis and visualization using a statistical programming environment (for example R or Python)
Community engagement, for example in open science communities
We offer you varied, cross-departmental project work in a committed international team and a cooperative working atmosphere. We also support flexible working arrangements such as remote work, as well as innovative training for professional development. Part-time work is possible. An increase to full-time may be possible through participation in related projects at the SUB Göttingen.
For any questions, please contact Najko Jahn (e-mail) and Dr. Birgit Schmidt (e-mail), +49 551 39-33181 (phone).
The University of Göttingen seeks to increase the proportion of women in areas where they are underrepresented and therefore strongly encourages qualified women to apply. It also sees itself as a family-friendly university and promotes the compatibility of research, career and family. The university aims to employ more people with severe disabilities; applications from severely disabled persons with equal qualifications will be given preference.
Please submit your application, with all relevant documents combined into a single file, by 19 July 2021, exclusively via the application portal.
Publishers, libraries, and a diverse array of scholarly communications platforms and services generate information about how OA books are accessed online. Since its launch in 2015, the OA eBook Usage Data Trust (@OAEBU_project) effort has brought together these thought leaders to document the barriers facing OA eBook usage analytics. To start addressing these challenges and to understand the role of a usage data trust, the effort has spent the last year studying and documenting the usage data ecosystem. Interview-based research led to the documentation of the OA book data supply chain, which maps related metadata and usage data standards and workflows. Dozens worldwide have engaged in human-centered design workshops and communities of practice that went virtual during 2020. Together these communities revealed how OA book publishers, platforms, and libraries are looking beyond their need to provide usage and impact reports. Workshop findings are now documented within use-cases that list the queries and activities where usage data analytics can help scholars and organizations to be more effective and strategic. Public comment is invited for the OA eBook Usage Data Analytics and Reporting Use Cases Report through July 10, 2021.