Comparison of subscription access and open access obstetrics and gynecology journals in the SCImago database | Özay | Ginekologia Polska

Abstract:  Objectives: The aim of this study is to compare the annual SJR and to evaluate other parameters that reflect the scientific impact of journals, by open access (OA) versus subscription access (SA) status, in the field of obstetrics and gynecology according to the SCImago database. Material and methods: This study was conducted between September and December 2019 at Near East University. The SCImago Journal & Country Rank database was used to collect information about the journals. We evaluated and compared the changes in the one-year SJR (SCImago Journal Rank) and journal impact factor (JIF) of OA and SA journals. Results: Data from 183 scientific journals in the field of obstetrics and gynecology from the period between 1999 and 2018 were evaluated; 140 of these journals were SA and 43 were OA. The average SJR of OA journals in 1999 was 0.17, while it was 0.38 for SA journals. In 2018, these values were 0.31 and 0.78 for OA and SA journals, respectively. In the comparison of JIF, the average for OA journals in 1999 was 0.09, while it was 0.66 for SA journals. In 2018, these values were 0.80 and 1.93 for OA and SA journals, respectively. Conclusions: Access to information has become easier due to technological developments, and this will continue to affect the access policies of journals. Despite the disadvantages of predatory journals, the rise of OA journals in both number and quality is likely to continue. Key words: open access journal; impact factor; subscription access journal; SCImago; obstetrics; gynecology.
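
To make the comparison concrete, here is a minimal pandas sketch of the kind of analysis the abstract describes, assuming a hypothetical CSV export from the SCImago Journal & Country Rank site; the file name and the access_type, year, sjr, and jif columns are placeholders, not the study's actual data format.

```python
# Sketch only: group journals by access type and year, then average SJR/JIF.
# The CSV layout (access_type, year, sjr, jif) is an assumption for illustration.
import pandas as pd

journals = pd.read_csv("scimago_obgyn_1999_2018.csv")  # hypothetical export

yearly_means = (
    journals
    .groupby(["access_type", "year"])[["sjr", "jif"]]
    .mean()
    .round(2)
)
# e.g., yearly_means.loc[("OA", 1999), "sjr"] would correspond to the 0.17
# reported above, and yearly_means.loc[("SA", 2018), "sjr"] to the 0.78.
print(yearly_means)
```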

Is rapid scientific publication also high quality? Bibliometric analysis of highly disseminated COVID-19 research papers – Khatter – Learned Publishing – Wiley Online Library

Abstract:  The impact of COVID-19 has underlined the need for reliable information to guide clinical practice and policy. This urgency has to be balanced against disruption to journal handling capacity and the continued need to ensure scientific rigour. We examined the reporting quality of highly disseminated COVID-19 research papers using a bibliometric analysis of reporting quality and risk of bias (RoB) among the 250 top-scoring Altmetric Attention Score (AAS) COVID-19 research papers between January and April 2020. Method-specific RoB tools were used to assess quality. After exclusions, 84 studies from 44 journals were included. Forty-three (51%) were case series/studies, and only one was a randomized controlled trial. Most authors were from institutions based in China (n = 44, 52%). The median AAS and impact factor were 2015 (interquartile range [IQR] 1,105–4,051.5) and 12.8 (IQR 5–44.2), respectively. Nine studies (11%) utilized a formal reporting framework, 62 (74%) included a funding statement, and 41 (49%) were at high RoB. This review of the most widely disseminated COVID-19 studies highlights a preponderance of low-quality case series, with few research papers adhering to good standards of reporting. It emphasizes the need for cautious interpretation of research and the increasingly vital responsibility that journals have in ensuring high-quality publications.
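
For readers unfamiliar with the summary statistics quoted here, this is a small numpy sketch of how a median and interquartile range (IQR) are computed; the scores are invented placeholders, not the study's data.

```python
# Median and IQR over a set of Altmetric Attention Scores (placeholder values).
import numpy as np

aas = np.array([310, 1105, 2015, 4051.5, 9800])  # invented example scores

median = np.median(aas)                # middle value: 2015
q1, q3 = np.percentile(aas, [25, 75])  # quartile boundaries: 1105 and 4051.5
print(f"median AAS = {median}, IQR = {q1}-{q3}")
```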

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year, the working group on Recognition and Rewards, part of the Open Science Programme, has reflected on and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”

Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Journal Citation Indicator. Just Another Tool in Clarivate’s Metrics Toolbox? – The Scholarly Kitchen

“The JCI has several benefits when compared against the standard Journal Impact Factor (JIF): It is based on a journal’s citation performance across three full years of citation data rather than a single year’s snapshot of a journal’s performance across the previous two years. Clarivate also promises to provide the JCI score to all journals in its Core Collection, even those journals that do not currently receive a JIF score.

The JCI also avoids the numerator-denominator problem of the JIF, where ALL citations to a journal are counted in the numerator, but only “citable items” (Articles and Reviews) are counted in the denominator. The JCI only focuses on Articles and Reviews.

Finally, like a good indicator, the JCI is easy to interpret. Average performance is set to 1.0, so a journal that receives a JCI score of 2.5 performed two-and-a-half times better than average, while a journal with a score of 0.5 performed only half as well.

To me, JCI’s biggest weakness is Clarivate’s bold claim that it achieved normalization across disciplines….”
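
As a rough illustration of the numerator-denominator point and of score normalization, here is a toy Python sketch contrasting a JIF-style ratio with a JCI-style normalized mean. All counts and expected-citation baselines are invented; the real JCI relies on Clarivate's category-, year-, and document-type-specific expected values, which this sketch only stands in for.

```python
# Toy contrast between a JIF-style ratio and a JCI-style normalized mean.
# All numbers are invented for illustration.
from statistics import mean

# One journal's items in the citation window: (document type, citations received)
items = [("article", 10), ("review", 30), ("editorial", 4)]

# JIF-style: ALL citations go in the numerator, but only "citable items"
# (articles and reviews) are counted in the denominator.
all_citations = sum(c for _, c in items)                       # 44
citable = [c for t, c in items if t in ("article", "review")]  # [10, 30]
jif_like = all_citations / len(citable)                        # 22.0, inflated by the editorial

# JCI-style: average each citable item's citations against an expected
# baseline for its type (baselines assumed here), so 1.0 means "average".
expected = {"article": 8.0, "review": 25.0}
jci_like = mean(c / expected[t] for t, c in items if t in expected)
print(jif_like)            # 22.0
print(round(jci_like, 3))  # (10/8 + 30/25) / 2 = 1.225, i.e. ~22% above average
```

The point is only the shape of the two calculations: the first mixes numerator and denominator populations, while the second normalizes item by item before averaging.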

Triggle et al. (2021) Requiem for impact factors and high publication charges

Chris R. Triggle, Ross MacDonald, David J. Triggle & Donald Grierson (2021) Requiem for impact factors and high publication charges, Accountability in Research, DOI: 10.1080/08989621.2021.1909481

Abstract: Journal impact factors, publication charges and assessment of quality and accuracy of scientific research are critical for researchers, managers, funders, policy makers, and society. Editors and publishers compete for impact factor rankings, to demonstrate how important their journals are, and researchers strive to publish in perceived top journals, despite high publication and access charges. This raises questions of how top journals are identified, whether assessments of impacts are accurate and whether high publication charges borne by the research community are justified, bearing in mind that researchers also collectively provide free peer review to the publishers. Although traditional journals accelerated peer review and publication during the COVID-19 pandemic, preprint servers made a greater impact, with over 30,000 open access articles becoming available, accelerating a trend already seen in other fields of research. We review and comment on the advantages and disadvantages of a range of assessment methods and the way in which they are used by researchers, managers, employers and publishers. We argue that new approaches to assessment are required to provide a realistic and comprehensive measure of the value of research and journals, and we support open access publishing at a modest, affordable price to benefit research producers and consumers.

Are stakeholders measuring the publishing metrics that matter?: Putting research into context

“Perhaps the most fundamental aspect of compiling and implementing more meaningful research metrics that the NISO panelists discussed is the importance of putting data into context. And, as the speakers noted, there are multiple facets of context to consider, including:

The strengths and limitations of different metrics by discipline/subject matter (e.g., some metrics are better suited to certain types of research)
The intended uses and overall strengths and limitations of particular data points (e.g., altmetrics are “indicators” of impact, not measures of quality, and the JIF was never meant to be used to measure the impact of individual articles or scholars)
The cultural context that a researcher is operating within and the opportunities, challenges, and biases they have experienced
How and where a research output fits within scholars’ other professional contributions (e.g., recognizing how individual research outputs are part of broader bodies of work and also measuring the impacts of scholarly outputs that do not fit within traditional publication-based assessment systems) …”

Novelty, Disruption, and the Evolution of Scientific Impact

Abstract:  Since the 1950s, citation impact has been the dominant metric by which science is quantitatively evaluated. But research contributions play distinct roles in the unfolding drama of scientific debate, agreement and advance, and institutions may value different kinds of advances. Computational power, access to citation data and an array of modeling techniques have given rise to a widening portfolio of metrics for extracting different signals about a contribution. Here we unpack the complex, temporally evolving relationship between citation impact and two emerging measures, novelty and disruption, which capture the degree to which science not only influences, but transforms, later work. Novelty captures how research draws upon unusual combinations of prior work. Disruption captures how research comes to eclipse the prior work on which it builds, becoming recognized as a new scientific direction. We demonstrate that: 1) novel papers disrupt existing theories and expand the scientific frontier; 2) novel papers are more likely to become “sleeping beauties” and accumulate citation impact over the long run; 3) novelty can be reformulated as distance in journal embedding spaces to map the moving frontier of science. The evolution of embedding spaces over time reveals how yesterday’s novelty forms today’s scientific conventions, which condition the novelty (and surprise) of tomorrow’s breakthroughs.
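
The "distance in journal embedding spaces" formulation lends itself to a compact sketch. The following Python snippet assumes, purely for illustration, that each cited journal already has a vector embedding and that a paper's novelty is scored as the mean pairwise cosine distance among the journals it cites; the three-dimensional vectors are toy values, not the paper's actual embeddings.

```python
# Novelty as mean pairwise cosine distance among cited journals' embeddings
# (toy vectors; the assumption is that similar journals have nearby vectors).
from itertools import combinations
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def novelty(cited: list[np.ndarray]) -> float:
    """Average cosine distance over all pairs of cited journals."""
    pairs = list(combinations(cited, 2))
    return sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

obgyn_a = np.array([1.0, 0.9, 0.1])  # two journals from one field: nearby vectors
obgyn_b = np.array([0.9, 1.0, 0.0])
ml_j    = np.array([0.0, 0.1, 1.0])  # a journal from a distant field

print(novelty([obgyn_a, obgyn_b]))        # small: a conventional combination
print(novelty([obgyn_a, obgyn_b, ml_j]))  # larger: an unusual combination
```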

The Most Widely Disseminated COVID-19-Related Scientific Publications in Online Media: A Bibliometric Analysis of the Top 100 Articles with the Highest Altmetric Attention Scores

Abstract:  The novel coronavirus disease 2019 (COVID-19) is a global pandemic. This study’s aim was to identify and characterize the top 100 COVID-19-related scientific publications, which had received the highest Altmetric Attention Scores (AASs). Hence, we searched Altmetric Explorer using search terms such as “COVID” or “COVID-19” or “Coronavirus” or “SARS-CoV-2” or “nCoV” and then selected the top 100 articles with the highest AASs. For each article identified, we extracted the following information: the overall AAS, publishing journal, journal impact factor (IF), date of publication, language, country of origin, document type, main topic, and accessibility. The top 100 articles most frequently were published in journals with high (>10.0) IF (n = 67), were published between March and July 2020 (n = 67), were written in English (n = 100), originated in the United States (n = 45), were original articles (n = 59), dealt with treatment and clinical manifestations (n = 33), and had open access (n = 98). Our study provides important information pertaining to the dissemination of scientific knowledge about COVID-19 in online media.
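
As a schematic of the selection and tabulation step this abstract describes (not the authors' actual code or data), here is a small pandas sketch that ranks papers by AAS, keeps the top N, and counts an attribute such as open access status; the column names and rows are invented.

```python
# Rank by Altmetric Attention Score, keep the top N, tabulate an attribute.
# Column names and rows are invented placeholders.
import pandas as pd

papers = pd.DataFrame({
    "title": ["paper A", "paper B", "paper C", "paper D"],
    "aas": [9500, 120, 4300, 2700],
    "open_access": [True, True, False, True],
})

top = papers.nlargest(3, "aas")           # top 3 by AAS for this toy table
print(top["open_access"].value_counts())  # counts of OA vs non-OA in the top set
```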