Journal impact factor gets a sibling that adjusts for scientific field | Science | AAAS

“The new Journal Citation Indicator (JCI) accounts for the substantially different rates of publication and citation in different fields, Clarivate says. But the move is drawing little praise from the critics, who say the new metric remains vulnerable to misunderstanding and misuse….”

Promoting inclusive metrics of success and impact to dismantle a discriminatory reward system in science

Abstract:  Success and impact metrics in science are based on a system that perpetuates sexist and racist “rewards” by prioritizing citations and impact factors. These metrics are flawed and biased against already marginalized groups and fail to accurately capture the breadth of individuals’ meaningful scientific impacts. We advocate shifting this outdated value system to advance science through principles of justice, equity, diversity, and inclusion. We outline pathways for a paradigm shift in scientific values based on multidimensional mentorship and promoting mentee well-being. These actions will require collective efforts supported by academic leaders and administrators to drive essential systemic change.


Impact factor abandoned by Dutch university in hiring and promotion decisions

“A Dutch university says it is formally abandoning the impact factor — a standard measure of scientific success — in all hiring and promotion decisions. By early 2022, every department at Utrecht University in the Netherlands will judge its scholars by other standards, including their commitment to teamwork and their efforts to promote open science, says Paul Boselie, a governance researcher and the project leader for the university’s new Recognition and Rewards scheme. “Impact factors don’t really reflect the quality of an individual researcher or academic,” he says. “We have a strong belief that something has to change, and abandoning the impact factor is one of those changes.” …”

Meet the new Faculty Opinions Score – Faculty Opinions Blog

“Traditional citation metrics, such as the journal impact factor, can frequently act as biased measurements of research quality and contribute to the broken system of research evaluation. Academic institutions and funding bodies are increasingly moving away from relying on citation metrics towards greater use of transparent expert opinion. 

Faculty Opinions has championed this cause for two decades, with over 230k recommendations made by our 8000+ Faculty Members, and we are excited to introduce the next step in our evolution – a formidable mark of research quality – the new Faculty Opinions Score….

The Faculty Opinions Score assigns a numerical value to research publications in Biology and Medicine to quantify their impact and quality compared to other publications in their field.

The Faculty Opinions Score is derived by combining our unique star-rated Recommendations on individual publications, made by world-class experts, with bibliometrics to produce a radically new metric in the research evaluation landscape. 


Key properties of the Faculty Opinions Score: 

•  A score of zero is assigned to articles with no citations and no recommendations.
•  The average score of a set of recommended articles has an expected value of 10. However, articles with many recommendations or highly cited articles may have a much higher score. There is no upper bound.
•  Non-recommended articles generally score lower than recommended articles.
•  Recommendations contribute more to the score than bibliometric performance. In other words, expert recommendations increase an article’s score (much) more than citations do….”
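
The excerpt describes how the score behaves but not its formula. A minimal sketch of a score with these stated properties, written in Python with made-up weights (rec_weight, cite_weight) and assuming the usual 1–3 star scale per recommendation, might look like this; it is an illustration, not the actual Faculty Opinions formula:

```python
# Illustrative sketch only: the real Faculty Opinions Score formula is not
# given in the excerpt above. This toy model simply mirrors the stated
# properties: zero when there are no citations and no recommendations,
# recommendations weighted (much) more heavily than citations, and no
# upper bound.

def toy_article_score(star_ratings, citation_count,
                      rec_weight=3.0, cite_weight=0.1):
    """Combine expert star ratings (assumed 1-3 per recommendation) with citations.

    rec_weight and cite_weight are made-up parameters chosen so that expert
    recommendations move the score far more than citations do.
    """
    if not star_ratings and citation_count == 0:
        return 0.0  # no recommendations and no citations -> score of zero
    recommendation_part = rec_weight * sum(star_ratings)  # dominant term
    bibliometric_part = cite_weight * citation_count      # smaller term
    return recommendation_part + bibliometric_part        # unbounded above

# A modestly recommended, modestly cited article lands near the stated
# expected value of 10; a multiply recommended, heavily cited one scores
# far higher.
print(toy_article_score([1, 2], 25))      # -> 11.5
print(toy_article_score([3, 3, 2], 400))  # -> 64.0
```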

New metric ‘leverages opinions of 8,000 experts’ | Research Information

“Faculty Opinions has introduced a new metric in the research evaluation landscape, leveraging the opinions of more than 8,000 experts. 

The Faculty Opinions Score is designed to be an early indicator of an article’s future impact and a mark of research quality. The company describes the implications for researchers, academic institutions and funding bodies as ‘promising’….”

Game over: empower early career researchers to improve research quality

Abstract:  Processes of research evaluation are coming under increasing scrutiny, with detractors arguing that they have adverse effects on research quality, and that they support a research culture of competition to the detriment of collaboration. Based on three personal perspectives, we consider how current systems of research evaluation lock early career researchers and their supervisors into practices that are deemed necessary to progress academic careers within the current evaluation frameworks. We reflect on the main areas in which changes would enable better research practices to evolve; many align with open science. In particular, we suggest a systemic approach to research evaluation, taking into account its connections to the mechanisms of financial support for the institutions of research and higher education in the broader landscape. We call for more dialogue in the academic world around these issues and believe that empowering early career researchers is key to improving research quality.


Social media attention and citations of published outputs from re-use of clinical trial data: a matched comparison with articles published in the same journals | BMC Medical Research Methodology | Full Text

Abstract:  Background

Data-sharing policies in randomized clinical trials (RCTs) should have an evaluation component. The main objective of this case–control study was to assess the impact of published re-uses of RCT data in terms of media attention (Altmetric) and citation rates.

Methods

Re-uses of RCT data published up to December 2019 (cases) were searched for by two reviewers on 3 repositories (CSDR, YODA project, and Vivli) and matched to control papers published in the same journal. The Altmetric Attention Score (primary outcome), components of this score (e.g. mention of policy sources, media attention) and the total number of citations were compared between these two groups.

Results

89 re-uses were identified: 48 (53.9%) secondary analyses, 34 (38.2%) meta-analyses, 4 (4.5%) methodological analyses and 3 (3.4%) re-analyses. The median (interquartile range) Altmetric Attention Scores were 5.9 (1.3–22.2) for re-uses and 2.8 (0.3–12.3) for controls (p = 0.14). No statistical difference was found for any of the components of the Altmetric Attention Score. The median (interquartile range) numbers of citations were 3 (1–8) for re-uses and 4 (1–11.5) for controls (p = 0.30). Only 6/89 re-uses (6.7%) were cited in a policy source.

Conclusions

Using all available re-uses of RCT data to date from major data repositories, we were not able to demonstrate that re-uses attracted more attention than a matched sample of studies published in the same journals. Small average differences are still possible, as the sample size was limited. However, matching choices have some limitations, so results should be interpreted very cautiously. Also, citations by policy sources for re-uses were rare.
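
The abstract reports medians, interquartile ranges and p-values for the two matched groups but does not name the statistical test in this excerpt. A minimal sketch of one plausible way to run such a comparison, using a nonparametric rank test on made-up Altmetric Attention Scores (the arrays below are not the study's data), could look like this:

```python
# Illustrative only: hypothetical Altmetric Attention Scores for re-use
# papers (cases) and their matched controls. The study's own data and exact
# test are not given in the excerpt.
import numpy as np
from scipy.stats import mannwhitneyu

case_scores = np.array([5.9, 22.2, 1.3, 0.0, 48.0, 3.5, 12.0, 7.1])
control_scores = np.array([2.8, 12.3, 0.3, 0.0, 30.0, 1.1, 9.4, 2.2])

def median_iqr(scores):
    """Median and interquartile range, the summary reported in the abstract."""
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return round(med, 1), (round(q1, 1), round(q3, 1))

stat, p_value = mannwhitneyu(case_scores, control_scores, alternative="two-sided")
print("cases:", median_iqr(case_scores))
print("controls:", median_iqr(control_scores))
print("p =", round(p_value, 2))
```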

All the Research That’s Fit to Print: Open Access and the News Media

Abstract:  The goal of the open access (OA) movement is to help everyone access the scholarly research, not just those who can afford to. However, most studies looking at whether OA has met this goal have focused on whether other scholars are making use of OA research. Few have considered how the broader public, including the news media, uses OA research. This study sought to answer whether the news media mentions OA articles more or less than paywalled articles by looking at articles published from 2010 through 2018 in journals across all four quartiles of the Journal Impact Factor using data obtained through Altmetric.com and the Web of Science. Gold, green and hybrid OA articles all had a positive correlation with the number of news mentions received. News mentions for OA articles did see a dip in 2018, although they remained higher than those for paywalled articles.


Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices | Research Integrity and Peer Review | Full Text

Abstract:  Background

The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.

Methods

We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal’s policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.

Discussion

The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation.
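
The TOP Factor itself is not defined in detail in the abstract. A minimal sketch, assuming the commonly described scheme in which each of the eight modular TOP standards is rated at an implementation level from 0 to 3 and the journal's score is the sum of those levels (the journal ratings below are hypothetical), might look like this:

```python
# Sketch under the assumption stated above: per-standard levels 0-3, summed.
# Standard names follow the eight modular TOP standards; ratings are invented.

TOP_STANDARDS = [
    "data citation",
    "data transparency",
    "analytic methods (code) transparency",
    "research materials transparency",
    "design and analysis reporting",
    "study preregistration",
    "analysis plan preregistration",
    "replication",
]

def top_factor(levels):
    """Sum per-standard implementation levels (0-3); unrated standards count as 0."""
    for standard, level in levels.items():
        if standard not in TOP_STANDARDS or not 0 <= level <= 3:
            raise ValueError(f"unexpected rating: {standard}={level}")
    return sum(levels.get(s, 0) for s in TOP_STANDARDS)

# Hypothetical ratings of one journal's instructions to authors; two coders
# could each produce such a dictionary and their agreement be checked as part
# of the reliability analysis the TRUST protocol describes.
example_policy_ratings = {
    "data transparency": 2,
    "analytic methods (code) transparency": 1,
    "study preregistration": 1,
}
print(top_factor(example_policy_ratings))  # -> 4
```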

Open Science rankings: yes, no, or not this way? A debate on developing and implementing transparency metrics. – JOTE | Journal of Trial and Error

“The Journal of Trial and Error is proud to present an exciting and timely event: a three-way debate on the topic of Open Science metrics, specifically, transparency metrics. Should we develop these metrics? What purposes do they fulfil? How should Open Science practices be encouraged? Are (transparency) rankings the best solution? These questions and more will be addressed in a dynamic and interactive debate with three researchers of different backgrounds: Etienne LeBel (Independent Meta-Scientist and founder of ERC-funded project ‘Curate Science’), Sarah de Rijcke (Professor of Science and Evaluation Studies and director of the Centre for Science and Technology Studies at Leiden University), and Juliëtte Schaafsma (Professor of Cultural Psychology at Tilburg University and fierce critic of rankings and audits). This is an event organized by the Journal of Trial and Error, and supported by the Open Science Community Tilburg, the Centre for Science and Technology Studies (CWTS, Leiden University), and the Open Science Community Utrecht.”


Comparison of subscription access and open access obstetrics and gynecology journals in the SCImago database | Özay | Ginekologia Polska

Abstract:  Objectives: The aim of this study is to compare the annual SJR (SCImago Journal Rank) and to evaluate other parameters that reflect the scientific impact of journals in terms of open access (OA) or subscription access (SA) in the field of obstetrics and gynecology, according to the SCImago database. Material and methods: This study was conducted between September and December 2019 at Near East University. The SCImago Journal & Country Rank database was used to collect information about the journals. We evaluated and compared the changes in the one-year SJR and journal impact factor (JIF) of OA and SA journals. Results: Data from 183 scientific journals in the field of obstetrics and gynecology from the period between 1999 and 2018 were evaluated; 140 of these journals were SA and 43 were OA. The average SJR of OA journals in 1999 was 0.17, while it was 0.38 for SA journals. In 2018, these values were 0.31 and 0.78 for OA and SA journals, respectively. In the comparison of JIF, the average of the OA journals in 1999 was 0.09, while it was 0.66 for SA journals. In 2018, these values were 0.80 and 1.93 for OA and SA journals, respectively. Conclusions: Access to information has become easier due to technological developments, and this will continue to affect the access policies of journals. Despite the disadvantages of predatory journals, the rise of OA journals in terms of number and quality is likely to continue. Key words: open access journal; impact factor; subscription access journal; SCImago; obstetrics; gynecology.

Open Scholarship Support Guide.pdf (Shared) – Adobe Document Cloud

“Steps to Support Open Scholarship

Open scholarship entails a culture shift in how research is conducted in universities. It requires action on the part of university administration, working in concert with faculty, sponsors and disciplinary communities.  Universities should consider steps in three areas:

•  Policies:  Language and guidance should be reviewed for alignment with open scholarship, in particular: (1) academic hiring, review, tenure and promotion (valuing diverse types of research products; metrics that incentivize the open dissemination of articles, data, and other research outputs; and valuing collaborative research); (2) intellectual property (ownership, licensing and distribution of data, software, materials and publications); (3) research data protection (for data to be stored and shared through repositories); (4) attribution (recognizing the full range of contributions); and (5) privacy (ensuring that privacy obligations are met).

•  Services and Training:  Researchers need support to ensure that data and other research objects are managed according to FAIR Principles: findable, accessible, interoperable and reusable.  While the specific solution must be tailored to the discipline and research, common standards, including Digital Object Identifiers (DOIs), must be followed.

•  Infrastructure:  Archival storage is required for data, materials, specimens and publications to permit reuse.  Searchable portals are needed to register research products where they can be located and accessed. Universities can recognize efficiencies by utilizing external resources (including existing disciplinary repositories) and by developing shared resources that span the institution when external resources do not exist.

Presidents and provosts are encouraged to work with their academic senates to create an open scholarship initiative that promotes institution-wide actions supporting open scholarship practices, while remaining sufficiently flexible to accommodate disciplinary differences and norms….”

Is rapid scientific publication also high quality? Bibliometric analysis of highly disseminated COVID-19 research papers – Khatter – Learned Publishing – Wiley Online Library

Abstract:  The impact of COVID-19 has underlined the need for reliable information to guide clinical practice and policy. This urgency has to be balanced against disruption to journal handling capacity and the continued need to ensure scientific rigour. We examined the reporting quality of highly disseminated COVID-19 research papers using a bibliometric analysis examining reporting quality and risk of bias (RoB) amongst the 250 top-scoring Altmetric Attention Score (AAS) COVID-19 research papers between January and April 2020. Method-specific RoB tools were used to assess quality. After exclusions, 84 studies from 44 journals were included. Forty-three (51%) were case series/studies, and only one was a randomized controlled trial. Most authors were from institutions based in China (n = 44, 52%). The median AAS and impact factor were 2015 (interquartile range [IQR] 1,105–4,051.5) and 12.8 (IQR 5–44.2), respectively. Nine studies (11%) utilized a formal reporting framework, 62 (74%) included a funding statement, and 41 (49%) were at high RoB. This review of the most widely disseminated COVID-19 studies highlights a preponderance of low-quality case series, with few research papers adhering to good standards of reporting. It emphasizes the need for cautious interpretation of research and the increasingly vital responsibility that journals have in ensuring high-quality publications.


Evaluation of Open-Access Journals in Obstetrics and Gynecology – Journal of Obstetrics and Gynaecology Canada

Abstract:  A retrospective observational study was conducted to evaluate open-access journals in obstetrics and gynaecology, published between 2011 and 2019. Journals were classified based on their registration in open-access journal directories. Of 176 journals, 47 were not registered. Journals registered in the Directory of Open Access Journals (DOAJ) demonstrated good overall quality, and their journal metrics were significantly higher than those of non-registered journals or journals registered in other directories. The lack of editor names and indexing information on a journal’s website are the most distinctive features of non-registered journals. Non-registration in an open-access journal directory indicates a lack of transparency and may ultimately indicate that a journal is predatory.


Open Science: read our statement – News – CIVIS – A European Civic University

“CIVIS universities promote the development of new research indicators to complement the conventional indicators for research quality and impact, so as to do justice to open science practices and, going beyond purely bibliometric indicators, also to promote non-bibliometric research products. In particular, the metrics should extend the conventional bibliometric indicators in order to cover new forms of research outputs, such as research data and research software….

Incentives and Rewards for researchers to engage in Open Science activities 

Research career evaluation systems should fully acknowledge open science activities. CIVIS members encourage the inclusion of Open Science practices in their assessment mechanisms for rewards, promotion, and/or tenure, along with the Open Science Career Assessment Matrix….”