We moeten af van telzucht in de wetenschap [We must move away from the counting obsession in science] – ScienceGuide

From Google’s English:  “On July 19, ScienceGuide published an open letter from 171 academics who are concerned about the new Recognition and Rewards system for scientists. The signatories warn that the new ‘Recognition and Rewards’ will lead to more arbitrariness and a loss of quality. This will jeopardize the international top position of Dutch science, the writers argue, and will harm young academics in particular.  …

It is noticeable that the young scientists whom the letter speaks of do not seem to have been involved in drafting it. It is also striking that the signatories of the open letter are themselves mainly at the top of the academic career ladder; 142 of the 171 signatories are in fact professors. As Young Science in Transition, PhD Candidates Network Netherlands, PostDocNL, a large number of members of De Jonge Akademies, and many other young researchers, we do not agree with the message they are proclaiming. On the contrary, sentiments like these worry us when it comes to our current and future careers. Young academics are eagerly awaiting a new system of Recognition and Rewards. …”

Nieuwe Erkennen en waarderen schaadt Nederlandse wetenschap [New Recognition and Rewards harms Dutch science] – ScienceGuide

From Google’s English:  “A group of 171 scientists, including 142 professors, warns in this open letter that the new Recognition and Rewards will harm Dutch science. The medical, natural, and life sciences in particular are in danger of losing their international top position under the new Recognition and Rewards, because it is no longer clear how scientists are judged.

An article was recently published in Nature about the new policy of Utrecht University, whereby the impact factors of scientific journals are no longer included in the evaluation of scientists. Measurable performance figures have been abandoned in favor of an ‘open science’ system that elevates the team above the individual.

In this letter, the 171 academics warn that the new ‘Recognition and Rewards’ will lead to more arbitrariness and lower quality, and that this policy will have major consequences for the international recognition and appreciation of Dutch scientists. The negative consequences will fall on young researchers in particular, who will no longer be able to compete internationally.  …”

Why the new Recognition & Rewards actually boosts excellent science

“During the last few weeks, several opinion pieces have appeared questioning the new Recognition and Rewards (R&R) and Open Science in Dutch academia. On July 13, the TU/e Cursor published interviews with professors who question the usefulness of a new vision on R&R (1). A day later, on July 14, the chairman of the board of NWO compared science to top sport, with an emphasis on sacrifice and top performance (2), a line of thinking that fits the traditional way of R&R in academia. On July 19, an opinion piece by 171 assistant, associate, and full professors was published (3), this time in ScienceGuide, again questioning the new vision on R&R. These articles, all published within a week, show that as the new R&R gains traction within universities, established scholars are questioning its usefulness and effectiveness. Like others before us (4), we would like to respond. …”

Jourchain: using blockchain to avoid questionable journals | SpringerLink

Abstract:  Scholarly publishing currently faces an upsurge in low-quality, questionable “predatory/hijacked” journals published by those whose only goal is profit. Although there are discussions in the literature warning about them, most offer only a few suggestions on how to avoid these journals, and most proposed solutions are not generalizable or have other weaknesses. Here, we use a novel information technology, blockchains, to expose and prevent the problems produced by questionable journals. The work presented here thus sheds light on the advantages of blockchain for producing safe, fraud-free scholarly publishing.
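The abstract does not spell out the mechanism, but the core idea of a blockchain registry can be illustrated with a minimal hash-chained ledger of verified journal records; the record fields and validation rule below are illustrative assumptions, not Jourchain’s actual design.

```python
# Minimal sketch of a hash-chained ledger of journal records.
# Record fields and the validation rule are illustrative assumptions.
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class JournalLedger:
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def add(self, record: dict) -> None:
        prev = self.chain[-1][1] if self.chain else "0" * 64
        self.chain.append((record, block_hash(record, prev)))

    def is_valid(self) -> bool:
        """Recompute every hash; a tampered record breaks the chain."""
        prev = "0" * 64
        for record, h in self.chain:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = JournalLedger()
ledger.add({"issn": "1234-5678", "title": "Example Journal", "verified_by": "registrar"})
print(ledger.is_valid())  # True; silently editing any past record makes this False
```

Because each block’s hash depends on all of its predecessors, a hijacker could not quietly alter a journal’s verified record without invalidating the rest of the chain.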

Is rapid scientific publication also high quality? Bibliometric analysis of highly disseminated COVID-19 research papers – Khatter – Learned Publishing – Wiley Online Library

“Key points

An examination of highly visible COVID-19 research articles reveals that 55% could be considered at risk of bias.
Only 11% of the evaluated early studies on COVID-19 adhered to good standards of reporting such as PRISMA or CONSORT.
There was no correlation between quality of reporting and either the journal Impact Factor or the article Altmetric Attention Score in early studies on COVID-19.
Most highly visible early articles on COVID-19 were published in The Lancet and the Journal of the American Medical Association.”
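The “no correlation” finding is the kind of result that can be checked with a rank correlation between a reporting-quality score and each visibility metric. A minimal sketch follows; the file and column names are hypothetical, not the study’s actual data.

```python
# Hypothetical sketch: test for association between reporting quality
# and visibility metrics. File and column names are assumptions.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("covid_articles.csv")  # hypothetical dataset

for metric in ["journal_impact_factor", "altmetric_attention_score"]:
    rho, p = spearmanr(df["reporting_quality_score"], df[metric])
    print(f"{metric}: rho={rho:.2f}, p={p:.3f}")
```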

A Study of the Quality of Wikidata | DeepAI

Abstract:  Wikidata has been increasingly adopted by many communities for a wide variety of applications, which demand high-quality knowledge to deliver successful results. In this paper, we develop a framework to detect and analyze low-quality statements in Wikidata by shedding light on the current practices exercised by the community. We explore three indicators of data quality in Wikidata, based on: 1) community consensus on the currently recorded knowledge, assuming that statements that have been removed and not added back are implicitly agreed to be of low quality; 2) statements that have been deprecated; and 3) constraint violations in the data. We combine these indicators to detect low-quality statements, revealing challenges with duplicate entities, missing triples, violated type rules, and taxonomic distinctions. Our findings complement ongoing efforts by the Wikidata community to improve data quality, aiming to make it easier for users and editors to find and correct mistakes.
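The paper’s three indicators lend themselves to a simple mechanical combination: flag a statement as low quality if any indicator fires. The sketch below illustrates that logic with hypothetical statement fields; the actual study works over Wikidata’s full edit history and constraint reports.

```python
# Sketch of combining the three quality indicators described above.
# Statement fields are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Statement:
    removed_and_not_readded: bool  # indicator 1: implicit community consensus
    deprecated: bool               # indicator 2: deprecated rank
    violates_constraint: bool      # indicator 3: constraint violation

def is_low_quality(s: Statement) -> bool:
    """Flag a statement if any of the three indicators fires."""
    return s.removed_and_not_readded or s.deprecated or s.violates_constraint

print(is_low_quality(Statement(False, True, False)))  # True
```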

Meet the new Faculty Opinions Score – Faculty Opinions Blog

“Traditional citation metrics, such as the journal impact factor, can frequently act as biased measurements of research quality and contribute to the broken system of research evaluation. Academic institutions and funding bodies are increasingly moving away from relying on citation metrics towards greater use of transparent expert opinion. 

Faculty Opinions has championed this cause for two decades, with over 230k recommendations made by our 8000+ Faculty Members, and we are excited to introduce the next step in our evolution – a formidable mark of research quality – the new Faculty Opinions Score….

The Faculty Opinions Score assigns a numerical value to research publications in Biology and Medicine, quantifying their impact and quality compared to other publications in their field.

The Faculty Opinions Score is derived by combining our unique star-rated Recommendations on individual publications, made by world-class experts, with bibliometrics to produce a radically new metric in the research evaluation landscape. 

Key properties of the Faculty Opinions Score: 

A score of zero is assigned to articles with no citations and no recommendations. 
The average score of a set of recommended articles has an expected value of 10. However, articles with many recommendations or highly cited articles may have a much higher score. There is no upper bound. 
Non-recommended articles generally score lower than recommended articles. 
Recommendations contribute more to the score than bibliometric performance. In other words, expert recommendations increase an article’s score (much) more than citations do….”
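Faculty Opinions has not published the exact formula, but the listed properties can be illustrated with a toy score in which star-rated recommendations dominate citations and the recommended set is rescaled to a mean of 10. Every weight below is invented for illustration; this is not Faculty Opinions’ method.

```python
# Toy score with the stated properties; weights and the rescaling
# rule are invented, not Faculty Opinions' actual formula.
import math

def raw_score(stars_total: int, citations: int) -> float:
    """Recommendation stars dominate; citations contribute far less."""
    if stars_total == 0 and citations == 0:
        return 0.0  # stated property: no citations, no recommendations -> 0
    return 5.0 * stars_total + math.log1p(citations)

# Rescale so the *recommended* set has an expected value of 10.
recommended = [raw_score(3, 40), raw_score(7, 200), raw_score(1, 5)]
k = 10.0 / (sum(recommended) / len(recommended))

for r in recommended:
    print(round(k * r, 1))  # mean is 10; no upper bound on individual scores
```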

New metric ‘leverages opinions of 8,000 experts’ | Research Information

“Faculty Opinions has introduced a new metric in the research evaluation landscape, leveraging the opinions of more than 8,000 experts. 

The Faculty Opinions Score is designed to be an early indicator of an article’s future impact and a mark of research quality. The company describes the implications for researchers, academic institutions and funding bodies as ‘promising’….”

Public feedback on preprints can unlock their full potential to accelerate science.

“Public preprint review can help authors improve their paper, find new collaborators, and gain visibility. It also helps readers find interesting and relevant papers and contextualize them with the reactions of experts in the field. Never has this been more apparent than during the COVID-19 pandemic, when rapid communication and expert commentary have both been in high demand. Yet most feedback on preprints is currently exchanged privately.

Join ASAPbio in partnership with DORA, HHMI, and the Chan Zuckerberg Initiative to discuss how to create a culture of constructive public review and feedback on preprints….”

Game over: empower early career researchers to improve research quality

Abstract:  Processes of research evaluation are coming under increasing scrutiny, with detractors arguing that they have adverse effects on research quality, and that they support a research culture of competition to the detriment of collaboration. Based on three personal perspectives, we consider how current systems of research evaluation lock early career researchers and their supervisors into practices that are deemed necessary to progress academic careers within the current evaluation frameworks. We reflect on the main areas in which changes would enable better research practices to evolve; many align with open science. In particular, we suggest a systemic approach to research evaluation, taking into account its connections to the mechanisms of financial support for the institutions of research and higher education in the broader landscape. We call for more dialogue in the academic world around these issues and believe that empowering early career researchers is key to improving research quality.

 

Wikipedia: The Most Reliable Source on the Internet? | PCMag

“[Q] Which brings us to Wikipedia. Many of us consult it, slightly wary of its bias, depth, and accuracy. But, as you’ll be sharing in your speech at Intellisys, the content actually ends up being surprisingly reliable. How does that happen?

[A] The answer to “should you believe Wikipedia?” isn’t simple. In my book I argue that the content of a popular Wikipedia page is actually the most reliable form of information ever created. Think about it—a peer-reviewed journal article is reviewed by three experts (who may or may not actually check every detail), and then is set in stone. The contents of a popular Wikipedia page might be reviewed by thousands of people. If something changes, it is updated. Those people have varying levels of expertise, but if they support their work with reliable citations, the results are solid. On the other hand, a less popular Wikipedia page might not be reliable at all….”

eLife announces new approach to publishing in medicine | For the press | eLife

“eLife is excited to announce a new approach to peer review and publishing in medicine, including public health and health policy.

One of the most notable impacts of the COVID-19 pandemic has been the desire to share important results and discoveries quickly, widely and openly, leading to rapid growth of the preprint server medRxiv. Despite the benefits of rapid, author-driven publication in accelerating research and democratising access to results, the growing number of clinical preprints means that individuals and institutions may act quickly on new information before it is adequately scrutinised.

To address this challenge, eLife is bringing its system of editorial oversight by practicing clinicians and clinician-investigators, and its rigorous, consultative peer review, to preprints. The journal’s goal is to produce ‘refereed preprints’ on medRxiv that provide readers and potential users with a detailed assessment of the research, comments on its potential impact, and perspectives on its use. By providing this rich and rapid evaluation of new results, eLife hopes that peer-reviewed preprints, rather than the journal impact factor, will become a reliable indicator of quality in medical research.”

FAIR Principles for Research Software (FAIR4RS Principles) | RDA

“Research software is a fundamental and vital part of research worldwide, yet there remain significant challenges to software productivity, quality, reproducibility, and sustainability. Improving the practice of scholarship is a common goal of the open science, open source software and FAIR (Findable, Accessible, Interoperable and Reusable) communities, but improving the sharing of research software has not yet been a strong focus of the FAIR community.

To improve the FAIRness of research software, the FAIR for Research Software (FAIR4RS) Working Group has sought to understand how to apply the FAIR Guiding Principles for scientific data management and stewardship to research software, bringing together existing and new community efforts. Many of the FAIR Guiding Principles can be directly applied to research software by treating software and data as similar digital research objects. However, specific characteristics of software — such as its executability, composite nature, and continuous evolution and versioning — make it necessary to revise and extend the principles.

This document presents the first version of the FAIR Principles for Research Software (FAIR4RS Principles). It is an outcome of the FAIR for Research Software Working Group (FAIR4RS WG).

The FAIR for Research Software Working Group is jointly convened as an RDA Working Group, FORCE11 Working Group, and Research Software Alliance (ReSA) Task Force.”
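To make the principles concrete: much of software FAIRness comes down to machine-readable metadata that records identity, version, license, and dependencies. One common vehicle is a codemeta.json file; the sketch below writes a minimal one, with all project details as placeholders rather than a real package.

```python
# Minimal sketch: emit machine-readable software metadata (CodeMeta),
# one common way to make research software Findable and Reusable.
# All project details are placeholders.
import json

metadata = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "example-analysis-tool",                      # placeholder project
    "version": "1.2.0",                                   # versioning: each release identifiable
    "identifier": "https://doi.org/10.5281/zenodo.0000000",   # placeholder persistent identifier
    "codeRepository": "https://github.com/example/example-analysis-tool",  # placeholder repo
    "programmingLanguage": "Python",
    "license": "https://spdx.org/licenses/MIT",           # clear reuse terms
    "softwareRequirements": ["numpy>=1.24"],              # composite nature: declared dependencies
}

with open("codemeta.json", "w") as f:
    json.dump(metadata, f, indent=2)
```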

Assessing number and quality of urology open access journals… : Current Urology

Abstract:  Background/Aims: 

There is clear evidence that publishing research in an open access (OA) journal or under an OA model is associated with higher impact, in terms of number of reads and citation rates. The development of OA journals and their quality are poorly studied in the field of urology. In this study, we aim to assess the number of OA urology journals, their quality in terms of CiteScore, percent cited, and quartiles, and their scholarly output during the period from 2011 to 2018.

Methods: 

We obtained data about journals from www.scopus.com and filtered the list for urology journals, covering all Scopus-indexed journals during the period from 2011 to 2018. For each journal, we extracted the following indices: CiteScore, citations, scholarly output, and SCImago quartile. We then analyzed the differences in these quality indices between OA and non-OA urology journals.
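The comparison the authors describe can be sketched as a per-year group comparison of CiteScore between OA and non-OA journals. The file and column names below are assumptions, not the authors’ actual dataset.

```python
# Sketch of the described analysis: compare CiteScore between OA and
# non-OA urology journals per year. File/column names are assumptions.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("scopus_urology_2011_2018.csv")  # hypothetical Scopus export

for year, group in df.groupby("year"):
    oa = group.loc[group["is_open_access"], "citescore"]
    non_oa = group.loc[~group["is_open_access"], "citescore"]
    t, p = ttest_ind(oa, non_oa, equal_var=False)  # Welch's t-test
    print(f"{year}: OA mean={oa.mean():.2f}, "
          f"non-OA mean={non_oa.mean():.2f}, p={p:.3f}")
```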

Results: 

The number of urology journals increased from 66 in 2011 to 99 in 2018. The number of OA urology journals increased from only 10 (15.2%) in 2011 to 33 (33.3%) in 2018. The number of OA journals in quartile 1 (the top 25%) increased from only 1 in 2011 to 5 in 2018. Non-OA urology journals had a significantly higher CiteScore than OA journals until 2015, after which the mean difference in CiteScore became smaller and statistically insignificant.

Conclusion: 

The number and quality of OA journals in the field of urology have increased over the last few years. Despite this increase, non-OA urology journals still have higher quality and output.