The UGC-CARE initiative: Indian academia’s quest for research and publishing integrity | First Monday

Abstract:  This paper discusses the reasons for the emergence of predatory publications in India, engendered by mandates of higher educational institutions that require a stipulated number of research publications for employment and promotion. Predatory journals have eclipsed the merits of open access publishing, compromised ethical practices, and left the research community groping for benchmarks of research integrity and publication ethics. To fight the menace of predatory publications, the University Grants Commission, India established the “Consortium for Academic Research and Ethics” (UGC-CARE) in 2018 to promote and benchmark research integrity and publication ethics among Indian academia. The present paper discusses the UGC-CARE initiative, its structure, its objectives and, specifically, the “UGC-CARE Reference List of Quality Journals” (UGC-CARE list), and finally the challenges it faces.

 

Credibility of scientific information on social media: Variation by platform, genre and presence of formal credibility cues | Quantitative Science Studies | MIT Press

Abstract:  Responding to calls to take a more active role in communicating their research findings, scientists are increasingly using open online platforms, such as Twitter, to engage in science communication or to publicize their work. Given the ease with which misinformation spreads on these platforms, it is important for scientists to present their findings in a manner that appears credible. To examine the extent to which the online presentation of science information relates to its perceived credibility, we designed and conducted two surveys on Amazon’s Mechanical Turk. In the first survey, participants rated the credibility of science information on Twitter compared with the same information in other media, and in the second, participants rated the credibility of tweets with modified characteristics: presence of an image, text sentiment, and the number of likes/retweets. We find that similar information about scientific findings is perceived as less credible when presented on Twitter compared to other platforms, and that perceived credibility increases when presented with recognizable features of a scientific article. On a platform as widely distrusted as Twitter, use of these features may allow researchers who regularly use Twitter for research-related networking and communication to present their findings in the most credible formats.

 

Efficiency of “Publish or Perish” Policy—Some Considerations Based on the Uzbekistan Experience

Abstract:  Researchers from Uzbekistan are leading the global list of publications in predatory journals. The current paper reviews the principles of implementation of the “publish or perish policy” in Uzbekistan with an overarching aim of detecting the factors that are pushing more and more scholars to publish the results of their studies in predatory journals. Scientific publications have historically been a cornerstone in the development of science. For the past five decades, the quantity of publications has become a common indicator for determining academic capacity. Governments and institutions are increasingly employing this indicator as an important criterion for promotion and recruitment; simultaneously, researchers are being awarded Ph.D. and D.Sc. degrees for the number of articles they publish in scholarly journals. Many talented academics have had a pay rise or promotion declined due to a short or nonexistent bibliography, which leads to significant pressure on academics to publish. The “publish or perish” principle has become a trend in academia and the key performance indicator for habilitation in Uzbekistan. The present study makes a case for re-examining the criteria set by the Supreme Attestation Commission of the Republic of Uzbekistan for candidates applying for Ph.D. and D.Sc. as well as faculty promotion requirements in the light of current evidence for the deteriorating academic performance of scholars.

 

Under pressure, Uzbek researchers flood academia with nonsense | Eurasianet

“Anyone tracking scholarship on Central Asia is sure to be swamped by Uzbek research in unreputable publications

A new paper has found why: Under pressure from Uzbekistan’s government, academics are succumbing to predatory journals – publishers that, for a fee, overlook best practices like peer review or editing. Many of the researchers are forced to publish far more often than would be feasible if the bar were higher, and the quality shows: Uzbek academics are global leaders in spreading research that some scholars would explicitly call “bullshit.”

“Publish or perish”: It’s a global problem among academics, with eye-opening salience in Uzbekistan, find Bahtiyor Eshchanov of the Center for Economic Research and Reforms in Tashkent and his three Uzbek co-authors in a new paper in Publications, a peer-reviewed journal about scholarly publishing….”

Characteristics of scholars who review for predatory and legitimate journals: linkage study of Cabells Scholarly Analytics and Publons data | BMJ Open

Abstract

Objectives To describe and compare the characteristics of scholars who reviewed for predatory or legitimate journals in terms of their sociodemographic characteristics and reviewing and publishing behaviour.

Design Linkage of random samples of predatory journals and legitimate journals of the Cabells Scholarly Analytics’ journal lists with the Publons database, employing the Jaro-Winkler string metric. Descriptive analysis of sociodemographic characteristics and reviewing and publishing behaviour of scholars for whom reviews were found in the Publons database.

Setting Peer review of journal articles.

Participants Reviewers who submitted peer review reports to Publons.

Measurements Numbers of reviews for predatory journals and legitimate journals per reviewer. Academic age of reviewers, the total number of reviews, number of publications and number of reviews and publications per year.

Results Analyses included 183 743 unique reviews submitted to Publons by 19 598 reviewers. Six thousand and seventy-seven reviews were for 1160 predatory journals (3.31% of all reviews) and 177 666 reviews for 6403 legitimate journals (96.69%). Most scholars never submitted reviews for predatory journals (90.0% of all scholars); few scholars (7.6%) reviewed occasionally or rarely (1.9%) for predatory journals. Very few scholars submitted reviews predominantly or exclusively for predatory journals (0.26% and 0.35%, respectively). The latter groups of scholars were of younger academic age and had fewer publications and reviews than the first groups. Regions with the highest shares of predatory reviews were sub-Saharan Africa (21.8% reviews for predatory journals), Middle East and North Africa (13.9%) and South Asia (7.0%), followed by North America (2.1%), Latin America and the Caribbean (2.1%), Europe and Central Asia (1.9%) and East Asia and the Pacific (1.5%).

Conclusion To tackle predatory journals, universities, funders and publishers need to consider the entire research workflow and educate reviewers on concepts of quality and legitimacy in scholarly publishing.

Is MDPI a predatory publisher? – Paolo Crosetto

“So, is MDPI predatory or not? I think it has elements of both. I would name their methods aggressive rent extracting, rather than predatory. And I also think that their current methods & growth rate are likely to make them shift towards more predatory over time.

MDPI publishes good papers in good journals, but it also employs some strategies that are proper to predatory publishers. I think that the success of MDPI in recent years is due to the creative combination of these two apparently contradictory strategies. One — the good journals with high quality — creates a rent that the other — spamming hundreds of colleagues to solicit papers, an astonishing increase in Special Issues, publishing papers as fast as possible — exploits. This strategy makes a lot of sense for MDPI, which shows strong growth rates and is en route to becoming the largest open access publisher in the world. But I don’t think it is a sustainable strategy. It suffers from basic collective action problems that might deal a lot of damage to MDPI first, and, most importantly, to scientific publishing in general….

A predatory publisher is a journal that would publish anything — usually in return for money. MDPI rejection rates make this argument hard to sustain. Yet, MDPI is using some of the same techniques of predatory journals….”

Publishers Care about the Version of Record, Do Researchers? – The Scholarly Kitchen

“It was against this backdrop that I read Exploring Researcher Preference for the Version of Record, which reported on research Springer Nature conducted in collaboration with ResearchGate. It is perhaps obvious to caveat that it is in Springer Nature’s interests to use this study to reinforce the value of the VOR, a central position of a recent keynote by CEO Frank Vrancken Peeters at the APE 2021 conference.

The study was conducted “in situ” and leveraged the Springer Nature syndication pilot project that posted VOR articles for access on the ResearchGate platform. As Mithu Lucraft, Director for Content Marketing Strategy of the Springer Nature Group and one of the study’s co-authors, explained to me, the survey was presented to ResearchGate users who were logged in and who had interacted with at least one Springer Nature publication in the 60 days before the survey went live in October 2020.

Importantly, survey participants were not only asked to choose which version of an article they prefer but also which versions they would feel comfortable using for different purposes. In many cases, participants indicated that multiple different versions would be acceptable for a given use, which indicates that a preprint or accepted manuscript can substitute for the VOR in some use cases but perhaps not all. …”

Imposters and Impersonators in Preprints: How do we trust authors in Open Science? – The Scholarly Kitchen

“The prevalence of fictitious authorship across preprints is still unknown, and the writers’ motivations are opaque in most cases. This nefarious behavior within the open science arena raises many questions in need of discussion….”

Has the pandemic changed public attitudes about science? | Impact of Social Sciences

“At a structural level, the public faith in science’s trustworthiness and value can also be ‘future proofed’ through ongoing initiatives to make scientific research open and transparent, enhanced efforts to ensure a more diverse and inclusive scientific workforce and other efforts to improve science from within. Initiatives working in this direction include increased adoption of open science policies by research funders and global public policy that promotes more socially responsible research and innovation. Indeed, this moment of strong public support may be the perfect opportunity for long-needed structural reforms to make research more socially responsible and sustainable. In other words, it’s time to fix the roof while the sun is shining!”

Guest Post – Putting Publications into Context with the DocMaps Framework for Editorial Metadata – The Scholarly Kitchen

“Trust in academic journal articles is based on similar expectations. Journals carry out editorial processes from peer review to plagiarism checks. But these processes are highly heterogeneous in how, when, and by whom they are undertaken. In many cases, it’s not always readily apparent to the outside observer that they take place at all. And as new innovations in peer review and the open research movement lead to new experiments in how we produce and distribute research products, understanding what events take place is an increasingly important issue for publishers, authors, and readers alike.

With this in mind, the DocMaps project (a joint effort of the Knowledge Futures Group, ASAPbio, and TU Graz, supported by the Howard Hughes Medical Institute), has been working with a Technical Committee to develop a machine-readable, interoperable and extensible framework for capturing valuable context about the processes used to create research products such as journal articles. This framework is being designed to capture as much (or little) contextual data about a document as desired by the publisher: from a minimum assertion that an event took place, to a detailed history of every edit to a document….”

Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty · Issue 2.4, Fall 2020

Abstract:  Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research. But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty? To gain insight into this issue, we would need to know how those views are shaped by media coverage of it, but none of the emergent research on public views of reproducibility and replicability in science considers that question. We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science. Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science. It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.

Credibility of preprints: an interdisciplinary survey of researchers | Royal Society Open Science

Abstract:  Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal’s reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.

PsyArXiv Preprints | Questionable and open research practices: attitudes and perceptions among quantitative communication researchers

Abstract:  Recent contributions have questioned the credibility of quantitative communication research. While questionable research practices (QRPs) are believed to be widespread, evidence for this claim is primarily derived from other disciplines. Before change in communication research can happen, it is important to document the extent to which QRPs are used and whether researchers are open to the changes proposed by the so-called open science agenda. We conducted a large survey among authors of papers published in the top-20 journals in communication science in the last ten years (N=1039). A non-trivial percentage of researchers report using one or more QRPs. While QRPs are generally considered unacceptable, researchers perceive them to be common among their colleagues. At the same time, we find optimism about the use of open science practices in communication research. We end with a series of recommendations outlining what journals, institutions and researchers can do moving forward.

Center for Open Science: Impact Report 2020

“The credibility of science has taken center stage in 2020. A raging pandemic. Partisan interests. Economic and health consequences. Misinformation everywhere. An amplified desire for certainty about what will happen and how to address it. In this climate, all public health and economic research will be politicized. All findings are understood through a political lens. When the findings are against partisan interests, the scientists are accused of reporting the outcomes they want and avoiding the ones they don’t. When the findings are aligned with partisan interests, they are accepted immediately and their uncertainty ignored. Politicization can seem like a black hole inexorably sucking in the scientific community and making science just another source of information — its credibility based on agreement with one’s pre-existing ideology. All is not lost. Science has a protective force against the forces of politicization: transparency….”