Is MDPI a predatory publisher? – Paolo Crosetto

“So, is MDPI predatory or not? I think it has elements of both. I would name their methods aggressive rent extracting, rather than predatory. And I also think that their current methods & growth rate are likely to make them shift towards more predatory over time.

MDPI publishes good papers in good journals, but it also employs some strategies that are proper to predatory publishers. I think that the success of MDPI in recent years is due to the creative combination of these two apparently contradictory strategies. One — the good journals with high quality — creates a rent that the other — spamming hundreds of colleagues to solicit papers, an astonishing increase in Special Issues, publishing papers as fast as possible — exploits. This strategy makes a lot of sense for MDPI, which shows strong growth rates and is en route to becoming the largest open access publisher in the world. But I don’t think it is a sustainable strategy. It suffers from basic collective action problems that might deal a lot of damage to MDPI first, and, most importantly, to scientific publishing in general….

A predatory publisher is a journal that would publish anything — usually in return for money. MDPI rejection rates make this argument hard to sustain. Yet, MDPI is using some of the same techniques of predatory journals….”

Publishers Care about the Version of Record, Do Researchers? – The Scholarly Kitchen

“It was against this backdrop that I read Exploring Researcher Preference for the Version of Record, which reported on research Springer Nature conducted in collaboration with ResearchGate. It is perhaps obvious to caveat that it is in Springer Nature’s interests to use this study to reinforce the value of the VOR, a central position of a recent keynote by CEO Frank Vrancken Peeters at the APE 2021 conference.

The study was conducted “in situ” and leveraged the Springer Nature syndication pilot project that posted VOR articles for access on the ResearchGate platform. As Mithu Lucraft, Director for Content Marketing Strategy of the Springer Nature Group and one of the study’s co-authors, explained to me, the survey was presented to ResearchGate users who were logged in and who had interacted with at least one Springer Nature publication in the 60 days prior to the survey going live in October 2020.

Importantly, survey participants were not only asked to choose which version of an article they prefer but also which versions they would feel comfortable using for different purposes. In many cases, participants indicated that multiple different versions would be acceptable for a given use, which indicates that a preprint or accepted manuscript can substitute for the VOR in some use cases but perhaps not all. …”

Imposters and Impersonators in Preprints: How do we trust authors in Open Science? – The Scholarly Kitchen

“The prevalence of fictitious authorship across preprints is still unknown, and the writers’ motivations are opaque in most cases. This nefarious behavior within the open science arena raises many questions in need of discussion….”

Has the pandemic changed public attitudes about science? | Impact of Social Sciences

“At a structural level, the public faith in science’s trustworthiness and value can also be ‘future proofed’ through ongoing initiatives to make scientific research open and transparent, enhanced efforts to ensure a more diverse and inclusive scientific workforce and other efforts to improve science from within. Initiatives working in this direction include increased adoption of open science policies by research funders and global public policy that promotes more socially responsible research and innovation. Indeed, this moment of strong public support may be the perfect opportunity for long-needed structural reforms to make research more socially responsible and sustainable. In other words, it’s time to fix the roof while the sun is shining!”

Guest Post – Putting Publications into Context with the DocMaps Framework for Editorial Metadata – The Scholarly Kitchen

“Trust in academic journal articles is based on similar expectations. Journals carry out editorial processes from peer review to plagiarism checks. But these processes are highly heterogeneous in how, when, and by whom they are undertaken. In many cases, it’s not always readily apparent to the outside observer that they take place at all. And as new innovations in peer review and the open research movement lead to new experiments in how we produce and distribute research products, understanding what events take place is an increasingly important issue for publishers, authors, and readers alike.

With this in mind, the DocMaps project (a joint effort of the Knowledge Futures Group, ASAPbio, and TU Graz, supported by the Howard Hughes Medical Institute) has been working with a Technical Committee to develop a machine-readable, interoperable, and extensible framework for capturing valuable context about the processes used to create research products such as journal articles. This framework is being designed to capture as much (or as little) contextual data about a document as desired by the publisher: from a minimum assertion that an event took place, to a detailed history of every edit to a document….”

Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty · Issue 2.4, Fall 2020

Abstract:  Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research. But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty? To gain insight into this issue, we would need to know how those views are shaped by media coverage of it, but none of the emergent research on public views of reproducibility and replicability in science considers that question. We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science. Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science. It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.

Credibility of preprints: an interdisciplinary survey of researchers | Royal Society Open Science

Abstract:  Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal’s reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.

PsyArXiv Preprints | Questionable and open research practices: attitudes and perceptions among quantitative communication researchers

Abstract:  Recent contributions have questioned the credibility of quantitative communication research. While questionable research practices are believed to be widespread, evidence for this claim is primarily derived from other disciplines. Before change in communication research can happen, it is important to document the extent to which QRPs are used and whether researchers are open to the changes proposed by the so-called open science agenda. We conducted a large survey among authors of papers published in the top-20 journals in communication science in the last ten years (N=1039). A non-trivial percent of researchers report using one or more QRPs. While QRPs are generally considered unacceptable, researchers perceive QRPs to be common among their colleagues. At the same time, we find optimism about the use of open science practices in communication research. We end with a series of recommendations outlining what journals, institutions and researchers can do moving forward.

Center for Open Science: Impact Report 2020

“The credibility of science has center stage in 2020. A raging pandemic. Partisan interests. Economic and health consequences. Misinformation everywhere. An amplified desire for certainty on what will happen and how to address it. In this climate, all public health and economic research will be politicized. All findings are understood through a political lens. When the findings are against partisan interests, the scientists are accused of reporting the outcomes they want and avoiding the ones they don’t. When the findings are aligned with partisan interests, they are accepted immediately and uncertainty ignored. Politicization can seem like a black hole inexorably sucking in the scientific community and making the science just another source of information—its credibility based on agreement with one’s pre-existing ideology. All is not lost. Science has a protective force against the forces of politicization, transparency….”

Problematizing ‘predatory publishing’: A systematic review of factors shaping publishing motives, decisions, and experiences – Mills – – Learned Publishing – Wiley Online Library

Abstract:  This article systematically reviews recent empirical research on the factors shaping academics’ knowledge about, and motivations to publish work in, so-called ‘predatory’ journals. Growing scholarly evidence suggests that the concept of ‘predatory publishing’ – used to describe deceptive journals exploiting vulnerable researchers – is inadequate for understanding the complex range of institutional and contextual factors that shape the publication decisions of individual academics. This review identifies relevant empirical studies on academics who have published in ‘predatory’ journals, and carries out a detailed comparison of 16 papers that meet the inclusion criteria. While most start from Beall’s framing of ‘predatory’ publishing, their empirical findings move the debate beyond normative assumptions about academic vulnerability. They offer particular insights into the academic pressures on scholars at the periphery of a global research economy. This systematic review shows the value of a holistic approach to studying individual publishing decisions within specific institutional, economic and political contexts. Rather than assume that scholars publishing in ‘questionable’ journals are naïve, gullible or lacking in understanding, fine-grained empirical research provides a more nuanced conceptualization of the pressures and incentives shaping their decisions. The review suggests areas for further research, especially in emerging research systems in the global South.

MetaArXiv Preprints | Publication by association: the Covid-19 pandemic reveals relationships between authors and editors

Abstract:  During the COVID-19 pandemic, the rush to scientific and political judgments on the merits of hydroxychloroquine was fuelled by dubious papers which may have been published because the authors were not independent from the practices of the journals in which they appeared. This example leads us to consider a new type of illegitimate publishing entity, “self-promotion journals” which could be deployed to serve the instrumentalisation of productivity-based metrics, with a ripple effect on decisions about promotion, tenure, and grant funding.


New resource for books added to Think. Check. Submit. | Think. Check. Submit.

“Further to our announcement in October, the Steering Committee of Think. Check. Submit. is delighted to announce a new addition to its resources: a checklist for authors wishing to verify the reliability and trustworthiness of a book or monograph publisher.

Drawing on existing expertise from within the group and from experiences of our newest partner, OAPEN, the checklist for books offers sound advice along the lines of the recommendations already offered by the journal checklist….”

Research published in pay-and-publish journals won’t count: UGC panel | India News, The Indian Express

“Suggesting sweeping reforms to promote the quality of research in India, a UGC panel has recommended that publication of research material in “predatory” journals or presentations in conferences organised by their publishers should not be considered for academic credit in any form.

They include selection, confirmation, promotion, appraisal, and award of scholarships and degrees, the panel has suggested. The committee, which submitted its 14-page report to the UGC recently, has also recommended changes in PhD and MPhil programmes, including a new board for social sciences research….

Last week, the UGC launched the Consortium of Academic and Research Ethics (CARE) to approve a new official list of academic publications….”