Imposters and Impersonators in Preprints: How do we trust authors in Open Science? – The Scholarly Kitchen

“The prevalence of fictitious authorship across preprints is still unknown, and the writers’ motivations are opaque in most cases. This nefarious behavior within the open science arena raises many questions in need of discussion….”

Has the pandemic changed public attitudes about science? | Impact of Social Sciences

“At a structural level, the public faith in science’s trustworthiness and value can also be ‘future proofed’ through ongoing initiatives to make scientific research open and transparent, enhanced efforts to ensure a more diverse and inclusive scientific workforce and other efforts to improve science from within. Initiatives working in this direction include increased adoption of open science policies by research funders and global public policy that promotes more socially responsible research and innovation. Indeed, this moment of strong public support may be the perfect opportunity for long-needed structural reforms to make research more socially responsible and sustainable. In other words, it’s time to fix the roof while the sun is shining!”

Guest Post – Putting Publications into Context with the DocMaps Framework for Editorial Metadata – The Scholarly Kitchen

“Trust in academic journal articles is based on similar expectations. Journals carry out editorial processes from peer review to plagiarism checks. But these processes are highly heterogeneous in how, when, and by whom they are undertaken. In many cases, it’s not always readily apparent to the outside observer that they take place at all. And as new innovations in peer review and the open research movement lead to new experiments in how we produce and distribute research products, understanding what events take place is an increasingly important issue for publishers, authors, and readers alike.

With this in mind, the DocMaps project (a joint effort of the Knowledge Futures Group, ASAPbio, and TU Graz, supported by the Howard Hughes Medical Institute), has been working with a Technical Committee to develop a machine-readable, interoperable and extensible framework for capturing valuable context about the processes used to create research products such as journal articles. This framework is being designed to capture as much (or little) contextual data about a document as desired by the publisher: from a minimum assertion that an event took place, to a detailed history of every edit to a document….”

Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty · Issue 2.4, Fall 2020

Abstract:  Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research. But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty? To gain insight into this issue, we would need to know how those views are shaped by media coverage of it, but none of the emergent research on public views of reproducibility and replicability in science considers that question. We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science. Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science. It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.

Credibility of preprints: an interdisciplinary survey of researchers | Royal Society Open Science

Abstract:  Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal’s reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.



PsyArXiv Preprints | Questionable and open research practices: attitudes and perceptions among quantitative communication researchers

Abstract:  Recent contributions have questioned the credibility of quantitative communication research. While questionable research practices (QRPs) are believed to be widespread, evidence for this claim is primarily derived from other disciplines. Before change in communication research can happen, it is important to document the extent to which QRPs are used and whether researchers are open to the changes proposed by the so-called open science agenda. We conducted a large survey among authors of papers published in the top-20 journals in communication science in the last ten years (N=1039). A non-trivial percentage of researchers report using one or more QRPs. While QRPs are generally considered unacceptable, researchers perceive QRPs to be common among their colleagues. At the same time, we find optimism about the use of open science practices in communication research. We end with a series of recommendations outlining what journals, institutions and researchers can do moving forward.

Center for Open Science: Impact Report 2020

“The credibility of science has center stage in 2020. A raging pandemic. Partisan interests. Economic and health consequences. Misinformation everywhere. An amplified desire for certainty on what will happen and how to address it. In this climate, all public health and economic research will be politicized. All findings are understood through a political lens. When the findings are against partisan interests, the scientists are accused of reporting the outcomes they want and avoiding the ones they don’t. When the findings are aligned with partisan interests, they are accepted immediately and uncertainty ignored. Politicization can seem like a black hole inexorably sucking in the scientific community and making the science just another source of information—its credibility based on agreement with one’s pre-existing ideology. All is not lost. Science has a protective force against the forces of politicization, transparency….”

Problematizing ‘predatory publishing’: A systematic review of factors shaping publishing motives, decisions, and experiences – Mills – Learned Publishing – Wiley Online Library

Abstract:  This article systematically reviews recent empirical research on the factors shaping academics’ knowledge about, and motivations to publish work in, so-called ‘predatory’ journals. Growing scholarly evidence suggests that the concept of ‘predatory publishing’ – used to describe deceptive journals exploiting vulnerable researchers – is inadequate for understanding the complex range of institutional and contextual factors that shape the publication decisions of individual academics. This review identifies relevant empirical studies on academics who have published in ‘predatory’ journals, and carries out a detailed comparison of 16 papers that meet the inclusion criteria. While most start from Beall’s framing of ‘predatory’ publishing, their empirical findings move the debate beyond normative assumptions about academic vulnerability. They offer particular insights into the academic pressures on scholars at the periphery of a global research economy. This systematic review shows the value of a holistic approach to studying individual publishing decisions within specific institutional, economic and political contexts. Rather than assume that scholars publishing in ‘questionable’ journals are naïve, gullible or lacking in understanding, fine-grained empirical research provides a more nuanced conceptualization of the pressures and incentives shaping their decisions. The review suggests areas for further research, especially in emerging research systems in the global South.


MetaArXiv Preprints | Publication by association: the Covid-19 pandemic reveals relationships between authors and editors

Abstract:  During the COVID-19 pandemic, the rush to scientific and political judgments on the merits of hydroxychloroquine was fuelled by dubious papers which may have been published because the authors were not independent from the practices of the journals in which they appeared. This example leads us to consider a new type of illegitimate publishing entity, “self-promotion journals” which could be deployed to serve the instrumentalisation of productivity-based metrics, with a ripple effect on decisions about promotion, tenure, and grant funding.



New resource for books added to Think. Check. Submit. | Think. Check. Submit.

“Further to our announcement in October, the Steering Committee of Think. Check. Submit. is delighted to announce a new addition to its resources: a checklist for authors wishing to verify the reliability and trustworthiness of a book or monograph publisher.

Drawing on existing expertise from within the group and from experiences of our newest partner, OAPEN, the checklist for books offers sound advice along the lines of the recommendations already offered by the journal checklist….”

Research published in pay-and-publish journals won’t count: UGC panel | India News,The Indian Express

“Suggesting sweeping reforms to promote the quality of research in India, a UGC panel has recommended that publication of research material in “predatory” journals or presentations in conferences organised by their publishers should not be considered for academic credit in any form.

They include selection, confirmation, promotion, appraisal, and award of scholarships and degrees, the panel has suggested. The committee, which submitted its 14-page report to the UGC recently, has also recommended changes in PhD and MPhil programmes, including a new board for social sciences research….

Last week, the UGC launched the Consortium of Academic and Research Ethics (CARE) to approve a new official list of academic publications….”

International observatory targets predatory publishers | Times Higher Education (THE)

“A coalition of scientists, funders, publishing societies and librarians believes that the formation of an international observatory to study predatory journals will lead to improved advice on how to tackle them.

The initiative aims to fill the void left by the closure three years ago of Jeffrey Beall’s blacklist of predatory publishers. Since then, many others have set up their own blacklists and checklists, but there is “a lack of unity across the community about what predatory journals are”, said Agnes Grudniewicz, assistant professor at the Telfer School of Management at the University of Ottawa.

The coalition’s biggest achievement so far is to create a consensus definition of predatory journals. It defines predatory journals and publishers as “entities that prioritise self-interest at the expense of scholarship” and “are characterised by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices”….

Creating an international observatory – potentially funded by research funders, charities, publishers and research institutions – was a less contentious solution than relying on blacklists or “whitelists” of approved providers, said Dr Grudniewicz. Research led by Michaela Strinzel, from the Swiss National Science Foundation, found that 34 journals listed as predatory by Professor Beall appeared on an approved list of titles run by the Directory of Open Access Journals (DOAJ), while 31 DOAJ titles were deemed predatory by subscription service Cabells….”

Predatory journals: no definition, no defence

“The consensus definition reached was: “Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.” …”

Identifying publications in questionable journals in the context of performance-based research funding

Abstract:  In this article we discuss the five yearly screenings for publications in questionable journals which have been carried out in the context of the performance-based research funding model in Flanders, Belgium. The Flemish funding model expanded from 2010 onwards, with a comprehensive bibliographic database for research output in the social sciences and humanities. Along with an overview of the procedures followed during the screenings for articles in questionable journals submitted for inclusion in this database, we present a bibliographic analysis of the publications identified. First, we show how the yearly number of publications in questionable journals has evolved over the period 2003–2016. Second, we present a disciplinary classification of the identified journals. In the third part of the results section, three authorship characteristics are discussed: multi-authorship, the seniority – or experience level – of authors in general and of the first author in particular, and the relation of the disciplinary scope of the journal (cognitive classification) with the departmental affiliation of the authors (organizational classification). Our results regarding yearly rates of publications in questionable journals indicate that awareness of the risks of questionable journals does not lead to a turn away from open access in general. The number of publications in open access journals rises every year, while the number of publications in questionable journals decreases from 2012 onwards. We find further that both early career and more senior researchers publish in questionable journals. We show that the average proportion of senior authors contributing to publications in questionable journals is somewhat higher than that for publications in open access journals.
In addition, this paper yields insight into the extent to which publications in questionable journals pose a threat to the public and political legitimacy of a performance-based research funding system of a western European region. We include concrete suggestions for those tasked with maintaining bibliographic databases and screening for publications in questionable journals.