Paper mills research | COPE: Committee on Publication Ethics

“Recommended actions

A major education exercise is needed to ensure that Editors are aware of the problem of paper mills, and Editors/editorial staff are trained in identifying the fake papers.
Continued investment in tools and systems to pick up suspect papers as they are submitted.
Engagement with institutions and funders to review incentives for researchers to publish valid papers and not use services that will give quick but fake publication.
Investigation of protocols that can be put in place to impede paper mills from succeeding in their goals.
Review the retraction process to take account of the unique features of paper mill papers.
Investigate how to ensure retraction notices are applied to all copies of a paper such as preprint servers and article repositories….”

Reproducibility and Research Integrity – Science, Innovation and Technology Committee

“The United Kingdom is experiencing the largest-ever increase in public investment in research and development, with the Government R&D budget set to reach £20 billion a year by 2024/5. The creation of the new Department for Science, Innovation and Technology has been advanced by the Government as heralding an increased focus on research and innovation—seen to be among Britain’s main strengths.

At the same time, there have been increasing concerns raised that the integrity of some scientific research is questionable because of failures to be able to reproduce the claimed findings of some experiments or analyses of data and therefore confirm that the original researcher’s conclusions were justified. Some people have described this as a ‘reproducibility crisis’.

In 2018, our predecessor committee published a report ‘Research Integrity’. Some of the recommendations of that report were implemented—such as the establishment of a national research integrity committee.

This report looks in particular at the issue of the reproducibility of research….

We welcome UKRI’s policy of requiring open access to research that it funds, but we recommend that this should go further in requiring the recipients of research grants to share data and code alongside the publications arising from the funded research….”

Identifying the characteristics of excellent peer reviewers by using Publons | Emerald Insight

Abstract:  Purpose

This study aimed to identify the characteristics of excellent peer reviewers by using Publons.com (an open and free online peer review website).

Design/methodology/approach

Reviewers of the clinical medicine field on Publons were selected as the sample (n = 1,864). A logistic regression model was employed to examine the data.

Findings

The results revealed that reviewers’ verified reviews, verified editor records, and whether they were Publons mentors had significant and positive associations with being an excellent peer reviewer, while their research performance (including the number of articles indexed by Web of Science (WOS), citations, H-index and highly cited researcher status), gender, words per review, number of current/past editorial boards, whether they had experience of post-publication review on Publons, and whether they were Publons Academy graduates had no significant association with being an excellent peer reviewer.

Originality/value

This study could help journals find excellent peer reviewers from free and open online platforms.
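
The model described in the abstract is a standard binary logistic regression: excellent-reviewer status is regressed on reviewer characteristics such as verified reviews, verified editor records, mentor status, and research-performance measures. As a rough, hypothetical sketch of that kind of analysis (the data file, column names, and outcome coding below are invented for illustration and are not the study’s materials):

```python
# Illustrative sketch only: models "excellent reviewer" (binary) as a function of
# reviewer characteristics like those described in the abstract. The input file and
# column names are hypothetical, not taken from the study's materials.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("publons_reviewers.csv")  # hypothetical export of reviewer profiles

predictors = [
    "verified_reviews", "verified_editor_records", "is_publons_mentor",
    "wos_articles", "citations", "h_index", "is_highly_cited",
    "gender_male", "words_per_review", "editorial_boards",
    "did_post_pub_review", "is_academy_graduate",
]
X = sm.add_constant(df[predictors])   # add an intercept term
y = df["is_excellent_reviewer"]       # binary outcome (1 = excellent reviewer)

model = sm.Logit(y, X).fit()
print(model.summary())                # coefficients and p-values per characteristic
```

In a model of this form, exponentiating a coefficient gives the change in the odds of being classed as an excellent reviewer associated with a one-unit change in that characteristic.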

AI Is Tearing Wikipedia Apart

“As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary. The concern is that machine-generated content has to be balanced with a lot of human review and would otherwise overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated….”

Hindawi shuttering four journals overrun by paper mills – Retraction Watch

“Hindawi will cease publishing four journals that it identified as “heavily compromised by paper mills.” 

The open access publisher announced today in a blog post that it will continue to retract articles from the closed titles, which are Computational and Mathematical Methods in Medicine, Computational Intelligence and Neuroscience, the Journal of Healthcare Engineering, and the Journal of Environmental and Public Health….”

Principles of Diamond Open Access Publishing: a draft proposal – the diamond papers

Introduction

The Action Plan for Diamond Open Access outlines a set of priorities to develop sustainable, community-driven, academic-led and -owned scholarly communication. Its goal is to create a global federation of Diamond Open Access (Diamond OA) journals and platforms around shared principles, guidelines, and quality standards while respecting their cultural, multilingual and disciplinary diversity. It proposes a definition of Diamond OA as a scholarly publication model in which journals and platforms do not charge fees to either authors or readers. Diamond OA is community-driven, academic-led and -owned, and serves a wide variety of generally small-scale, multilingual, and multicultural scholarly communities. 

Still, Diamond OA is often seen as a mere business model for scholarly publishing: no fees for authors or readers. However, Diamond OA can be better characterized by a shared set of values and principles that go well beyond the business aspect. These distinguish Diamond OA communities from other approaches to scholarly publishing. It is therefore worthwhile to spell out these values and principles, so they may serve as elements of identification for Diamond OA communities. 

The principles formulated below are intended as a first draft. They are not cast in stone, and are meant to inspire discussion and to evolve as a living document that will crystallize over the coming months. Many of these principles are not exclusive to Diamond OA communities. Some are borrowed or adapted from the more general 2019 Good Practice Principles for scholarly communication services defined by Sparc and COAR, or go back to the 2016 Vienna Principles. Others have been carefully worked out in more detail by the FOREST Framework for Values-Driven Scholarly Communication in a self-assessment format for scholarly communities. Additional references can be added in the discussion.

The formulation of these principles has benefited from many conversations over the years with various members of the Diamond community now working together in the Action Plan for Diamond Open Access, cOAlition S, the CRAFT-OA and DIAMAS projects, the Fair Open Access Alliance (FOAA), Linguistics in Open Access (LingOA), the Open Library of Humanities, OPERAS, SciELO, Science Europe, and Redalyc-Amelica. This document attempts to embed these valuable contributions into principles defining the ethos of Diamond OA publishing.

Wiley Removes Goodin as Editor of the Journal of Political Philosophy (Updated) | Daily Nous

“Robert Goodin, the founding and longtime editor of the Journal of Political Philosophy, has been removed from his position at the journal by its publisher, Wiley….

So far, there has been no official explanation offered as to why Goodin was fired….

Anna Stilz (Princeton), a member of the Journal of Political Philosophy editorial board and editor-in-chief of Philosophy & Public Affairs, shared parts of an email she sent to fellow editorial board members.

Like many of you, I wrote earlier today to resign from Wiley’s Editorial Board…  But now I’d just like to second [the complaint about] Wiley’s unreasonable demands and to add my perspective as Editor-in-Chief of Philosophy and Public Affairs, another Wiley-owned journal.

Wiley has recently signed a number of major open-access agreements: this means that increasingly, they get their revenue through author fees for each article they publish (often covered now by public grant agencies), rather than library subscriptions. Their current company-wide strategy for maximizing revenue is to force the journals they own to publish as many articles as possible to generate maximum author fees. Where Editors refuse to do that, they exert all the pressure they can, up to and including dismissal, as in this case. Though I am not privy to the details of Bob’s communications with Wiley, I can say that P&PA has experienced similar demands. A few years back we only succeeded in getting them to back down by threatening to file a lawsuit. They were quiet for a while, but recently their demands have begun to escalate again.

All political philosophers and theorists who care about the journals in our field have an interest in showing Wiley that it can’t get away with this….”

To Preprint or Not to Preprint: Experience and Attitudes of Researchers Worldwide

Abstract:  The pandemic has underlined the significance of open science and spurred further growth of preprinting. Nevertheless, preprinting has been adopted at varying rates across different countries/regions. To investigate researchers’ experience with and attitudes toward preprinting, we conducted a survey of authors of research papers published in 2021 or 2022. We find that respondents in the US and Europe had a higher level of familiarity with and adoption of preprinting than those in China and the rest of the world. Respondents in China were most worried about the lack of recognition for preprinting and the risk of getting scooped. US respondents were very concerned about premature media coverage of preprints, the reliability and credibility of preprints, and public sharing of information before peer review. Respondents identified integration of preprinting in journal submission processes as the most important way to promote preprinting.

Scientific research is deteriorating | Science & Tech | EL PAÍS English

“The field of scientific research is deteriorating because of the way the system is set up. Researchers do the research – financed with public funds – and then the public institutions that they work for pay the big scientific publishers several times, for reviewing and publishing submissions. Simultaneously, the researchers also review scientific papers for free, while companies like Clarivate or the Shanghai Ranking draft their lists, telling everyone who are the good guys (and leaving out the people who, apparently, aren’t worth consideration).

In the last 30 years – since we’ve been living with the internet – we’ve altered the ways in which we communicate, buy, teach, learn and even flirt. And yet, we continue to finance and evaluate science in the same way as in the last century. Young researchers – underpaid and pressured by the system – are forced to spend time trying to get into a “Top 40” list, rather than working in their laboratories and making positive changes in the world.

As the Argentines say: “The problem isn’t with the pig, but with the person who feeds it.” Consciously or unconsciously, we all feed this anachronistic and ineffective system, which is suffocated by the deadly embrace between scientific journals and university rankings. Our governments and institutions fill the coffers of publishers and other companies, who then turn around and sell us their products and inform us (for a price) about what counts as quality….

Despite the issues, there’s certainly reason to be optimistic: although we scientists are victims (and accomplices) of the current system, we’re also aware of its weaknesses. We want to change this reality.

After a long debate – facilitated by the Open Science unit of the European Commission – the Coalition for Advancing Research Assessment (COARA) has been created. In the last four months, more than 500 institutions have joined COARA, which – along with other commitments – will avoid the use of rankings in the evaluation of research. COARA is a step forward to analyze – in a coherent, collective, global and urgent manner – the reform of research evaluation. This will help us move away from an exclusively quantitative evaluation system of journals, towards a system that includes other research products and indicators, as well as qualitative narratives that define the specific contributions of researchers across all disciplines….”

The potential of open education resources | Research Information

“OERs are here to stay. But the rate of OER adoption, quality, accessibility, discoverability, standardisation, sustainability, and inclusivity very much lies in the hands of you and I: participants in the collaborative community of publishers, institutions, educators, learners, and policy makers.

If there ever was a ‘team-effort’ required in the industry, this is it. After more than 20 years working with top consortia, libraries, ministries, hospitals, and corporations within research and education, I have never been more excited. The industry is immersed in an influx of discourse and innovation across the entire educational spectrum.

Innovation is critical, but we must innovate together to achieve true openness. Within the OER framework, there are currently many questions we do not have defined answers for. Questions like: how can we work with publishers to ensure OERs are not a threat to traditional publishing? How do we support the immense efforts of authors and content creators so content quality remains a priority? How can we work effectively with policy makers to protect intellectual property rights?

The grey area of illegal content gateways tells us one thing: there is a global issue of accessibility and affordability. Emerging regions, where schools and institutions are underfunded, are impacted the most. As we take steps closer to the Sustainable Development Goals, we must work together and look at alternative ways of collaborating….”

Current concerns on journal article with preprint: Korean Journal of Internal Medicine perspectives

Abstract:  Preprints are preliminary research reports that have not yet been peer-reviewed. They have been widely adopted to promote the timely dissemination of research across many scientific fields. In August 1991, Paul Ginsparg launched an electronic bulletin board intended to serve a few hundred colleagues working in a subfield of theoretical high-energy physics, thus launching arXiv, the first and largest preprint platform. Additional preprint servers have since been implemented in different academic fields, such as BioRxiv (2013, Biology; www.biorxiv.org) and medRxiv (2019, Health Science; www.medrxiv.org). While preprint availability has made valuable research resources accessible to the general public, thus bridging the gap between academic and non-academic audiences, it has also facilitated the spread of unsupported conclusions through various media channels. Issues surrounding the preprint policies of a journal must be addressed, ultimately, by editors and include the acceptance of preprint manuscripts, allowing the citation of preprints, maintaining a double-blind peer review process, changes to the preprint’s content and authors’ list, scoop priorities, commenting on preprints, and preventing the influence of social media. Editors must be able to deal with these issues adequately, to maintain the scientific integrity of their journal. In this review, the history, current status, and strengths and weaknesses of preprints as well as ongoing concerns regarding journal articles with preprints are discussed. An optimal approach to preprints is suggested for editorial board members, authors, and researchers.

Data sharing upon request and statistical consistency errors in psychology: A replication of Wicherts, Bakker and Molenaar (2011) | PLOS ONE

Abstract:  Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not common practice yet. The reasons for not sharing data are myriad: Some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. For this explanation, it would be interesting to know whether authors that do not share data genuinely made more errors than authors who do share data. Wicherts, Bakker and Molenaar (2011) examined errors that can be discovered based on the published manuscript only, because it is impossible to reanalyze unavailable data. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set that originated from Vanpaemel et al. (2015) on data sharing upon request for reanalysis, to replicate the findings in Wicherts et al. (2011). We applied statcheck for the detection of consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Everything taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
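
statcheck, the tool named in the abstract, is an R package that extracts APA-style test results from papers and recomputes their p-values. The snippet below is only a minimal Python illustration of the underlying consistency check, not the statcheck implementation: recompute a p-value from a reported t statistic and degrees of freedom and compare it with the p-value reported in the paper.

```python
# Minimal illustration of the consistency check that tools like statcheck automate:
# recompute a p-value from a reported t statistic and degrees of freedom and compare
# it with the reported p-value. This is not the statcheck implementation.
from scipy import stats

def check_t_test(t_value: float, df: int, reported_p: float,
                 two_tailed: bool = True, tol: float = 0.005) -> bool:
    """Return True if the reported p-value is consistent with t and df."""
    p = stats.t.sf(abs(t_value), df)   # one-tailed tail probability
    if two_tailed:
        p *= 2
    return abs(p - reported_p) <= tol

# Example: "t(28) = 2.20, p = .04" -> recomputed p is about .036, so consistent.
print(check_t_test(t_value=2.20, df=28, reported_p=0.04))
```

The tolerance used above is an arbitrary placeholder; real checkers account for rounding of the reported statistics rather than applying a fixed cutoff.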

We won’t beat predatory journals by blacklisting them | Times Higher Education (THE)

“The Web of Science’s recent decision to delist two journals published by the Switzerland-based open access publisher MDPI, among dozens of others, follows the appearance of a list of more than 400 allegedly predatory MDPI journals on the predatoryreports.org website several weeks earlier.

While some welcomed Predatory Reports’ move, I did not. As someone who has studied so-called predatory publishing for almost a decade, I believe that creating such lists doesn’t promote integrity or trust in science. Likewise, it will not solve a key reason for the creation of journals of questionable quality: the pressure on academics to produce more and more papers….

But when discussing predatory journals, let us not reduce this phenomenon to “publishing fake results” or “providing low-quality peer review”. Such phenomena also happen in journals published by the most prestigious publishing houses. What the concept of predatory journals actually reveals is the deep inequalities between the scientific working conditions in countries close to the “centre” of global science, such as the UK and US, and those on its periphery.

All lists of predatory journals have concentrated almost entirely on English-language journals published in non-English-speaking countries. As a result, their usefulness is limited because although many science policy instruments promote English publications, the lion’s share of researchers still publish in their local languages….”

Anchoring effects in the assessment of papers: An empirical survey of citing authors | PLOS ONE

Abstract:  In our study, we have empirically studied the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in the question of whether the assessment of a paper can be influenced by numerical information that acts as an anchor (e.g. citation impact). We have undertaken a survey of corresponding authors with an available email address in the Web of Science database. The authors were asked to assess the quality of papers that they cited in previous papers. Some authors were assigned to three treatment groups that received further information alongside the cited paper: citation impact information, information on the publishing journal (journal impact factor), or a numerical access code to enter the survey. The control group did not receive any further numerical information. We are interested in whether possible adjustments in the assessments can be produced not only by quality-related information (citation impact or journal impact) but also by numbers that are not related to quality, i.e. the access code. Our results show that the quality assessments of papers seem to depend on the citation impact information of single papers. The other information (anchors), such as an arbitrary number (an access code) and journal impact information, did not play an (important) role in the assessments of papers. The results point to a possible anchoring bias caused by insufficient adjustment: it seems that respondents assessed cited papers differently when they observed paper impact values in the survey. We conclude that initiatives aiming at reducing the use of journal impact information in research evaluation either were already successful or overestimated the influence of this information.
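
As a hypothetical sketch of how ratings might be compared across such anchor conditions (the data, group labels, and model below are invented for illustration and are not the authors’ analysis), one could regress quality ratings on treatment-group indicators with the no-anchor control group as the reference category:

```python
# Hypothetical sketch: compare mean quality ratings across anchor conditions
# (control, citation-impact anchor, journal-impact anchor, access-code anchor).
# Data and labels are invented for illustration; this is not the authors' analysis.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.DataFrame({
    "rating":    [4, 5, 3, 5, 6, 5, 4, 6, 5, 4, 5, 4],
    "condition": ["control", "control", "control",
                  "citation", "citation", "citation",
                  "journal", "journal", "journal",
                  "code", "code", "code"],
})

# OLS with the control group as the reference category: each coefficient estimates
# how far that anchor condition's mean rating sits from the control-group mean.
model = smf.ols(
    "rating ~ C(condition, Treatment(reference='control'))", data=ratings
).fit()
print(model.summary())
```

A significant coefficient for one anchor condition but not the others would correspond to the pattern reported in the abstract, where only citation impact information appeared to shift assessments.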