PsyArXiv Preprints | Paper mills: a novel form of publishing malpractice affecting psychology

“Psychology journals are not immune to targeting by paper mills. Difficulties in obtaining peer reviewers have led many journals, such as this one, to ask authors to recommend peer reviewers. This creates a crack in the defences of a journal against fraud, if it is combined with lack of editorial oversight. This case illustrates the benefits of open peer review in detecting fraud….”

Evaluation of publication bias for 12 clinical trials of molnupiravir to treat SARS-CoV-2 infection in 13,694 patients | Research Square

Abstract:  Introduction:

During the COVID-19 pandemic, Merck Sharp and Dohme (MSD) acquired the global licensing rights for molnupiravir. MSD allowed Indian manufacturers to produce the drug under voluntary license. Indian companies conducted local clinical trials to evaluate the efficacy and safety of molnupiravir.

Methods

Searches of the Clinical Trials Registry-India (CTRI) were conducted to find registered trials of molnupiravir in India. Subsequent investigations were performed to assess which clinical trials had been presented or published.

Results

According to the CTRI, 12 randomised trials of molnupiravir were conducted in India, in 13,694 patients, starting in late May 2021. By July 2022, none of the 12 trials had been published; one had been presented at a medical conference, and two had been announced in press releases suggesting failure of treatment. Results from three trials were shared with the World Health Organization. One of these three trials had many unexplained results, with effects of treatment significantly different from the MSD MOVE-OUT trial in a similar population.

Discussion

The lack of results runs counter to established practices and leaves a situation where approximately 90% of the global data on molnupiravir has not been published in any form. Access to patient-level databases is required to investigate risks of bias or medical fraud.

Who games metrics and rankings? Institutional niches and journal impact factor inflation – ScienceDirect

Abstract:  Ratings and rankings are omnipresent and influential in contemporary society. Individuals and organizations strategically respond to incentives set by rating systems. We use academic publishing as a case study to examine organizational variation in responses to influential metrics. The Journal Impact Factor (JIF) is a prominent metric linked to the value of academic journals, as well as to the career prospects of researchers. Since scholars, institutions, and publishers alike have strong interests in affiliating with high-JIF journals, strategic behaviors to ‘game’ the JIF metric are prevalent. Strategic self-citation is a common tactic employed to inflate JIF values. Based on empirical analyses of academic journals indexed in the Web of Science, we examine institutional characteristics conducive to strategic self-citation for JIF inflation. Journals disseminated by for-profit publishers, with lower JIFs, published in academically peripheral countries, and with more recent founding dates were more likely to exhibit JIF-inflating self-citation patterns. Findings reveal the importance of status and institutional logics in influencing metrics-gaming behaviors, as well as how metrics can affect work outcomes in different types of institutions. While quantitative rating systems affect many who are being evaluated, certain types of people and organizations are more prone to being influenced by rating systems than others.
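The mechanism being gamed is easy to state: the two-year JIF is the citations received this year to the journal’s items from the previous two years, divided by the citable items it published in those years, so citations a journal arranges to itself count at full weight. A minimal sketch of the comparison such detection rests on, with invented numbers (this illustrates the arithmetic only, not the authors’ method):

```python
def jif(citations: int, citable_items: int) -> float:
    """Two-year impact factor: citations received in year y to items
    published in y-1 and y-2, divided by citable items from y-1 and y-2."""
    return citations / citable_items

# Hypothetical journal: 200 citable items over two years, 500 incoming
# citations, of which 150 are journal self-citations.
total_citations, self_citations, citable = 500, 150, 200

print(f"Reported JIF:           {jif(total_citations, citable):.2f}")                   # 2.50
print(f"JIF without self-cites: {jif(total_citations - self_citations, citable):.2f}")  # 1.75
print(f"Self-citation share:    {self_citations / total_citations:.0%}")                # 30%
```

A large gap between the two figures, or a self-citation share far above field norms, is the kind of pattern a study like this can flag as JIF-inflating.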

Exclusive: PLOS ONE to retract more than 100 papers for manipulated peer review – Retraction Watch

“In March, an editor at PLOS ONE noticed something odd among a stack of agriculture manuscripts he was handling. One author had submitted at least 40 manuscripts over a 10-month period, far more than expected from any one person.

The editor told the ethics team at the journal about the anomaly, and they started an investigation. Looking at the author lists and academic editors who managed peer review for the papers, the team found that some names kept popping up repeatedly. 

Within a month, the initial list of 50 papers under investigation expanded to more than 300 submissions received since 2020 – about 100 of them already published – with concerns about improper authorship and conflicts of interest that compromised peer review. 

“It definitely shot up big red flags for us when we started to see the number of names and their publication volumes,” Renee Hoch, managing editor of PLOS’s publication ethics team, told Retraction Watch. “This is probably our biggest case that we’ve seen in several years.”

The journal’s action on the published papers begins today, Retraction Watch has learned, with the retraction of 20 articles. Action on the rest will follow in batches about every two weeks as the editors finish their follow-up work on specific papers. Corresponding authors on the papers to be retracted today who responded to our request for comment said they disagreed with the retractions, and disputed that they had relationships with the editors who handled their papers, among other protests….”

Questionable research practices among researchers in the most research-productive management programs – Kepes – Journal of Organizational Behavior – Wiley Online Library

Abstract:  Questionable research practices (QRPs) among researchers have been a source of concern in many fields of study. QRPs are often used to enhance the probability of achieving statistical significance, which affects the likelihood of a paper being published. Using a sample of researchers from 10 top research-productive management programs, we compared hypotheses tested in dissertations to those tested in journal articles derived from those dissertations to draw inferences concerning the extent of engagement in QRPs. Results indicated that QRPs related to changes in sample size and covariates were associated with unsupported dissertation hypotheses becoming supported in journal articles. Researchers also tended to exclude unsupported dissertation hypotheses from journal articles. Likewise, results suggested that many article hypotheses may have been created after the results were known (i.e., HARKed). Articles from prestigious journals contained a higher percentage of potentially HARKed hypotheses than those from less well-regarded journals. Finally, articles published in prestigious journals were associated with more QRP usage than those in less prestigious journals. QRPs increase the percentage of supported hypotheses and result in effect sizes that likely overestimate population parameters. As such, results reported in articles published in our most prestigious journals may be less credible than previously believed.

Springer Nature and Université Grenoble Alpes release free enhanced integrity software to tackle fake scientific papers | Corporate Affairs Homepage | Springer Nature

“Springer Nature today announces the release of PySciDetect, its next generation open source research integrity software to identify fake research. Developed in-house with Slimmer AI and released in collaboration with Dr Cyril Labbé from Université Grenoble Alpes, PySciDetect is available for all publishers and those within the academic community to download and use. …

Since launch, SciDetect has been used by a number of publishers and members of the academic community. For Springer Nature, it has scanned over 3.8 million journal articles and over 2.5 million book chapters. Available as open source software, PySciDetect expands on Springer Nature’s commitment to being an active partner to the community, supporting a collaborative effort to detect fraudulent scientific research and protect the integrity of the academic record.”
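The announcement does not document PySciDetect’s interface, but the SciDetect lineage (Dr Labbé’s earlier work) classifies manuscripts by textual distance to corpora of known machine-generated text. The sketch below only illustrates that general nearest-corpus idea with off-the-shelf TF-IDF features; every name, snippet, and threshold here is invented for illustration and is not PySciDetect’s API:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy reference corpora: known machine-generated text vs. genuine abstracts.
KNOWN_FAKE = [
    "In this paper we disconfirm the refinement of e-commerce, which "
    "embodies the compelling principles of steganography.",
]
KNOWN_GENUINE = [
    "We conducted a randomised controlled trial to assess the efficacy "
    "and safety of the intervention in 400 adult patients.",
]

def similarity_to_fake(text: str) -> float:
    """Cosine similarity of a manuscript to the nearest known-fake sample
    on word 1-2-gram TF-IDF features. A crude stand-in for inter-textual
    distance; a real detector needs large, curated reference corpora."""
    docs = KNOWN_FAKE + KNOWN_GENUINE + [text]
    tfidf = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(docs)
    return float(cosine_similarity(tfidf[-1], tfidf[: len(KNOWN_FAKE)]).max())

print(similarity_to_fake("We disconfirm the refinement of steganography."))
```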

The Mechanics Behind A Precipitous Rise In Impact Factor: A Case Study From the British Journal of Sports Medicine

Abstract:  The impact factor is a popular but highly flawed proxy for the importance of academic journals. A variety of techniques exist to increase an individual journal’s impact factor, but they are usually described in abstract terms. Here, we investigate two of them in the historical publications of the British Journal of Sports Medicine: (1) the preferential publication of brief, citable, non-substantive academic content, and (2) the preferential publication of review or meta-analytic content. Simple content analysis reveals an exponential rise in published editorial and other brief content, a persistent growth in ‘highly citable’ content, and a dramatic drop in the proportion of empirical research published. These changes parallel the changes in impact factor over the time period available. The implications of this are discussed.
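Why brief content moves the metric: citations to editorials and letters count in the JIF numerator, while such items are typically excluded from the ‘citable items’ denominator. A worked example with invented numbers (the asymmetry is standard JIF accounting; the figures are not from the paper):

```latex
\[
\mathrm{JIF}_y \;=\;
\frac{\text{citations in } y \text{ to all items from } y{-}1,\, y{-}2}
     {\text{citable items (articles, reviews) in } y{-}1,\, y{-}2}
\]
% Hypothetical journal: 100 research articles cited 200 times:
%   JIF = 200 / 100 = 2.0
% Add 50 editorials drawing 50 citations; they enter the numerator only:
%   JIF = (200 + 50) / 100 = 2.5   (a 25% rise with no extra research)
```

Reviews and meta-analyses pull in the same direction, since they tend to attract more citations per item than original research does.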

Editorial misconduct: the case of online predatory journals

The number of publishers that offer academics, researchers, and postgraduate students the opportunity to publish articles and book chapters quickly and easily has been growing steadily in recent years. This can be ascribed to a variety of factors, e.g., increasing Internet use, the Open Access movement, academic pressure to publish, and the emergence of publishers with questionable interests that cast doubt on the reliability and the scientific rigor of the articles they publish.

All this has transformed the scholarly and scientific publishing scene and has opened the door to journals whose editorial procedures differ from those of legitimate journals. These publishers are called predatory because their manuscript publishing process deviates from the norm (very short publication times, non-existent or low-quality peer review, surprisingly low rejection rates, etc.).

The object of this article is to spell out the editorial practices of these journals to make them easier to spot and thus to alert researchers who are unfamiliar with them. It therefore reviews and highlights the work of other authors who have for years been calling attention to how these journals operate, to their unique features and behaviors, and to the consequences of publishing in them.

The most relevant conclusions reached include the scant awareness of the existence of such journals (especially by researchers still lacking experience), the enormous harm they cause to authors’ reputations, the harm they cause researchers taking part in promotion or professional accreditation procedures, and the feelings of chagrin and helplessness that come from seeing one’s work printed in low-quality journals. Future comprehensive research on why authors decide to submit valuable articles to these journals is also needed.

This paper therefore discusses the size of this phenomenon and how to distinguish those journals from ethical journals.

A Bug in Early Creative Commons Licenses Has Enabled a New Breed of Superpredator | by Cory Doctorow | Jan, 2022 | Medium

“Here’s a supreme irony: the Creative Commons licenses were invented to enable a culture of legally safe sharing, spurred by the legal terror campaign waged by the entertainment industry, led by a literal criminal predator who is now in prison for sex crimes.

But because of a small oversight in old versions of the licenses created 12 years ago, a new generation of legal predator has emerged to wage a new campaign of legal terror.

To make matters worse, this new kind of predator specifically targets people who operate in good faith, only using materials that they explicitly have been given permission to use.

What a mess….”

Another example of how the playing field is tilted in favour of copyright owners – Walled Culture

“Assuming the details of this case are confirmed during the trial, they show how the digital copyright system takes on trust claims to ownership, if made in the right way – in this case, through an established royalty management firm.  That trust contrasts strongly with a widespread reluctance by companies to recognise that people may be able to draw on copyright exceptions when they make copies, and a readiness to assume that it must be an infringement.  It’s another example of how the playing field is tilted strongly in favour of copyright owners, and against ordinary citizens.”

How misconduct helped psychological science to thrive

“Despite this history, before Stapel, researchers were broadly unaware of these problems or dismissed them as inconsequential. Some months before the case became public, a concerned colleague and I proposed to create an archive that would preserve the data collected by researchers in our department, to ensure reproducibility and reuse. A council of prominent colleagues dismissed our proposal on the basis that competing departments had no similar plans. Reasonable suggestions that we made to promote data sharing were dismissed on the unfounded grounds that psychology data sets can never be safely anonymized and would be misused out of jealousy, to attack well-meaning researchers. And I learnt about at least one serious attempt by senior researchers to have me disinvited from holding a workshop for young researchers because it was too critical of suboptimal practices….

Much of the advocacy and awareness has been driven by early-career researchers. Recent cases show how preregistering studies, replication, publishing negative results, and sharing code, materials and data can both empower the self-corrective mechanisms of science and deter questionable research practices and misconduct….

For these changes to stick and spread, they must become systemic. We need tenure committees to reward practices such as sharing data and publishing rigorous studies that have less-than-exciting outcomes. Grant committees and journals should require preregistration or explanations of why it is not warranted. Grant-programme officers should be charged with checking that data are made available in accordance with mandates, and PhD committees should demand that results are verifiable. And we need to strengthen a culture in which top research is rigorous and trustworthy, as well as creative and exciting….”

Abuse of ORCID’s weaknesses by authors who use paper mills | SpringerLink

Abstract:  In many countries around the world that use authorship and academic papers for career advancement and recognition, the accurate identity of participating authors is vital. ORCID (Open Researcher and Contributor ID), an author disambiguation tool that was created in 2012, is being vigorously implemented across a wide swathe of journals, including by many leading publishers. In some countries, authors who publish in indexed journals, particularly in journals that carry a Clarivate Analytics’ Journal Impact Factor, are rewarded, sometimes even monetarily. A strong incentive to cheat and abuse the publication ethos thus exists. There has been a recent spike in the detection of papers, apparently derived from paper mills, that have multiple issues with figures. The reuse of such figures across many papers compromises the integrity of the content in all of those papers, with widespread ramifications for the integrity of the biomedical literature and of journals that may be gamed by academics. The use of ORCID does not guarantee the authenticity of authors associated with a paper mill-derived paper, nor does it fortify the paper’s integrity. These weaknesses may dampen trust in ORCID, especially if the platform is being populated by “ghost” (empty) ORCID accounts of academics whose identities cannot be clearly verified, or by disposable accounts (perhaps created by paper mill operators) that are used only once, exclusively to pass the paper submission step. Open-source forensic tools to assist academics, editors and publishers in detecting problematic figures, and more stringent measures by ORCID to ensure robust author identity verification, are urgently required to protect these parties and the wider biomedical literature.
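ORCID does expose a public read API, so a first-pass screen for “ghost” (empty) records can be scripted today. A minimal sketch, assuming the public v3.0 REST endpoint; the emptiness heuristic is ours, not ORCID’s, and an empty record is a weak signal on its own:

```python
import requests

PUB_API = "https://pub.orcid.org/v3.0"  # ORCID public (read-only) API

def looks_like_ghost(orcid_id: str) -> bool:
    """Heuristic: a record with no works, employments, or education
    entries carries no verifiable identity signal."""
    resp = requests.get(
        f"{PUB_API}/{orcid_id}/record",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    acts = resp.json().get("activities-summary") or {}

    works = (acts.get("works") or {}).get("group") or []
    jobs = (acts.get("employments") or {}).get("affiliation-group") or []
    edu = (acts.get("educations") or {}).get("affiliation-group") or []
    return not (works or jobs or edu)

# ORCID's long-standing demo record (Josiah Carberry) should not look empty.
print(looks_like_ghost("0000-0002-1825-0097"))
```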

Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the research bureaucracy’s erroneous practice of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so, because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (The first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the paper2 that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had not even read at that point! … 

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”

Some journals say they are indexed in DOAJ but they are not – Google Sheets

“The following journals say, or have said in the past, that they are indexed in DOAJ. In many cases they also carry our logo, without our permission.

Always check at https://doaj.org/search/journals that a journal is indexed even if its website carries the DOAJ logo or says that it is indexed.

If you have questions about this list, please email DOAJ: feedback@doaj.org…”
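That lookup can also be scripted against DOAJ’s public search API, which helps when checking a batch of journals. A minimal sketch, assuming the documented /api/search/journals endpoint and its issn: query prefix:

```python
import requests

def in_doaj(issn: str) -> bool:
    """Return True if DOAJ's public search API reports a journal
    indexed under this ISSN."""
    url = f"https://doaj.org/api/search/journals/issn:{issn}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json().get("total", 0) > 0

# Example: PeerJ's eISSN (indexed in DOAJ at the time of writing).
print(in_doaj("2167-8359"))
```

A claim on a journal’s own website, or a DOAJ logo displayed there, proves nothing; only the registry (or its API) is authoritative.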