The Retraction Watch Database becomes completely open – and RW becomes far more sustainable

“Around that time we realized the world lacked a comprehensive database of retractions. We saw how many were missing from sources researchers used, whether PubMed, Web of Science, Scopus, or others – including Crossref, about which I will say more in a moment. We were cataloging them in spreadsheets ourselves, but couldn’t keep up.

The three foundations all agreed to support our work, not just the journalism, but to create what became The Retraction Watch Database, officially launched in 2018. Part of that funding was a grant to create a strategic plan for sustainability and growth. One of the pillars of that plan was licensing the Database to organizations – commercial and nonprofit – that could use it in products that would help researchers know when what they were reading had been retracted, among other purposes.

Those license fees – along with other income, particularly individual donations and a subcontract from a grant from the Howard Hughes Medical Institute (HHMI) – have kept Retraction Watch and The Center for Scientific Integrity running for several years. We are deeply grateful for the support and show of confidence they represent. 

But we also always wanted to make the Database available to as many people as possible, whether or not they had access to tools that licensed it, if we could find a financial model that did not rely on such fees. (We always provided the data free of charge to scholars studying retractions and related phenomena.)

Fast forward to today. We’re thrilled to announce that Crossref has acquired The Retraction Watch Database and will make it completely open and freely available….”

Crossref acquires Retraction Watch data and opens it for the scientific community

The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource. An agreement between the two organisations will allow Retraction Watch to continue populating the data on an ongoing basis and to keep it always open, alongside publishers registering their retraction notices directly with Crossref.
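For readers who want to act on this programmatically, here is a minimal Python sketch of one way to look up retraction notices registered against a DOI via Crossref's public REST API. The `updates` filter and `update-to` field are used here as best understood from Crossref's API documentation, and the DOI is a placeholder, so treat this as an illustration rather than a supported recipe.

```python
# Minimal sketch: query Crossref's public REST API for update notices
# (retractions, corrections) registered against a DOI.
# Assumes the documented `updates` filter; the DOI below is a placeholder.
import requests


def retraction_notices(doi: str) -> list[dict]:
    """Return update notices (e.g., retractions) that point at `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Keep only notices whose update-to entry for this DOI is a retraction.
    return [
        item for item in items
        if any(
            u.get("DOI", "").lower() == doi.lower()
            and u.get("type") == "retraction"
            for u in item.get("update-to", [])
        )
    ]


if __name__ == "__main__":
    # Placeholder DOI for illustration only.
    for notice in retraction_notices("10.1234/example.doi"):
        print(notice.get("DOI"), notice.get("title"))
```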

Paper mills research | COPE: Committee on Publication Ethics

“Recommended actions

A major education exercise is needed to ensure that Editors are aware of the problem of paper mills, and Editors/editorial staff are trained in identifying the fake papers.
Continued investment in tools and systems to pick up suspect papers as they are submitted.
Engagement with institutions and funders to review incentives for researchers to publish valid papers and not use services that will give quick but fake publication.
Investigation of protocols that can be put in place to impede paper mills from succeeding in their goals.
Review the retraction process to take account of the unique features of paper mill papers.
Investigate how to ensure retraction notices are applied to all copies of a paper such as preprint servers and article repositories….”

Hindawi shuttering four journals overrun by paper mills – Retraction Watch

“Hindawi will cease publishing four journals that it identified as ‘heavily compromised by paper mills.’

The open access publisher announced today in a blog post that it will continue to retract articles from the closed titles, which are Computational and Mathematical Methods in Medicine, Computational Intelligence and Neuroscience, the Journal of Healthcare Engineering, and the Journal of Environmental and Public Health….”

Wiley paused Hindawi special issues amid quality problems, lost $9 million in revenue – Retraction Watch

“Hindawi, the open access publisher that Wiley acquired in 2021, temporarily suspended publishing special issues because of “compromised articles,” according to a press release announcing the company’s third quarter financial results….

In Wiley’s third quarter that ended Jan. 31, 2023, the suspension cost Hindawi – whose business model is based on charging authors to publish – $9 million in lost revenue compared to the third quarter of 2022. The company cited the pause as the primary reason its revenue from its research segment “was down 4% as reported, or down 2% at constant currency and excluding acquisitions,” the press release stated….

The announcement follows scrutiny from sleuths and the publisher’s retraction of hundreds of papers for manipulated peer review last September, after Hindawi’s research integrity team began investigating a single special issue.

The notorious paper with capital Ts as error bars was also published in a special issue of a Hindawi journal before it was retracted in December….”

A quantitative and qualitative open citation analysis of retracted articles in the humanities | Quantitative Science Studies | MIT Press

Abstract: In this article, we show and discuss the results of a quantitative and qualitative analysis of open citations to retracted publications in the humanities domain. Our study was conducted by selecting retracted papers in the humanities domain and marking their main characteristics (e.g., retraction reason). Then, we gathered the citing entities and annotated their basic metadata (e.g., title, venue, etc.) and the characteristics of their in-text citations (e.g., intent, sentiment, etc.). Using these data, we performed a quantitative and qualitative study of retractions in the humanities, presenting descriptive statistics and a topic modeling analysis of the citing entities’ abstracts and the in-text citation contexts. Among our main findings, we observed no drop in the overall number of citations after the year of retraction, and few citing entities either mentioned the retraction or expressed a negative sentiment toward the cited publication. In addition, on several occasions, citing entities in the health sciences showed greater concern or awareness when citing a retracted publication than those in the humanities and the social sciences. Philosophy, arts, and history are the humanities areas that showed the greatest concern about retraction.
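For readers curious what the topic-modeling step might look like in practice, here is an illustrative Python sketch using scikit-learn’s LDA on a handful of invented abstracts; it is not the authors’ code or data.

```python
# Illustrative sketch of a topic-modeling pass over citing-entity abstracts,
# in the spirit of the analysis described above (not the authors' pipeline).
# Assumes scikit-learn; the sample abstracts are invented stand-ins.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "We revisit the retracted study and discuss its methodological flaws.",
    "This history of early modern philosophy cites prior archival work.",
    "A survey of citation practices in the arts and humanities.",
    "We replicate the experiment and fail to reproduce the reported effect.",
]

# Bag-of-words representation of the abstracts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

# Fit a small LDA model; n_components is arbitrary for the demo.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words for each inferred topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top)}")
```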


Exclusive: Hindawi and Wiley to retract over 500 papers linked to peer review rings | Retraction Watch

After months of investigation that identified networks of reviewers and editors manipulating the peer review process, Hindawi plans to retract 511 papers across 16 journals, Retraction Watch has learned. 

The retractions, which the publisher and its parent company, Wiley, will announce tomorrow in a blog post, will be issued in the next month, and more may come as its investigation continues. They are not yet making the list available. 

Hindawi’s research integrity team found several signs of manipulated peer review for the affected papers, including reviews that contained duplicated text, a small number of individuals who performed a large share of the reviews, reviewers who returned their reviews extremely quickly, and misuse of the databases publishers use to vet potential reviewers.

[…]
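The signals described above lend themselves to simple screening heuristics. Below is a hypothetical Python sketch; the data model, thresholds, and example reviews are invented for illustration and are not Hindawi’s actual tooling.

```python
# Hypothetical heuristics in the spirit of the signals described above
# (duplicated review text, prolific reviewers, implausibly fast turnarounds).
# All data structures and thresholds are invented for illustration.
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Review:
    reviewer: str
    text: str
    turnaround_hours: float


reviews = [
    Review("r1", "Sound methodology, accept.", 1.5),
    Review("r1", "Sound methodology, accept.", 2.0),
    Review("r2", "Interesting, but the statistics need major revision.", 96.0),
    Review("r1", "Sound methodology, accept.", 0.5),
]


def flag_suspects(reviews, max_per_reviewer=2, min_hours=4.0):
    """Return (reviewer, reason) pairs matching any suspicion heuristic."""
    flags = set()
    counts = Counter(r.reviewer for r in reviews)
    texts = Counter(r.text for r in reviews)
    for r in reviews:
        if counts[r.reviewer] > max_per_reviewer:
            flags.add((r.reviewer, "unusually many reviews"))
        if texts[r.text] > 1:
            flags.add((r.reviewer, "duplicated review text"))
        if r.turnaround_hours < min_hours:
            flags.add((r.reviewer, "implausibly fast turnaround"))
    return sorted(flags)


for reviewer, reason in flag_suspects(reviews):
    print(reviewer, "-", reason)
```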


No evidence that mandatory open data policies increase error correction | Nature Ecology & Evolution

Berberi, I., Roche, D.G. No evidence that mandatory open data policies increase error correction. Nat Ecol Evol (2022). https://doi.org/10.1038/s41559-022-01879-9

Preprint: https://doi.org/10.31222/osf.io/k8ver

Abstract: Using a database of open data policies for 199 journals in ecology and evolution, we found no detectable link between data sharing requirements and article retractions or corrections. Despite the potential for open data to facilitate error detection, poorly archived datasets, the absence of open code and the stigma associated with correcting or retracting articles probably stymie error correction. Requiring code alongside data and destigmatizing error correction among authors and journal editors could increase the effectiveness of open data policies at helping science self-correct.


Questionable research practices among researchers in the most research-productive management programs – Kepes – Journal of Organizational Behavior – Wiley Online Library

Abstract: Questionable research practices (QRPs) among researchers have been a source of concern in many fields of study. QRPs are often used to enhance the probability of achieving statistical significance, which affects the likelihood of a paper being published. Using a sample of researchers from 10 top research-productive management programs, we compared hypotheses tested in dissertations to those tested in journal articles derived from those dissertations to draw inferences concerning the extent of engagement in QRPs. Results indicated that QRPs related to changes in sample size and covariates were associated with unsupported dissertation hypotheses becoming supported in journal articles. Researchers also tended to exclude unsupported dissertation hypotheses from journal articles. Likewise, results suggested that many article hypotheses may have been created after the results were known (i.e., HARKed). Articles from prestigious journals contained a higher percentage of potentially HARKed hypotheses than those from less well-regarded journals. Finally, articles published in prestigious journals were associated with more QRP usage than those in less prestigious journals. QRPs increase the percentage of supported hypotheses and result in effect sizes that likely overestimate population parameters. As such, results reported in articles published in our most prestigious journals may be less credible than previously believed.


Can Twitter data help in spotting problems early with publications? What retracted COVID-19 papers can teach us about science in the public sphere | Impact of Social Sciences

“Publications that are based on wrong data, methodological mistakes, or contain other types of severe errors can spoil the scientific record if they are not retracted. Retraction of publications is one of the effective ways to correct the scientific record. However, before a problematic publication can be retracted, the problem has to be found and brought to the attention of the people involved (the authors of the publication and editors of the journal). The earlier a problem with a published paper is detected, the earlier the publication can be retracted and the less wasted effort goes into new research that is based on disinformation within the scientific record. Therefore, it would be advantageous to have an early warning system that spots potential problems with published papers, or maybe even before publication, based on a preprint version….”
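As a toy illustration of such an early warning signal, the Python sketch below scores invented posts mentioning a placeholder DOI for criticism-like keywords; a real system would pull posts from a platform API and use proper NLP rather than a keyword list.

```python
# Toy sketch of the "early warning" idea: scan public posts that mention
# a paper's DOI for criticism-like language. The posts and DOI are invented;
# this is a keyword heuristic, not a production monitoring system.
CRITICISM_MARKERS = {
    "flawed", "error", "errors", "fabricated", "manipulated",
    "irreproducible", "retract", "misconduct",
}

posts = [
    "Impressive results in 10.1234/example.doi, congrats to the authors!",
    "The stats in 10.1234/example.doi look flawed; the error bars make no sense.",
    "Tried to replicate 10.1234/example.doi, results are irreproducible.",
]


def warning_score(posts: list[str], doi: str) -> float:
    """Fraction of posts mentioning `doi` that contain criticism markers."""
    mentions = [p.lower() for p in posts if doi.lower() in p.lower()]
    if not mentions:
        return 0.0
    critical = sum(
        any(marker in p for marker in CRITICISM_MARKERS) for p in mentions
    )
    return critical / len(mentions)


print(warning_score(posts, "10.1234/example.doi"))  # ~0.67: worth a closer look
```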

Repeat It or Take It Back

“Outside of eLife and, to an extent, PLoS, no one of scale and weight in the commercial publishing sector has really climbed aboard the Open Science movement with a recognition of the sort of data and communication control that Open Science will require.

So what is that requirement? In two words – Replicability and Retraction. …”

Influence of accessibility (open and toll-based) of scholarly publications on retractions | SpringerLink

“We have examined retracted publications in different subject fields and attempted to analyse whether online free accessibility (Open Access) influences retraction, by examining the scholarly literature published from 2000 through 2019, covering the most recent 20 years of publications. InCites, a research analytics tool developed by Clarivate Analytics®, in consultation with the Web of Science, PubMed Central, and Retraction Watch databases, was used to harvest data for the study. Retracted ‘Article’ and ‘Review’ publications were examined with respect to their online accessibility mode (Toll Access and Open Access), based on non-parametric tests such as the Odds Ratio, Wilcoxon Signed Rank Test, Mann–Whitney U Test, and the Mann–Kendall and Sen’s methods. The odds for OA articles to be retracted are about 1.62 times as large (62% higher) as for TA articles (95% CI 1.5, 1.7). 0.028% of OA publications are retracted, compared with 0.017% of TA publications. Retractions have occurred in all subject areas. In eight subject areas, the odds of retraction are larger for OA articles than for TA articles. In three subject areas, the odds of retraction are smaller for OA articles than for TA articles. In the remaining 11 subject areas, no significant difference is observed. Post-retraction, though a decline is observed in the citation count of OA & TA publications (p < .01), the odds for OA articles to be cited after retraction are about 1.21 times as large (21% higher) as for TA articles (95% CI 1.53, 1.72). TA publications are retracted earlier than OA publications (p < .01). We observed an increasing trend of retracted works published in both modes. However, the rate of retraction of OA publications is double the rate of retraction of TA publications….”
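To make the odds-ratio arithmetic concrete, here is a short Python sketch computing an odds ratio and its 95% confidence interval from an invented 2×2 table whose retraction rates roughly match the 0.028% vs. 0.017% figures above; the counts are placeholders, not the study’s data.

```python
# Worked example of the odds-ratio arithmetic used in the abstract above,
# on an invented 2x2 table (OA vs. TA, retracted vs. not). The reported
# OR of 1.62 came from the authors' data; these counts are placeholders
# chosen to roughly match the quoted 0.028% and 0.017% retraction rates.
from math import exp, log, sqrt

a, b = 280, 999_720   # OA: retracted, not retracted  (~0.028%)
c, d = 170, 999_830   # TA: retracted, not retracted  (~0.017%)

odds_ratio = (a * d) / (b * c)

# 95% CI via the standard error of the log odds ratio.
se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = (exp(log(odds_ratio) + z * se) for z in (-1.96, 1.96))

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```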