Questionable research practices among researchers in the most research-productive management programs – Kepes – Journal of Organizational Behavior – Wiley Online Library

Abstract:  Questionable research practices (QRPs) among researchers have been a source of concern in many fields of study. QRPs are often used to enhance the probability of achieving statistical significance, which affects the likelihood of a paper being published. Using a sample of researchers from 10 top research-productive management programs, we compared hypotheses tested in dissertations to those tested in journal articles derived from those dissertations to draw inferences concerning the extent of engagement in QRPs. Results indicated that QRPs related to changes in sample size and covariates were associated with unsupported dissertation hypotheses becoming supported in journal articles. Researchers also tended to exclude unsupported dissertation hypotheses from journal articles. Likewise, results suggested that many article hypotheses may have been created after the results were known (i.e., HARKed). Articles from prestigious journals contained a higher percentage of potentially HARKed hypotheses than those from less well-regarded journals. Finally, articles published in prestigious journals were associated with more QRP usage than those in less prestigious journals. QRPs increase the percentage of supported hypotheses and result in effect sizes that likely overestimate population parameters. As such, results reported in articles published in our most prestigious journals may be less credible than previously believed.
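To see why this matters statistically, here is a minimal simulation (ours, not the paper's; all numbers are illustrative) of one QRP the study examines: changing the sample size after peeking at the results. Even when the true effect is zero, optional stopping pushes the "support" rate well above the nominal 5% and biases the surviving effect sizes upward.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def optional_stopping_study(n_start=40, n_max=200, step=20):
        # True effect is zero; add participants and retest until
        # p < .05 or the participant budget runs out.
        x = rng.normal(size=n_start)
        y = rng.normal(size=n_start)
        while True:
            p = stats.ttest_ind(x, y).pvalue
            if p < 0.05 or len(x) >= n_max:
                return p, abs(x.mean() - y.mean())
            x = np.concatenate([x, rng.normal(size=step)])
            y = np.concatenate([y, rng.normal(size=step)])

    runs = [optional_stopping_study() for _ in range(2000)]
    sig = [d for p, d in runs if p < 0.05]
    print(f"'support' rate under a true null: {len(sig) / len(runs):.1%}")  # well above 5%
    print(f"mean |effect| among 'supported' results: {np.mean(sig):.2f}")   # biased above 0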


Springer Nature and Université Grenoble Alpes release free enhanced integrity software to tackle fake scientific papers | Corporate Affairs Homepage | Springer Nature

“Springer Nature today announces the release of PySciDetect, its next generation open source research integrity software to identify fake research. Developed in-house with Slimmer AI and released in collaboration with Dr Cyril Labbé from Université Grenoble Alpes, PySciDetect is available for all publishers and those within the academic community to download and use. …


Since launch, SciDetect has been used by a number of publishers and members of the academic community. For Springer Nature, it has scanned over 3.8 million journal articles and over 2.5 million book chapters. Available as open source software, PySciDetect expands on Springer Nature’s commitment to being an active partner to the community, supporting a collaborative effort to detect fraudulent scientific research and protect the integrity of the academic record.”
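The announcement credits Dr Cyril Labbé, whose earlier SciDetect work flagged papers by their textual closeness to the output of generators such as SCIgen. Purely as a toy sketch of that general idea (not PySciDetect's actual code; the corpora below are placeholders), a manuscript can be scored by its character n-gram similarity to known machine-generated text:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder corpora: known machine-generated papers vs. genuine ones.
    known_fake = ["text of a known SCIgen-generated paper ..."]
    known_real = ["text of a genuine paper in the same field ..."]

    vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
    profiles = vec.fit_transform(known_fake + known_real)

    def suspicion_score(manuscript: str) -> float:
        # Higher = closer to the generated corpus than to genuine text.
        sims = cosine_similarity(vec.transform([manuscript]), profiles)[0]
        k = len(known_fake)
        return float(sims[:k].mean() - sims[k:].mean())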

The Mechanics Behind A Precipitous Rise In Impact Factor: A Case Study From the British Journal of Sports Medicine

Abstract:  The impact factor is a popular but highly flawed proxy for the importance of academic journals. A variety of techniques exist to increase an individual journal’s impact factor, but they are usually described only in abstract terms. Here, we investigate two of them in the historical publications of the British Journal of Sports Medicine: (1) the preferential publication of brief, citable, non-substantive academic content, and (2) the preferential publication of review or meta-analytic content. Simple content analysis reveals an exponential rise in published editorial and other brief content, a persistent growth in ‘highly citable’ content, and a dramatic drop in the proportion of empirical research published. These changes parallel the changes in impact factor over the time period available. The implications of this are discussed.
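The arithmetic behind the first technique is simple: the two-year impact factor divides citations received in year Y (to content from years Y-1 and Y-2) by the number of 'citable items' published in those two years, and editorials, letters, and similar brief content can collect citations for the numerator while staying out of the denominator. A toy calculation with invented figures:

    def impact_factor(citations, citable_items):
        # Two-year JIF: citations in year Y to content from Y-1 and Y-2,
        # divided by 'citable items' (research and review articles) only.
        return citations / citable_items

    articles = 200   # citable research articles published in Y-1 and Y-2
    cites = 400      # citations received in year Y
    print(impact_factor(cites, articles))         # 2.0

    # Add 100 editorials that draw 150 extra citations: they are cited in
    # the numerator but excluded from the denominator.
    print(impact_factor(cites + 150, articles))   # 2.75, same research output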

Editorial misconduct: the case of online predatory journals


The number of publishers that offer academics, researchers, and postgraduate students the opportunity to publish articles and book chapters quickly and easily has been growing steadily in recent years. This can be ascribed to a variety of factors, e.g., increasing Internet use, the Open Access movement, academic pressure to publish, and the emergence of publishers with questionable interests that cast doubt on the reliability and the scientific rigor of the articles they publish.

All this has transformed the scholarly and scientific publishing scene and has opened the door to journals whose editorial procedures differ from those of legitimate journals. These publishers are called predatory because their manuscript publishing process deviates from the norm (very short publication times, non-existent or low-quality peer review, surprisingly low rejection rates, etc.).

The object of this article is to spell out the editorial practices of these journals to make them easier to spot and thus to alert researchers who are unfamiliar with them. It therefore reviews and highlights the work of other authors who have for years been calling attention to how these journals operate, to their unique features and behaviors, and to the consequences of publishing in them.

The most relevant conclusions reached include the scant awareness of the existence of such journals (especially by researchers still lacking experience), the enormous harm they cause to authors’ reputations, the harm they cause researchers taking part in promotion or professional accreditation procedures, and the feelings of chagrin and helplessness that come from seeing one’s work printed in low-quality journals. Future comprehensive research on why authors decide to submit valuable articles to these journals is also needed.

This paper therefore discusses the size of this phenomenon and how to distinguish those journals from ethical journals.


A Bug in Early Creative Commons Licenses Has Enabled a New Breed of Superpredator | by Cory Doctorow | Jan, 2022 | Medium

“Here’s a supreme irony: the Creative Commons licenses were invented to enable a culture of legally safe sharing, spurred by the legal terror campaign waged by the entertainment industry, led by a literal criminal predator who is now in prison for sex crimes.

But because of a small oversight in old versions of the licenses created 12 years ago, a new generation of legal predator has emerged to wage a new campaign of legal terror.

To make matters worse, this new kind of predator specifically targets people who operate in good faith, only using materials that they explicitly have been given permission to use.

What a mess….”


Another example of how the playing field is tilted in favour of copyright owners – Walled Culture

“Assuming the details of this case are confirmed during the trial, they show how the digital copyright system takes on trust claims to ownership, if made in the right way – in this case, through an established royalty management firm.  That trust contrasts strongly with a widespread reluctance by companies to recognise that people may be able to draw on copyright exceptions when they make copies, and a readiness to assume that it must be an infringement.  It’s another example of how the playing field is tilted strongly in favour of copyright owners, and against ordinary citizens.”

How misconduct helped psychological science to thrive

“Despite this history, before Stapel, researchers were broadly unaware of these problems or dismissed them as inconsequential. Some months before the case became public, a concerned colleague and I proposed to create an archive that would preserve the data collected by researchers in our department, to ensure reproducibility and reuse. A council of prominent colleagues dismissed our proposal on the basis that competing departments had no similar plans. Reasonable suggestions that we made to promote data sharing were dismissed on the unfounded grounds that psychology data sets can never be safely anonymized and would be misused out of jealousy, to attack well-meaning researchers. And I learnt about at least one serious attempt by senior researchers to have me disinvited from holding a workshop for young researchers because it was too critical of suboptimal practices….

Much of the advocacy and awareness has been driven by early-career researchers. Recent cases show how preregistering studies, replication, publishing negative results, and sharing code, materials and data can both empower the self-corrective mechanisms of science and deter questionable research practices and misconduct….

For these changes to stick and spread, they must become systemic. We need tenure committees to reward practices such as sharing data and publishing rigorous studies that have less-than-exciting outcomes. Grant committees and journals should require preregistration or explanations of why it is not warranted. Grant-programme officers should be charged with checking that data are made available in accordance with mandates, and PhD committees should demand that results are verifiable. And we need to strengthen a culture in which top research is rigorous and trustworthy, as well as creative and exciting….”

Abuse of ORCID’s weaknesses by authors who use paper mills | SpringerLink

Abstract:  In many countries around the world that use authorship and academic papers for career advancement and recognition, the accurate identity of participating authors is vital. ORCID (Open Researcher and Contributor ID), an author disambiguation tool that was created in 2012, is being vociferously implemented across a wide swathe of journals, including by many leading publishers. In some countries, authors who publish in indexed journals, particularly in journals that carry a Clarivate Analytics’ Journal Impact Factor, are rewarded, sometimes even monetarily. A strong incentive to cheat and abuse the publication ethos thus exists. There has been a recent spike in the detection of papers apparently derived from paper mills that have multiple issues with figures. The use of such figures across many papers compromises the integrity of the content in all those papers, with widespread ramifications for the integrity of the biomedical literature and of journals that may be gamed by academics. The use of ORCID does not guarantee the authenticity of authors associated with a paper mill-derived paper, nor does it fortify the paper’s integrity. These weaknesses of ORCID may dampen trust in this tool, especially if the ORCID platform is being populated by “ghost” (empty) ORCID accounts of academics whose identities cannot be clearly verified, or disposable accounts (perhaps created by paper mill operators) that are used only once, exclusively to pass the paper submission step. Open-source forensic tools to assist academics, editors and publishers to detect problematic figures, and more stringent measures by ORCID to ensure robust author identity verification, are urgently required to protect them and the wider biomedical literature.
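One of the requested countermeasures is already scriptable: ORCID's public API exposes each record's works and affiliations, so editors can at least flag the 'ghost' (empty) accounts the abstract describes. A hedged sketch (endpoints and field names as in public API v3.0 at the time of writing; verify against current ORCID documentation):

    import requests

    def looks_like_ghost(orcid_id: str) -> bool:
        # A record with no works and no employment history is a candidate
        # 'ghost' account; it warrants human review, not auto-rejection.
        headers = {"Accept": "application/json"}
        base = f"https://pub.orcid.org/v3.0/{orcid_id}"
        works = requests.get(f"{base}/works", headers=headers, timeout=30).json()
        jobs = requests.get(f"{base}/employments", headers=headers, timeout=30).json()
        return not (works.get("group") or jobs.get("affiliation-group"))

    # e.g. looks_like_ghost("0000-0002-1825-0097")  # ORCID's demo record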


Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so because it forces authors to include unnecessary (that is, plain false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (The first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace the Russian-language references with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we submitted to Nature: in publishing that note, the editors of Nature removed some references – from the paper2 that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had never even read at that point! …

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”
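The coercive self-citation described at the start of this excerpt leaves a measurable trace: the share of a journal's incoming citations that originate from the journal itself. A minimal sketch, with invented data, of the check an assessor could run:

    def self_citation_rate(citation_pairs, journal):
        # citation_pairs: iterable of (citing_journal, cited_journal) tuples.
        incoming = [src for src, dst in citation_pairs if dst == journal]
        if not incoming:
            return 0.0
        return sum(src == journal for src in incoming) / len(incoming)

    pairs = [("J. Foo", "J. Foo"), ("J. Bar", "J. Foo"), ("J. Foo", "J. Foo")]
    print(f"{self_citation_rate(pairs, 'J. Foo'):.0%}")  # 67% self-citations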

Some journals say they are indexed in DOAJ but they are not – Google Sheets

“The following journals say, or have said in the past, that they are indexed in DOAJ. In many cases they also carry our logo, without our permission.

Always check at https://doaj.org/search/journals that a journal is indexed even if its website carries the DOAJ logo or says that it is indexed.

If you have questions about this list, please email DOAJ: feedback@doaj.org…”

Both Questionable and Open Research Practices Are Prevalent in Education Research – Matthew C. Makel, Jaret Hodges, Bryan G. Cook, Jonathan A. Plucker, 2021

Abstract:  Concerns about the conduct of research are pervasive in many fields, including education. In this preregistered study, we replicated and extended previous studies from other fields by asking education researchers about 10 questionable research practices and five open research practices. We asked them to estimate the prevalence of the practices in the field, to self-report their own use of such practices, and to estimate the appropriateness of these behaviors in education research. We made predictions under four umbrella categories: comparison to psychology, geographic location, career stage, and quantitative orientation. Broadly, our results suggest that both questionable and open research practices are used by many education researchers. This baseline information will be useful as education researchers seek to understand existing social norms and grapple with whether and how to improve research practices.


Citations and metrics of journals discontinued… | F1000Research

Abstract:  Background: Scopus is a leading bibliometric database. It contains a large part of the articles cited in peer-reviewed publications. The journals included in Scopus are periodically re-evaluated to ensure they meet indexing criteria and some journals might be discontinued for ‘publication concerns’. Previously published articles may remain indexed and can be cited. Their metrics have yet to be studied. This study aimed to evaluate the main features and metrics of journals discontinued from Scopus for publication concerns, before and after their discontinuation, and to determine the extent of predatory journals among the discontinued journals.

Methods: We surveyed the list of discontinued journals from Scopus (July 2019). Data regarding metrics, citations and indexing were extracted from Scopus or other scientific databases, for the journals discontinued for publication concerns. 
Results: A total of 317 journals were evaluated. Ninety-three percent of the journals (294/317) declared they published using an Open Access model. The subject areas with the greatest number of discontinued journals were Medicine (52/317; 16%), Agriculture and Biological Science (34/317; 11%), and Pharmacology, Toxicology and Pharmaceutics (31/317; 10%). The mean number of citations per year after discontinuation was significantly higher than before (median of difference 16.89 citations, p<0.0001), and so was the number of citations per document (median of difference 0.42 citations, p<0.0001). Twenty-two percent (72/317) were included in Cabell’s blacklist. The DOAJ currently includes only 9 of these journals, while 61 were previously included and have since been removed, most for ‘suspected editorial misconduct by the publisher’.
Conclusions: Journals discontinued for ‘publication concerns’ continue to be cited despite discontinuation, and predatory behaviour seemed common. These citations may influence scholars’ metrics, prompting artificial career advancements, bonuses, and promotions. Countermeasures should be taken urgently to ensure the reliability of Scopus metrics for the scientific assessment of scholarly publishing at both the journal and author level.
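For anyone reproducing the before/after comparison on their own data, the reported 'median of difference' alongside p-values suggests a paired non-parametric test such as the Wilcoxon signed-rank. A sketch with invented placeholder numbers:

    import numpy as np
    from scipy import stats

    before = np.array([120.0, 45.0, 300.0, 80.0, 15.0])  # citations/yr pre-discontinuation
    after = np.array([150.0, 60.0, 340.0, 95.0, 30.0])   # citations/yr post-discontinuation

    res = stats.wilcoxon(after, before)
    print(f"median of differences: {np.median(after - before):.2f}")
    print(f"Wilcoxon signed-rank p = {res.pvalue:.4f}")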

DARPA letter to KEI confirming investigation of Moderna for failure to report government funding in patent applications | Knowledge Ecology International

“On Friday, September 18, 2020, KEI received a letter from the Defense Advanced Research Projects Agency (DARPA) confirming that the agency was investigating Moderna for failure to report government funding in patent applications. The Financial Times and other outlets had previously reported this investigation (see: https://www.keionline.org/moderna), but this letter is the first official notice we have received from DARPA.

The letter from DARPA is signed by D. Peter Donaghue, who is the Division Director for Contracts at DARPA.

The letter is short, and confirms that DARPA is conducting an investigation. I would expect Moderna to report this to shareholders at some point….”

What’s Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers | Fantastic Anachronism

[Some recommendations:]

Ignore citation counts. Given that citations are unrelated to (easily-predictable) replicability, let alone any subtler quality aspects, their use as an evaluative tool should stop immediately.

Open data, enforced by the NSF/NIH. There are problems with privacy but I would be tempted to go as far as possible with this. Open data helps detect fraud. And let’s have everyone share their code, too—anything that makes replication/reproduction easier is a step in the right direction.

Financial incentives for universities and journals to police fraud. It’s not easy to structure this well because on the one hand you want to incentivize them to minimize the frauds published, but on the other hand you want to maximize the frauds being caught. Beware Goodhart’s law!

Why not do away with the journal system altogether? The NSF could run its own centralized, open website; grants would require publication there. Journals are objectively not doing their job as gatekeepers of quality or truth, so what even is a journal? A combination of taxonomy and reputation. The former is better solved by a simple tag system, and the latter is actually misleading. Peer review is unpaid work anyway, it could continue as is. Attach a replication prediction market (with the estimated probability displayed in gargantuan neon-red font right next to the paper title) and you’re golden. Without the crutch of “high ranked journals” maybe we could move to better ways of evaluating scientific output. No more editors refusing to publish replications. You can’t shift the incentives: academics want to publish in “high-impact” journals, and journals want to selectively publish “high-impact” research. So just make it impossible. Plus as a bonus side-effect this would finally sink Elsevier….”
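The replication prediction market in the last recommendation has standard machinery behind it. A toy two-outcome market using Hanson's logarithmic market scoring rule (LMSR), where the displayed probability is simply the current price of the 'will replicate' share (all parameters illustrative):

    import math

    def cost(q_yes, q_no, b=100.0):
        # LMSR cost function; b sets market liquidity.
        return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

    def price_yes(q_yes, q_no, b=100.0):
        # Current market estimate of P(paper replicates).
        e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
        return e_yes / (e_yes + e_no)

    q_yes, q_no = 0.0, 0.0
    print(price_yes(q_yes, q_no))                       # 0.50 before any trades
    paid = cost(q_yes + 50, q_no) - cost(q_yes, q_no)   # buy 50 YES shares
    q_yes += 50
    print(round(price_yes(q_yes, q_no), 2), round(paid, 2))  # ~0.62 after the trade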