Abstract: In many countries around the world that use authorship of academic papers for career advancement and recognition, the accurate identification of participating authors is vital. ORCID (Open Researcher and Contributor ID), an author disambiguation tool created in 2012, is being vigorously implemented across a wide swathe of journals, including by many leading publishers. In some countries, authors who publish in indexed journals, particularly in journals that carry a Clarivate Analytics Journal Impact Factor, are rewarded, sometimes even monetarily. A strong incentive to cheat and abuse the publication ethos thus exists. There has been a recent spike in the detection of papers, apparently derived from paper mills, that have multiple issues with figures. The reuse of such figures across many papers compromises the integrity of the content in all of those papers, with widespread ramifications for the integrity of the biomedical literature and of journals that may be gamed by academics. The use of ORCID neither guarantees the authenticity of authors associated with a paper mill-derived paper nor fortifies the paper’s integrity. These weaknesses may dampen trust in ORCID, especially if the platform is being populated by “ghost” (empty) accounts of academics whose identities cannot be clearly verified, or by disposable accounts (perhaps created by paper mill operators) that are used only once, exclusively to pass the paper submission step. Open-source forensic tools to help academics, editors and publishers detect problematic figures, and more stringent measures by ORCID to ensure robust author identity verification, are urgently required to protect journals and the wider biomedical literature.
“Such misuse of terms not only justifies the erroneous practice of research bureaucracies evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, its editor might decide to make it obligatory for authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria, but I can no longer consider it so because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….
An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we authors are now so used to it that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (the first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota on the number of references….
Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the very paper2 that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had never even read at that point! …
Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”
Abstract: Concerns about the conduct of research are pervasive in many fields, including education. In this preregistered study, we replicated and extended previous studies from other fields by asking education researchers about 10 questionable research practices and five open research practices. We asked them to estimate the prevalence of the practices in the field, to self-report their own use of such practices, and to estimate the appropriateness of these behaviors in education research. We made predictions under four umbrella categories: comparison to psychology, geographic location, career stage, and quantitative orientation. Broadly, our results suggest that both questionable and open research practices are used by many education researchers. This baseline information will be useful as education researchers seek to understand existing social norms and grapple with whether and how to improve research practices.
Abstract: Background: Scopus is a leading bibliometric database. It contains a large part of the articles cited in peer-reviewed publications. The journals included in Scopus are periodically re-evaluated to ensure they meet indexing criteria and some journals might be discontinued for ‘publication concerns’. Previously published articles may remain indexed and can be cited. Their metrics have yet to be studied. This study aimed to evaluate the main features and metrics of journals discontinued from Scopus for publication concerns, before and after their discontinuation, and to determine the extent of predatory journals among the discontinued journals.
Methods: We surveyed the list of discontinued journals from Scopus (July 2019). Data regarding metrics, citations and indexing were extracted from Scopus or other scientific databases, for the journals discontinued for publication concerns.
Results: A total of 317 journals were evaluated. Ninety-three percent of the journals (294/317) declared that they published using an Open Access model. The subject areas with the greatest number of discontinued journals were Medicine (52/317; 16%), Agriculture and Biological Science (34/317; 11%), and Pharmacology, Toxicology and Pharmaceutics (31/317; 10%). The mean number of citations per year after discontinuation was significantly higher than before (median of difference 16.89 citations, p<0.0001), and so was the number of citations per document (median of difference 0.42 citations, p<0.0001). Twenty-two percent (72/317) were included in Cabell’s blacklist. The DOAJ currently includes only 9 of these journals, while 61 were previously included and later discontinued, most for ‘suspected editorial misconduct by the publisher’.
Conclusions: Journals discontinued for ‘publication concerns’ continue to be cited despite discontinuation, and predatory behaviour seemed common. These citations may influence scholars’ metrics, prompting artificial career advancements, bonuses and promotions. Countermeasures should be taken urgently to ensure the reliability of Scopus metrics for the purpose of scientific assessment of scholarly publishing at both the journal and author level.
“On Friday, September 18, 2020, KEI received a letter from the Defense Advanced Research Projects Agency (DARPA) confirming that the agency was investigating Moderna for failure to report government funding in patent applications. The Financial Times and other outlets had previously reported this investigation (see: https://www.keionline.org/moderna), but this letter is the first official notice we have received from DARPA.
The letter from DARPA is signed by D. Peter Donaghue, who is the Division Director for Contracts at DARPA.
The letter is short, and confirms that DARPA is conducting an investigation. I would expect Moderna to report this to shareholders at some point….”
Ignore citation counts. Given that citations are unrelated to (easily-predictable) replicability, let alone any subtler quality aspects, their use as an evaluative tool should stop immediately.
Open data, enforced by the NSF/NIH. There are problems with privacy but I would be tempted to go as far as possible with this. Open data helps detect fraud. And let’s have everyone share their code, too—anything that makes replication/reproduction easier is a step in the right direction.
Financial incentives for universities and journals to police fraud. It’s not easy to structure this well because on the one hand you want to incentivize them to minimize the frauds published, but on the other hand you want to maximize the frauds being caught. Beware Goodhart’s law!
Why not do away with the journal system altogether? The NSF could run its own centralized, open website; grants would require publication there. Journals are objectively not doing their job as gatekeepers of quality or truth, so what even is a journal? A combination of taxonomy and reputation. The former is better solved by a simple tag system, and the latter is actually misleading. Peer review is unpaid work anyway, it could continue as is. Attach a replication prediction market (with the estimated probability displayed in gargantuan neon-red font right next to the paper title) and you’re golden. Without the crutch of “high ranked journals” maybe we could move to better ways of evaluating scientific output. No more editors refusing to publish replications. You can’t shift the incentives: academics want to publish in “high-impact” journals, and journals want to selectively publish “high-impact” research. So just make it impossible. Plus as a bonus side-effect this would finally sink Elsevier….”
“Here’s an odd thing. Over and over again, when a researcher is mistreated by a journal or publisher, we see them telling their story but redacting the name of the journal or publisher involved. Here are a couple of recent examples….”
“Hundreds of drug companies, medical device manufacturers, and universities owe the public a decade’s worth of missing data from clinical trials, federal officials warned last week.
New rules issued last week in the wake of a federal court ruling in February instructed clinical trial sponsors to submit missing data for trials conducted between 2007 and 2017 “as soon as possible.” For years, many trials conducted during that span have largely been exempted from reporting their data to ClinicalTrials.gov, a public database, meaning a decade of data about approved drugs and medical devices has never been made public.
The court’s ruling, and the federal government’s decision not to appeal it and instead to urge trial sponsors to submit the missing information, represent a major win for transparency advocates, who for years have fought to recover the decadelong gap in publicly available clinical trial data. …
The court ruling, and the resulting change in federal policy, come after years of reporting that has detailed how federal research agencies routinely fail to enforce their own rules regarding clinical trial transparency — which advocates say is critical for the public’s understanding of a given medicine’s safety and efficacy. …”
Abstract: Plagiarism and self-plagiarism are widespread in biomedical publications, although journals are increasingly implementing plagiarism detection software as part of their editorial processes. Wikipedia, a free online encyclopedia written by its users, has global public health importance as a source of online health information. However, plagiarism of Wikipedia in peer-reviewed publications has received little attention. Here, I present five cases of PubMed-indexed articles containing Wiki-plagiarism, i.e. copying of Wikipedia content into medical publications without proper citation of the source. The true incidence of this phenomenon remains unknown and requires systematic study. The potential scope and implications of Wiki-plagiarism are discussed.