Attitudes, behaviours and experiences of authors of COVID-19 preprints

Abstract:  The COVID-19 pandemic caused a rise in preprinting, apparently triggered by the need for open and rapid dissemination of research outputs. We surveyed authors of COVID-19 preprints to learn about their experience of preprinting as well as of publishing in a peer-reviewed journal. A key aim was to assess how effective preprints are as a way for authors to receive feedback on their work. We also aimed to compare the impact of feedback on preprints with the impact of comments from editors and reviewers on papers submitted to journals. We observed a high rate of new adopters of preprinting who reported positive intentions regarding preprinting their future work. This allows us to posit that the boost in preprinting may have a structural effect that lasts beyond the pandemic. We also saw a high rate of feedback on preprints, but mainly through “closed” channels – directly to the authors. This means that preprinting was a useful way to receive feedback on research, but the value of that feedback could be increased further by facilitating and promoting “open” channels for preprint feedback. At the same time, almost a quarter of the preprints that received feedback received comments resembling journal peer review, which shows the potential of preprint feedback to provide valuable, detailed comments on research. However, journal peer review resulted in a higher rate of major changes in the papers surveyed, suggesting that the journal peer review process has significant added value compared to preprint feedback.

 

No evidence that mandatory open data policies increase error correction | Nature Ecology & Evolution

Berberi, I., Roche, D.G. No evidence that mandatory open data policies increase error correction. Nat Ecol Evol (2022). https://doi.org/10.1038/s41559-022-01879-9

Preprint: https://doi.org/10.31222/osf.io/k8ver

Abstract: Using a database of open data policies for 199 journals in ecology and evolution, we found no detectable link between data sharing requirements and article retractions or corrections. Despite the potential for open data to facilitate error detection, poorly archived datasets, the absence of open code and the stigma associated with correcting or retracting articles probably stymie error correction. Requiring code alongside data and destigmatizing error correction among authors and journal editors could increase the effectiveness of open data policies at helping science self-correct.

 

Preprints in Chemistry: a Research Team’s Journey – Ciriminna – ChemistryOpen – Wiley Online Library

Abstract:  The benefits of publishing research papers first in preprint form are substantial and long-lasting in chemistry as well. Recounting the outcomes of our team’s nearly six-year journey through preprint publishing, we show evidence that preprinting research substantially benefits both early-career and senior researchers in today’s highly interdisciplinary chemical research. These findings are of general value, as shown by analyzing the cases of four additional research teams based in economically developed and developing countries.

 

PsyArXiv Preprints | Paper mills: a novel form of publishing malpractice affecting psychology

“Psychology journals are not immune to targeting by paper mills. Difficulties in obtaining peer reviewers have led many journals, such as this one, to ask authors to recommend peer reviewers. This creates a crack in the defences of a journal against fraud, if it is combined with lack of editorial oversight. This case illustrates the benefits of open peer review in detecting fraud….”

Reliability of citations of medRxiv preprints in articles published on COVID-19 in the world leading medical journals | PLOS ONE

Abstract:  Introduction

Preprints have been widely cited during the COVID-19 pandemic, even in the major medical journals. However, since the subsequent publication of a preprint is not always mentioned in preprint repositories, some may be inappropriately cited or quoted. Our objectives were to assess the reliability of preprint citations in articles on COVID-19, to assess the rate of publication of the preprints cited in these articles and, where relevant, to compare the content of the preprints to their published versions.

Methods

Articles published on COVID-19 in 2020 in The BMJ, The Lancet, JAMA and the NEJM were manually screened to identify all articles citing at least one preprint from medRxiv. We searched PubMed, Google and Google Scholar to assess whether, and when, each cited preprint had been published in a peer-reviewed journal. Published articles were screened to assess whether their title, data or conclusions were identical to those of the preprint version.

Results

Among the 205 research articles on COVID-19 published by the four major medical journals in 2020, 60 (29.3%) cited at least one medRxiv preprint. Among the 182 preprints cited, 124 were published in a peer-reviewed journal, 51 (41.1%) before the citing article was published online and 73 (58.9%) afterwards. For nearly half of them, there were differences in the title, the data or the conclusions between the preprint cited and the published version. medRxiv did not mention the publication for 53 (42.7%) of these preprints.

Conclusions

More than a quarter of preprint citations were inappropriate, since the preprints had in fact already been published at the time the citing article appeared, often with different content. Authors and editors should check the accuracy of citations and quotations of preprints before publishing manuscripts that cite them.

Exclusive: PLOS ONE to retract more than 100 papers for manipulated peer review – Retraction Watch

“In March, an editor at PLOS ONE noticed something odd among a stack of agriculture manuscripts he was handling. One author had submitted at least 40 manuscripts over a 10-month period, much more than expected from any one person. 

The editor told the ethics team at the journal about the anomaly, and they started an investigation. Looking at the author lists and academic editors who managed peer review for the papers, the team found that some names kept popping up repeatedly. 

Within a month, the initial list of 50 papers under investigation expanded to more than 300 submissions received since 2020 – about 100 of them already published – with concerns about improper authorship and conflicts of interest that compromised peer review. 

“It definitely shot up big red flags for us when we started to see the number of names and their publication volumes,” Renee Hoch, managing editor of PLOS’s publication ethics team, told Retraction Watch. “This is probably our biggest case that we’ve seen in several years.”

The journal’s action on the published papers begins today, Retraction Watch has learned, with the retraction of 20 articles. Action on the rest will follow in batches about every two weeks as the editors finish their follow up work on specific papers. Corresponding authors on the papers to be retracted today who responded to our request for comment said they disagreed with the retractions, and disputed that they had relationships with the editors who handled their papers, among other protests….”

Rigor and Transparency Index: Large Scale Analysis of Scientific Reporting Quality

“JMIR Publications recently published “Establishing Institutional Scores With the Rigor and Transparency Index: Large-scale Analysis of Scientific Reporting Quality” in the Journal of Medical Internet Research (JMIR), which reported that improving rigor and transparency measures should lead to improvements in reproducibility across the scientific literature, but assessing measures of transparency tends to be very difficult if performed manually by reviewers.

 

The overall aim of this study is to establish a scientific reporting quality metric that can be used across institutions and countries, as well as to highlight the need for high-quality reporting to ensure replicability within biomedicine, making use of manuscripts from the Reproducibility Project: Cancer Biology.

 

The authors address an enhancement of the previously introduced Rigor and Transparency Index (RTI), which attempts to automatically assess the rigor and transparency of journals, institutions, and countries using manuscripts scored on criteria found in reproducibility guidelines (eg, NIH, MDAR, ARRIVE).

 

Using work by the Reproducibility Project: Cancer Biology, the authors could determine that replication studies scored significantly higher than the original papers, which, according to the project, all required additional information from the authors before replication efforts could begin.

 

Unfortunately, RTI measures for journals, institutions, and countries all currently score lower than the replication study average. If the RTI of these replication studies is taken as a target for future manuscripts, more work will be needed to ensure the average manuscript contains sufficient information for replication attempts….”
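
The excerpt above describes the Rigor and Transparency Index, which scores manuscripts automatically against criteria drawn from reproducibility guidelines. The snippet below is only a minimal illustrative sketch of that style of checklist scoring, not the actual RTI pipeline; the criteria names, regular expressions, and example text are assumptions made up for the example.

```python
# Illustrative sketch (not the RTI itself): score a manuscript's methods text
# against a small checklist of reporting criteria loosely inspired by
# reproducibility guidelines. Criteria and patterns are assumptions.
import re

CRITERIA = {
    "blinding":          r"\bblind(ed|ing)\b",
    "randomization":     r"\brandomi[sz](ed|ation)\b",
    "sample_size":       r"\b(sample size|power (analysis|calculation))\b",
    "code_availability": r"\b(code|scripts?) (is |are )?available\b",
    "data_availability": r"\bdata (is |are )?available\b",
}

def reporting_score(methods_text: str) -> float:
    """Return the fraction of checklist criteria detected in the text."""
    hits = sum(bool(re.search(pattern, methods_text, flags=re.IGNORECASE))
               for pattern in CRITERIA.values())
    return hits / len(CRITERIA)

example = ("Animals were randomized to groups and investigators were blinded. "
           "Data are available in a public repository.")
print(f"Reporting-quality score: {reporting_score(example):.2f}")
```

A real system would use trained text-mining models rather than keyword patterns, but the aggregation step (a score per manuscript, averaged up to journals, institutions, or countries) follows the same logic.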

Journal Impact Factor and Peer Review Thoroughness and Helpfulness: A Supervised Machine Learning Study

Abstract: 

The journal impact factor (JIF) is often equated with journal quality and with the quality of the peer review of the papers submitted to the journal. We examined the association between the content of peer review and the JIF by analysing 10,000 peer review reports submitted to 1,644 medical and life sciences journals. Two researchers hand-coded a random sample of 2,000 sentences. We then trained machine learning models to classify all 187,240 sentences as contributing or not contributing to content categories. We examined the association between ten groups of journals defined by JIF deciles and the content of peer reviews using linear mixed-effects models, adjusting for the length of the review. The JIF ranged from 0.21 to 74.70. The length of peer reviews increased from the lowest JIF group (median 185 words) to the highest JIF group (median 387 words). The proportion of sentences allocated to different content categories varied widely, even within JIF groups. For thoroughness, sentences on ‘Materials and Methods’ were more common in the highest JIF journals than in the lowest JIF group (difference of 7.8 percentage points; 95% CI 4.9 to 10.7%). The trend for ‘Presentation and Reporting’ went in the opposite direction, with the highest JIF journals giving less emphasis to such content (difference -8.9%; 95% CI -11.3 to -6.5%). For helpfulness, reviews for higher JIF journals devoted less attention to ‘Suggestion and Solution’ and provided fewer ‘Examples’ than those for lower impact factor journals. No, or only small, differences were evident for the other content categories. In conclusion, peer review in journals with higher JIF tends to be more thorough in discussing the methods used but less helpful in terms of suggesting solutions and providing examples. Differences were modest and variability was high, indicating that the JIF is a poor predictor of the quality of peer review of an individual manuscript.
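
To make the analysis described in the abstract concrete, here is a minimal sketch (not the authors’ code) of the two steps it outlines: training a sentence classifier on the hand-coded sample, then relating the per-review share of ‘Materials and Methods’ sentences to JIF decile with a linear mixed-effects model that treats the journal as a random effect and adjusts for review length. The file names and column names are assumptions for illustration.

```python
# Sketch of the two analytical steps described in the abstract (assumed data files).
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Train a sentence classifier on the hand-coded sample.
#    Hypothetical columns: 'sentence', 'is_methods' (1 if the sentence
#    addresses Materials and Methods, else 0).
coded = pd.read_csv("hand_coded_sentences.csv")
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(coded["sentence"], coded["is_methods"])

# 2) Classify all sentences and aggregate to one row per review.
#    Hypothetical columns: 'review_id', 'journal_id', 'jif_decile', 'sentence'.
reviews = pd.read_csv("all_review_sentences.csv")
reviews["is_methods"] = clf.predict(reviews["sentence"])
per_review = (reviews.groupby(["review_id", "journal_id", "jif_decile"])
                     .agg(prop_methods=("is_methods", "mean"),
                          n_words=("sentence",
                                   lambda s: s.str.split().str.len().sum()))
                     .reset_index())

# 3) Linear mixed-effects model: JIF decile as fixed effect, journal as
#    random intercept, review length as covariate.
model = smf.mixedlm("prop_methods ~ C(jif_decile) + n_words",
                    data=per_review,
                    groups=per_review["journal_id"]).fit()
print(model.summary())
```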

 

 

Agreement on Reforming Research Assessment

“As signatories of this Agreement, we agree on the need to reform research assessment practices. Our vision is that the assessment of research, researchers and research organisations recognises the diverse outputs, practices and activities that maximise the quality and impact of research. This requires basing assessment primarily on qualitative judgement, for which peer review is central, supported by responsible use of quantitative indicators. Among other purposes, this is fundamental for: deciding which researchers to recruit, promote or reward, selecting which research proposals to fund, and identifying which research units and organisations to support….”

 

How AI could help make Wikipedia entries more accurate

“Automated tools can help identify gibberish or statements that lack citations, but helping human editors determine whether a source actually backs up a claim is a much more complex task — one that requires an AI system’s depth of understanding and analysis.

Building on Meta AI’s research and advancements, we’ve developed the first model capable of automatically scanning hundreds of thousands of citations at once to check whether they truly support the corresponding claims. It’s open-sourced here, and you can see a demo of our verifier here. As a knowledge source for our model, we created a new dataset of 134 million public webpages — an order of magnitude larger and significantly more intricate than ever used for this sort of research. It calls attention to questionable citations, allowing human editors to evaluate the cases most likely to be flawed without having to sift through thousands of properly cited statements. If a citation seems irrelevant, our model will suggest a more applicable source, even pointing to the specific passage that supports the claim. Eventually, our goal is to build a platform to help Wikipedia editors systematically spot citation issues and quickly fix the citation or correct the content of the corresponding article at scale….”

When a journal is both at the ‘top’ and the ‘bottom’: the illogicality of conflating citation-based metrics with quality | SpringerLink

Abstract:  The ‘quality’, ‘prestige’, and ‘impact’ of a scholarly journal is largely determined by the number of citations its articles receive. Publication and citation metrics play an increasingly central (but contested) role in academia, often influencing who gets hired, who gets promoted, and who gets funded. These metrics are also of interest to institutions, as they may affect government funding and the institutions’ position in influential university rankings. Within this context, researchers around the world are experiencing pressure to publish in the ‘top’ journals. But what if a journal is considered a ‘top’ journal and a ‘bottom’ journal at the same time? We recently came across such a case, and wondered if it was just an anomaly, or if it was more common than we might assume. This short communication reports the findings of our investigation into the nature and extent of this phenomenon in the Scimago Journal & Country Rank (SJR) and Journal Citation Reports (JCR), both of which produce influential citation-based metrics. In analyzing around 25,000 and 12,000 journals respectively, we found that journals are commonly placed into multiple subject categories. If citation-based metrics are an indication of broader concepts of research/er quality, as is so often implied or inferred, then we would expect journals to be ranked similarly across these categories. However, our findings show that it is not uncommon for journals to attract citations to differing degrees depending on their category, resulting in journals that may at the same time be perceived as both ‘high’ and ‘low’ quality. This study is further evidence of the illogicality of conflating citation-based metrics with journal, research, and researcher quality, a continuing and ubiquitous practice that impacts thousands of researchers.
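
A toy calculation illustrates why the same journal can be ranked ‘high’ and ‘low’ at once: its rank is computed separately within each subject category it is assigned to, and the resulting percentiles (and quartiles) can differ widely. The journal, categories, and numbers below are invented purely for illustration.

```python
# Invented example: one journal, two subject categories, two very different quartiles.
import pandas as pd

rankings = pd.DataFrame({
    "category":      ["Category A", "Category B"],
    "journal":       ["Journal X", "Journal X"],
    "rank":          [12, 180],      # position within the category (1 = best)
    "category_size": [200, 210],     # number of journals in the category
})
rankings["percentile"] = 1 - rankings["rank"] / rankings["category_size"]
rankings["quartile"] = pd.cut(rankings["percentile"],
                              bins=[0, 0.25, 0.5, 0.75, 1.0],
                              labels=["Q4", "Q3", "Q2", "Q1"])
print(rankings[["category", "percentile", "quartile"]])
# Journal X lands in Q1 of Category A but Q4 of Category B.
```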

 

Potential ethical problems in the creation of open educational resources through virtual spaces in academia: Heliyon

Abstract:  The concept of open educational resources (OER) is an emerging phenomenon that encourages modern teaching and learning in the higher education sector. Although many institutions are promoting the adoption and creation of OER, they still lack policies and development guidelines for their creation. This could perpetuate potential ethical problems that affect the development of OER. This study aimed to identify the ethical procedures and peer-review processes associated with the adoption and development of OER. A qualitative approach was used to gather data from OER developers in the academic space. Structuration theory was considered the main theoretical underpinning of this study. The commonly used big data virtual spaces for OER, such as social media and learning management systems (LMS), were identified. The study articulates three major causes of OER’s ethical problems: non-compliance with openness, transactional purchases of OER, and a lack of incentives for developers. Scholars’ ideas and OER outputs should not be undermined; however, there is a need for a peer-review process in the creation of OER. Institutions are expected to formulate the standards and requirements to be followed in the creation of OER, because OER contribute to the rise of big data in the education domain. The study recommends that OER be developed for a specific purpose and aligned with specific subject content, and that each resource be precise and peer reviewed for quality assurance.

 

Modern geoscience publishing – GEOSCIENTIST

“A preprint is the initial version of a research article, often (but not always) posted before submission to a journal and before formal peer review. Preprints help modernise geoscience by removing barriers that inhibit broad participation in the scientific process and that slow progress towards a more open and transparent research culture. …

Preprints have many well-documented benefits for both researchers and the public (e.g., Bourne et al., 2017; Sarabipour et al., 2019; Pourret et al., 2020). For example, preprints enable:

 

• Rapid sharing of research results, which can be critical for time-sensitive studies (such as after disasters), as well as for early career researchers applying for jobs, or any academic applying for grants or a promotion, given that journal-led peer review can take many months to years;
• Greater visibility and accessibility for research outputs, given that there is no charge for posting or reading a preprint, especially for those who do not have access to pay-walled journals or who have limited access due to remote working (such as during lockdowns);
• Additional peer feedback beyond that provided by journal-led peer review, enhancing the possibility of collaboration via community input and discussion;
• Researchers to establish priority (or a precedent) on their results, mitigating the chance of being ‘scooped’;
• Breakdown of the silos that traditional journals uphold, by exposing us to broader research than we might encounter otherwise, and giving a home to works that do not have a clear destination in a traditional publication;
• Research to be more open and transparent, with the intention of improving the overall quality, integrity, and reproducibility of results. …”

Hintergrund – Der MDPI-Verlag – Wolf im Schafspelz? [Background: MDPI – A wolf in sheep’s clothing?] | Laborjournal online

by Henrik Müller

The unorthodox methods of the Swiss publishing house MDPI are dividing the scientific community. Does its flood of special issues and ultra-fast peer review foster scientific exchange? Or does it do away with scientific quality? Critics and supporters hold very different views.

Research assessment and implementation of Open Science

“ACKNOWLEDGES that in order to accelerate the implementation and the impact of Open Science policies and practices across Europe, action has to be taken to move towards a renewed approach to research assessment, including incentive and reward schemes, to put in place a European approach in accordance with the Pact for Research and Innovation in Europe, and strengthen capacities for academic publishing and scholarly communication of all research outputs, and encourage where appropriate, the use of multilingualism for the purpose of wider communication of European research results….

EMPHASIZES that applying Open Science principles should be appropriately rewarded in researchers’ careers; …”