Do you have a preprint in progress and want constructive feedback? Submit it for discussion at the ASAPbio-PREreview live-streamed preprint journal clubs – ASAPbio

“Preprints provide a great avenue for researchers to get feedback on their work from the community. This type of community feedback is particularly valuable when gathered on early preprints, that is, on manuscripts that are still work-in-progress, prior to their submission for journal publication. Feedback from the community can give authors a sense of which parts of the work are particularly appreciated by their peers and what can be improved in the write-up of the research, and can give them ideas for further experiments or lines of research.

To highlight the value of sharing early work via preprints and the benefits of community feedback, ASAPbio and PREreview are partnering to host live-streamed preprint journal clubs for early preprints (the event will follow the format of PREreview Live-streamed preprint journal clubs as described here). During the journal club, participants will discuss the preprint with a focus on highlighting the positive aspects of the work and on offering constructive suggestions for next steps for the study. After the collaborative discussion, we will post a summary of the discussion on PREreview’s platform for preprint reviews. The review will therefore receive a digital object identifier (DOI), and participants will have the option to be recognized for their contribution.

We invite authors of early preprints who would like feedback on their work to submit their work for discussion at one of these journal clubs….”

Can artificial intelligence assess the quality of academic journal articles in the next REF? | Impact of Social Sciences

“For journal article prediction, there is no knowledge base related to quality that could be leveraged to predict REF scores across disciplines, so only the machine learning AI approach is possible. All previous attempts to produce related predictions have used machine learning (or statistical regression, which is also a form of pattern matching). Thus, we decided to build machine learning systems to predict journal article scores. As inputs, based on an extensive literature review of related prior work, we chose: field and year normalised citation rate; authorship team size, diversity, productivity, and field and year normalised average citation impact; journal names and citation rates (similar to the Journal Impact Factor); article length and abstract readability; and words and phrases in the title, keywords and abstract. We used provisional REF2021 scores for journal articles with these inputs and asked the AI to spot patterns that would allow it to accurately predict REF scores….”
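As a rough illustration of the approach this excerpt describes, the sketch below assembles a few of the named inputs (field/year normalised citation rate, team size, journal citation rate, article length, and title/abstract text) and fits a gradient-boosted classifier to quality scores. The column names, the tiny synthetic dataset, and the choice of model are assumptions for illustration only, not the authors’ actual system.

```python
# Minimal sketch (not the authors' code) of predicting article quality scores
# from bibliometric and text features. Data and column names are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training table: one row per journal article.
articles = pd.DataFrame({
    "norm_citation_rate": [0.4, 2.1, 1.0, 3.5, 0.2, 1.8],     # field/year normalised
    "team_size": [2, 7, 3, 12, 1, 5],
    "journal_citation_rate": [1.2, 4.5, 2.0, 6.1, 0.9, 3.3],  # journal-level impact
    "article_length_pages": [8, 14, 10, 22, 6, 12],
    "title_abstract": [
        "a survey of citation indicators",
        "deep learning for research evaluation",
        "case study of peer review practice",
        "large scale analysis of funding outcomes",
        "notes on bibliometrics",
        "machine learning predicts article impact",
    ],
    "ref_score": [1, 3, 2, 4, 1, 3],  # quality score (1*-4*), synthetic here
})

numeric_cols = ["norm_citation_rate", "team_size",
                "journal_citation_rate", "article_length_pages"]

# Scale numeric bibliometric inputs; turn title/abstract words into tf-idf features.
features = ColumnTransformer([
    ("numeric", StandardScaler(), numeric_cols),
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "title_abstract"),
])

model = Pipeline([
    ("features", features),
    ("classifier", GradientBoostingClassifier(random_state=0)),
])

X = articles.drop(columns="ref_score")
y = articles["ref_score"]

model.fit(X, y)
# In-sample predictions, purely illustrative; real work would evaluate the learned
# patterns against held-out provisional REF scores.
print(model.predict(X))
```

On real data the evaluation would of course use held-out scores rather than in-sample predictions; the sketch only shows how the listed inputs could feed a pattern-matching model.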

TIER2

“Enhancing Trust, Integrity and Efficiency in Research through next-level Reproducibility…

TIER2 aims to boost knowledge on reproducibility, create tools, engage communities, and implement interventions and policy across different contexts to increase the re-use and overall quality of research results….”

Market forces influence editorial decisions – ScienceDirect

“In this issue of Cortex, Huber et al. recount their experience in attempting to update the scientific record through an independent replication of a published study (Huber, Potter, & Huszar, 2019). In general, publishers resist issuing retractions, refutations or corrections to their stories or papers for fear of losing public trust, diminishing their brand and possibly ceding their market share (Sullivan, 2018). Unfortunately, this is just one way that market logic – retaining a competitive advantage among peers – explicitly or implicitly influences editorial priorities and decisions more broadly….

There’s the well-known tautology that news is what newsrooms decide to cover and what’s “newsworthy” is influenced by market logic. That news organizations, charged with relating truth and facts, are subject to market-based decisions is a major source of contention among the discerning public. It should be even more contentious that the stewards of scientific knowledge, academic publishers, are also beholden to it….

Although top journals are loath to admit they ‘chase cites’ (Editorial, 2018), market forces make this unavoidable. One example is a strategy akin to product cost cross-subsidization, such as when, in journalism, profitable traffic-driving, click-bait articles subsidize more costly, in-depth, long-form investigative reporting. In order to attract the ‘best’ science, top journals must maintain a competitive impact factor. If the impact factor strays too far from the nearest competitor, then the journal will have trouble publishing the science it deems most important because of the worth that coveted researchers place on perceived impact….

Although publishers tout the value of replications and pay lip service to other reformative practices, their policies in this regard are often vague and non-committal….

Most professional editors are committed to advancing strong science, but however well-intentioned reforms are and however much they are sought in good faith, they are necessarily hamstrung by market forces. This includes restrained requirements for more rigorous and responsible research conduct. Journals do not want to put in place policies that seem so onerous that authors decide to publish instead in competing but less demanding journals. Researchers need incentives for and enforcement of more rigorous research practices, but they want easier paths to publication. The result is that new policies at top journals allow publishers to maintain a patina of progressiveness in the absence of real accountability….

The reforms suggested by Huber et al. are welcome short-term fixes, but the community should demand longer-term solutions that break up the monopoly of academic publishers and divorce the processes of evaluation, publication and curation (Eisen and Polka, 2018). Only then may we wrest the power of science’s stewardship from the heavy hand of the market.”

SciELO – Public Health – The challenge of preprints for public health

“ASAPbio (Accelerating Science and Publication in Biology) 3 is a group of biology researchers that promotes preprint publication and has produced a number of studies that attempt to allay concerns about preprint quality, claiming, for example, that published articles previously submitted to a preprint server did not show relevant changes upon journal publication 4. Authors from this group have argued that current approaches to evaluating research and researchers hold back a more widespread adoption of preprints 5, which would explain their relatively small share of the general panorama of scientific publication.

 

Despite claims to the contrary, however, there are examples of poor studies published as preprints that caused undesirable consequences for public health. Two methodologically flawed studies about a protective effect of tobacco smoking against COVID-19 (one of which had an author with known connections to the tobacco industry), for example, increased the commercialization of tobacco products in France and Iran 6, and a virology study that erroneously stated that the SARS-CoV-2 virus had “HIV insertions” fueled conspiracy theories about the virus being a bioweapon, which lingered on even after the preprint was removed from the server due to its egregious errors 7. Studies have found that much of the public discussion and even policy was indeed driven by what was published in preprints rather than in scientific journals 7,8,9,10; thus, quality issues are a major cause of concern.

 

On the other hand, similar errors have been observed within traditional publishing; the publication of a poor-quality paper with undisclosed conflicts of interest in one of the most prestigious medical journals, The Lancet, which became the trigger for the contemporary wave of anti-vaccine activism, is a major, and regrettable, example. Understanding to what extent this problem is likely to occur with or without gatekeeping mechanisms is necessary.

 

Preprint advocates countered that the effect of poor science disseminated via preprints would be lessened by media reporting that explicitly indicated that those studies had not undergone any peer review and thus required more criticism and reserve before being considered essential sources for public debate. This was probably the case for the South African media 8, but in Brazil a study found that less than 40% of preprint-based reports in the mass media clearly indicated their provisional character 11….”

Tren Publikasi Jurnal Open Access di Indonesia [Trends in Open Access Journal Publication in Indonesia]

Abstract:  The purpose of this research is to determine the publication trend of open access journals in Indonesia. A systematic review was used as the research method in this study. The focus of this study is to identify the themes that are most frequently discussed and discovered in studies on open access journals in Indonesia. According to the findings of this study, the most frequently discussed research themes were those concerning journal governance, marketing strategies, user perspectives, and matrices. Although several themes were discovered, the most frequently discussed theme was journal governance, specifically the challenges and problems faced by journal managers. Financing issues, journal quality, and piracy are among the challenges and issues discussed. This is due to the state of open access journals, which are still evolving in response to technological advancements.

 

Open issues for education in radiological research: data integrity, study reproducibility, peer-review, levels of evidence, and cross-fertilization with data scientists | SpringerLink

Abstract:  We are currently facing extraordinary changes. Competition in the field of science is becoming harder and harder in each country, as well as across continents and worldwide. In this context, what should we teach to young students and doctors? There is a need to look backward and return to “fundamentals”, i.e. the deep characteristics that must characterize research in every field, even in radiology. In this article, we focus on data integrity (including the “declarations” given by the authors who submit a manuscript), reproducibility of study results, and the peer-review process. In addition, we highlight the need to raise the level of evidence of radiological research from the estimation of diagnostic performance to that of diagnostic impact, therapeutic impact, patient outcome, and social impact. Finally, on the emerging topic of radiomics and artificial intelligence, the recommendation is to aim for cross-fertilization with data scientists, possibly involving them in the clinical departments.

 

Over €4.4 million granted to four new projects to enhance the common European data space for cultural heritage | Europeana Pro

“The Europeana Initiative is at the heart of the common European data space for cultural heritage, a flagship initiative of the European Union to support the digital transformation of the cultural heritage sector. Discover the projects funded under the initiative….

We are delighted to announce that the European Commission has funded four projects under their new flagship initiative for deployment of the common European data space for cultural heritage. The call for these projects, launched in spring 2022, aimed at seizing the opportunities of advanced technologies for the digital transformation of the cultural heritage sector. This included a focus on 3D, artificial intelligence or machine learning for increasing the quality, sustainability, use and reuse of data, which we are excited to see the projects explore in the coming months….”

H index, journal citation indicator, and other impact factors in neurosurgical publications – Is there a ‘cost factor’ that determines the quality? – ScienceDirect

Abstract:  Objective

There has been an increase in the number of neurosurgical publications, including open access publications, over recent years. We aim to compare journals’ performance and its relationship to the submission fee incurred in publication. We have performed an in-depth analysis of various neurosurgical journals’ performance in terms of bibliometrics and have attempted to determine whether the cost incurred has any impact on the quality of a journal’s output.

Methods

We identified 53 journals publishing neurosurgical-related work. Quantitative analysis from various search engines involved obtaining H indices, journal citation indicators (JCI), and other journal metrics such as the immediacy index and 5-year impact factor, utilising Journal Citation Reports from Clarivate. Open access fees, coloured print costs, and individual subscription fees were collected. Correlations were computed using Spearman’s rho (ρ), with significance set at p < 0.05.

Results

The median H index for the 53 journals is 54 (range: 0-292), and the median journal citation indicator is 0.785 (range: 0-2.45). The median immediacy index is 0.797 (range: 0-4.076) and the median 5-year impact factor is 2.76 (range: 0-12.704). There are very strong positive correlations between the JCI and the immediacy index, between the JCI and the 5-year impact factor, and between the 5-year impact factor and the immediacy index (ρ > 0.7, p < 0.05). There is a moderate positive correlation between the H index and the JCI (ρ = 0.399, p = 0.004). It is unclear whether there is any correlation between the indices and the OA costs or the subscription costs for personal use, respectively (p > 0.05).

Conclusions

Our analysis indicates that higher open access fees and personal subscription costs are not clearly reflected in journals’ performance, as quantified using the various indices. There appears to be a strong association among the journals’ metrics themselves. It would be beneficial to include teaching about bibliometric indices and their impact on research publications in medical education and training, to maximise the quality of the scientific work produced and increase the visibility of the information produced. A potential full movement to exclusively OA journals would form a significant barrier for junior researchers, small institutions, or full-time trainee doctors with limited funding available. This study suggests the need for a robust measurement of journals’ output and the quality of the work produced.
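As a rough sketch of the correlation analysis the abstract describes, the snippet below computes pairwise Spearman’s rho between a few journal indices and an open access fee column. The journal values are invented placeholders, not data from the study.

```python
# Minimal sketch (not the study's analysis): pairwise Spearman correlations
# between journal bibliometric indices and publication costs. Values are invented.
import pandas as pd
from scipy.stats import spearmanr

journals = pd.DataFrame({
    "h_index": [292, 120, 54, 30, 12, 5],
    "jci": [2.45, 1.60, 0.785, 0.50, 0.20, 0.05],
    "five_year_if": [12.70, 6.20, 2.76, 1.90, 0.80, 0.30],
    "immediacy_index": [4.08, 2.10, 0.80, 0.55, 0.20, 0.10],
    "open_access_fee_usd": [3500, 3000, 2500, 2900, 1800, 2200],
})

# Spearman's rho (rank correlation) and p-value for every pair of columns.
cols = list(journals.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        rho, p = spearmanr(journals[a], journals[b])
        print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")
```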

[2212.07811] Do altmetric scores reflect article quality? Evidence from the UK Research Excellence Framework 2021

Abstract:  Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with journal article quality. It exploits expert norm-referenced peer review scores from the UK Research Excellence Framework 2021 for 67,030+ journal articles in all fields 2014-17/18, split into 34 Units of Assessment (UoAs). The results show that altmetrics are better indicators of research quality than previously thought, although not as good as raw and field normalised Scopus citation counts. Surprisingly, field normalising citation counts can reduce their strength as a quality indicator for articles in a single field. For most UoAs, Mendeley reader counts are the best, tweet counts are also a relatively strong indicator in many fields, and Facebook, blog and news citations are moderately strong indicators in some UoAs, at least in the UK. In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities. The Altmetric Attention Score, although hybrid, is almost as good as Mendeley reader counts as a quality indicator and reflects more non-scholarly impacts.

 

[2212.05416] In which fields are citations indicators of research quality?

Abstract:  Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the extent to which citation counts reflect research quality is not well understood. We report the largest-scale evaluation of the relationship between research quality and citation counts, correlating them for 87,739 journal articles in 34 field-based Units of Assessment (UoAs) from the UK. We show that the two correlate positively in all academic fields examined, from very weak (0.1) to strong (0.5). The highest correlations are in health, life sciences and physical sciences and the lowest are in the arts and humanities. The patterns are similar for the field classification schemes of Scopus and this http URL. We also show that there is no citation threshold in any field beyond which all articles are excellent quality, so lists of top cited articles are not definitive collections of excellence. Moreover, log transformed citation counts have a close to linear relationship with UK research quality ranked scores that is shallow in some fields but steep in others. In conclusion, whilst appropriately field normalised citations associate positively with research quality in all fields, they never perfectly reflect it, even at very high values.
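Two of the ideas in this abstract, field/year normalisation of citation counts and the use of log-transformed citations as a quality indicator, can be illustrated with a small sketch. The data, field labels, and quality scores below are hypothetical and are not drawn from the UK dataset.

```python
# Minimal sketch (not the authors' method): field/year-normalised citations and
# the correlation of log-transformed citations with peer-review quality scores.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

articles = pd.DataFrame({
    "field": ["physics", "physics", "history", "history", "physics", "history"],
    "year": [2015, 2015, 2016, 2016, 2016, 2016],
    "citations": [40, 4, 3, 0, 15, 7],
    "quality": [4, 2, 3, 1, 3, 4],  # hypothetical REF-style 1*-4* scores
})

# Field- and year-normalised citation rate: citations divided by the mean citation
# count of articles in the same field and year (a common normalisation scheme).
articles["norm_citations"] = articles["citations"] / articles.groupby(
    ["field", "year"])["citations"].transform("mean")

# Log transform (log1p copes with zero-citation articles) before correlating.
articles["log_citations"] = np.log1p(articles["citations"])

rho, p = spearmanr(articles["log_citations"], articles["quality"])
print(articles)
print(f"Spearman rho, log citations vs quality: {rho:.2f} (p = {p:.3f})")
```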

 

Responsible Research Assessment I: Implementing DORA for hiring and promotion in psychology | PsychArchives

Abstract:  The use of journal impact factors and other metric indicators of research productivity, such as the h-index, has been heavily criticized for being invalid for the assessment of individual researchers and for fueling a detrimental “publish or perish” culture. Multiple initiatives call for developing alternatives to existing metrics that better reflect quality (instead of quantity) in research assessment. This report, written by a task force established by the German Psychological Society, proposes how responsible research assessment could be done in the field of psychology. We present four principles of responsible research assessment in hiring and promotion and suggest a two-step assessment procedure that combines the objectivity and efficiency of indicators with a qualitative, discursive assessment of shortlisted candidates. The main aspects of our proposal are (a) to broaden the range of relevant research contributions to include published data sets and research software, along with research papers, and (b) to place greater emphasis on quality and rigor in research evaluation.

 

Opinion: The Promise and Plight of Open Data | TS Digest | The Scientist

“At the same time, open data allow anyone to reproduce a study’s analyses and validate its findings. Occasionally, readers identify errors in the data or analyses that slipped through the peer-review process. These errors can be handled through published corrections or retractions, depending on their severity. One would expect open data to result in more errors being identified and fixed in published papers. 

But are journals with open-data policies more likely than their traditional counterparts to correct published research with erroneous results? To answer this, we collected information on data policies and article retractions for 199 journals that publish research in the fields of ecology and evolution, and compared retraction rates before and after open-data policies were implemented. 

Surprisingly, we found no detectable link between data-sharing policies and annual rates of article retractions. We also found that the publication of corrections was not affected by requirements to share data, and that these results persisted after accounting for differences in publication rates among journals and time lags between policy implementation and study publication due to the peer-review process. While our analysis was restricted to studies in ecology and evolution, colleagues in psychology and medicine have suggested to us that they expect similar patterns in their fields of study. 

Do these results mean that open-data policies are ineffective? No. There is no doubt that open data promote transparency, but our results suggest that a greater potential for error detection does not necessarily translate into greater error correction. We propose three additional practices, some of which could actually improve open-data practices, to help science self-correct. …”
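As a rough sketch of the before/after comparison described in this excerpt, the snippet below computes annual retraction rates for each journal relative to the year it adopted an open-data policy. Journal names, years, and counts are hypothetical placeholders, and the sketch omits the adjustments for publication rates and peer-review time lags that the authors mention.

```python
# Minimal sketch (not the authors' analysis): retraction rates per journal-year,
# split into the periods before and after an open-data policy. Data are invented.
import pandas as pd

records = pd.DataFrame({
    "journal": ["J1", "J1", "J1", "J2", "J2", "J2"],
    "year": [2014, 2016, 2018, 2014, 2016, 2018],
    "articles": [200, 210, 220, 150, 160, 170],
    "retractions": [1, 0, 2, 0, 1, 1],
})
policy_year = {"J1": 2016, "J2": 2017}  # year each journal adopted open data

records["period"] = [
    "after" if year >= policy_year[journal] else "before"
    for journal, year in zip(records["journal"], records["year"])
]

# Retraction rate per journal-year, then averaged within each period.
records["rate"] = records["retractions"] / records["articles"]
print(records.groupby(["journal", "period"])["rate"].mean())
```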

Pandemic and infodemic: the role of academic journals and preprints | SpringerLink

“In contrast, before the outbreak of the COVID-19 pandemic, clinical researchers were generally reluctant to adopt widespread sharing of preprints, probably because of concern about the potential harm to patients if medical treatment is based on findings that have not been vetted by peer reviewers. For example, the BMJ group opened a preprint server (ClinMedNetPrints.org) in 1999, but it was closed in 2008 because only around 80 submissions were posted during this period [7]. Cold Spring Harbor Laboratory later launched bioRxiv in 2013 and, together with the BMJ group and Yale University, medRxiv in 2019 [7], but these servers were not actively used.

The outbreak of the COVID-19 pandemic prompted clinical researchers to use preprint servers actively, and during the initial few years of the pandemic more than 35,000 preprints, mainly related to COVID-19, were posted to medRxiv. This marked increase in the posting of preprints indicates that clinical researchers have found benefits in preprints in the era of the COVID-19 pandemic: research outcomes can be disseminated quickly, potentially speeding up research that may lead to the development of vaccines and treatments; the quality of a draft can be improved by receiving feedback from a wider group of readers; the authors can claim priority for their discovery; and, unlike articles published in subscription-based journals, all preprints are freely available to anyone….”

 

The challenge of preprints for public health

“Despite disagreements over whether this form of publication is actually beneficial or not, its advantages and problems present a high degree of convergence among advocates and detractors. On the one hand, preprints are beneficial because they are a quicker way to disseminate scientific content with open access to everyone; on the other hand, the lack of adequate vetting, especially peer review, increases the risk of disseminating bad science and can lead to several problems 2. The dissent lies in considering to what extent the possible risks outweigh the possible benefits (or vice versa).

 

The argument about this rapid dissemination has strong supporting evidence. A study on preprint publication showed that preprints are published on average 14 months earlier than peer-reviewed journal articles 1. This is expected, considering that the time-intensive process of peer review and manuscript revision is bypassed entirely. However, in this strength lies its very fragility: how can we ensure that this shorter process will not compromise the quality of the publication?…”

 
