Abstract: This article explores the application of journal quality and credibility evaluation tools to library science publications. The researchers investigate quality and credibility attributes of forty-eight peer-reviewed library science journals with open access components using two evaluative tools developed and published by librarians. The results identify common positive and negative attributes of library science journals, compare the results of the two evaluation tools, and discuss their ease of use and limitations. Overall, the results show that while library science journals do not fall prey to the same concerning characteristics that librarians use to caution other researchers, there are several areas in which publishers can improve the quality and credibility of their journals.
Abstract: In this study, by using Beall’s (Scholarly open-access, 2014; Beall’s list of predatory journals and publishers, 2018) predatory journal lists as well as direct e-mail solicitations from journals, we intentionally submitted a poorly written manuscript to 58 open-access journals using counterfeit names and affiliations. Although there have been several studies examining the practices of questionable journals, there is a lack of research investigating the interactive processes in detail. Our analysis, then, aimed to provide a more comprehensive view of the underlying reasoning for the acceptance or rejection of a manuscript. Of the 31 journals acknowledging receipt of our manuscript, 21 either accepted it unexpurgated or asked only for cosmetic revisions. Regarding ‘positive responses’, we point to five common flaws associated with such journals, namely that (1) they lack any interest in the researchers who are submitting manuscripts; (2) they do not judge academic writing in accordance with expected conventions; (3) they appear to be indifferent to scholarship, including research design, plagiarism issues, and citation quality; (4) their review process is opaque and overly hasty; and (5) the tone they use in correspondence e-mail messages is highly inappropriate. Based upon the investigation, it is clear that such journals’ primary aim is securing the article processing fee. Our findings paint a more comprehensive picture of questionable journal practices with the hope of disseminating such information to the broader scholarly community.
“Until recently, MDPI and Frontiers were known for their meteoric rise. At one point, powered by the Guest Editor model, the two publishers combined for about 500,000 papers (annualized), which translated into nearly USD 1,000,000,000 in annual revenue. Their growth was extraordinary, but so has their contraction. MDPI has declined by 27% and Frontiers by 36% in comparison to their peak.
Despite their slowdown, MDPI and Frontiers have become an integral part of the modern publishing establishment. Their success reveals that their novel offering resonates with thousands of researchers. Their turbulent performance, however, shows that their publishing model is subject to risk, and its implementation should acknowledge and mitigate such risk….”
A major education exercise is needed to ensure that Editors are aware of the problem of paper mills, and Editors/editorial staff are trained in identifying the fake papers.
Continued investment in tools and systems to pick up suspect papers as they are submitted.
Engagement with institutions and funders to review incentives for researchers to publish valid papers and not use services that will give quick but fake publication.
Investigation of protocols that can be put in place to impede paper mills from succeeding in their goals.
Review the retraction process to take account of the unique features of paper mill papers.
Investigate how to ensure retraction notices are applied to all copies of a paper such as preprint servers and article repositories….”
Abstract: Research is becoming increasingly accessible to the public via open access publications, researchers’ social media postings, outreach activities, and popular disseminations. A healthy research discourse is typified by debates, disagreements, and diverging views. Consequently, readers may rely on the information available, such as publication reference attributes and bibliometric markers, to resolve conflicts. Yet, critical voices have warned about the uncritical and one-sided use of such information to assess research. In this study we wanted to gain insight into how individuals without research training place trust in research based on clues present in publication references. A questionnaire was designed to probe respondents’ perceptions of six publication attributes. A total of 148 students responded to the questionnaire, of which 118 were undergraduate students (with limited experience and knowledge of research) and 27 were graduate students (with some knowledge and experience of research). The results showed that the respondents were mostly influenced by the number of citations and the recency of publication, while author names, publication type, and publication origin were less influential. There were few differences between undergraduate and graduate students, with the exception that undergraduate students more strongly favoured publications with multiple authors over publications with single authors. We discuss possible implications for teachers who incorporate research articles in their curriculum.
“Measuring the transparency and credibility of research is fundamental to our mission. By having measures of transparency and credibility we can learn about the current state of research practice, we can evaluate the impact of our interventions, we can track progress on culture change, and we can investigate whether adopting transparency behaviors is associated with increasing credibility of findings….
Many groups have conducted research projects that manually code a sample of papers from a field to assess current practices. These are useful but highly effortful. If machines can be trained to do the work, we will get much more data, more consistently, and much faster. There are at least three groups that have made meaningful progress creating scalable solutions: Ripeta, SciScore, and DataSeer. These groups are trying to make it possible, accurate, and easy to assess many papers for whether the authors shared data, used reporting standards, identified their conflicts of interest, and other transparency relevant actions….”
From Google’s English: “Scientific knowledge gained a relevant audience in the pandemic because lies about Covid-19 threaten the lives of the population. It has been a long time since humanity faced such a high-mortality disease globally. The pandemic required scientific journals to ensure the rapid publication of available evidence, guaranteeing the quality of information and the identification of biases that could compromise it, since these works are the essential raw material to fight fake news, misinformation and conspiracy theories, which undermine the population’s adherence to the measures necessary to fight the pandemic….
In an infodemic, fanciful, incredible news that appeals to emotions and seems more phenomenal than reality itself naturally gains traction. The scientific dissemination of Covid-19 became an objective response by scientists to the denialist movement, which calls into question the effectiveness of vaccines, sabotages prevention measures and propagates miracle cures….”
“During my time overseeing the library services department of a large school district, we found our subscription databases were generally a well-kept secret. The lack of trained school librarians available to teach these resources was part of the issue. But Google was ubiquitous, as was Wikipedia, and they became de facto research sources for students, despite their limitations for such a role.
Google has its place for students and researchers (I used it for this article), as does Google Scholar (which I also used). But for students, subscription databases should also play a central research role, beginning with age-appropriate sources for elementary kids – like National Geographic – and moving up to “Gale in Context” for middle school students, and more scholarly articles for high schoolers from sources like ABC-CLIO….”
Abstract: This paper discusses the reasons for the emergence of predatory publications in India, engendered by mandates of higher educational institutions that require a stipulated number of research publications for employment and promotions. Predatory journals have eclipsed the merits of open access publishing, compromised ethical practices, and left the research community groping for benchmarks of research integrity and publication ethics. To fight the menace of predatory publications, the University Grants Commission, India established the “Consortium for Academic Research and Ethics” (UGC-CARE) in 2018 to promote and benchmark research integrity and publication ethics among Indian academia. The present paper discusses the UGC-CARE initiative, its structure, objectives and, specifically, the “UGC-CARE Reference List of Quality Journals” (UGC-CARE list), and finally the challenges it faces.
Abstract: Responding to calls to take a more active role in communicating their research findings, scientists are increasingly using open online platforms, such as Twitter, to engage in science communication or to publicize their work. Given the ease with which misinformation spreads on these platforms, it is important for scientists to present their findings in a manner that appears credible. To examine the extent to which the online presentation of science information relates to its perceived credibility, we designed and conducted two surveys on Amazon’s Mechanical Turk. In the first survey, participants rated the credibility of science information on Twitter compared with the same information in other media, and in the second, participants rated the credibility of tweets with modified characteristics: presence of an image, text sentiment, and the number of likes/retweets. We find that similar information about scientific findings is perceived as less credible when presented on Twitter compared to other platforms, and that perceived credibility increases when presented with recognizable features of a scientific article. On a platform as widely distrusted as Twitter, use of these features may allow researchers who regularly use Twitter for research-related networking and communication to present their findings in the most credible formats.
Abstract: Researchers from Uzbekistan are leading the global list of publications in predatory journals. The current paper reviews the principles of implementation of the “publish or perish policy” in Uzbekistan with an overarching aim of detecting the factors that are pushing more and more scholars to publish the results of their studies in predatory journals. Scientific publications have historically been a cornerstone in the development of science. For the past five decades, the quantity of publications has become a common indicator for determining academic capacity. Governments and institutions are increasingly employing this indicator as an important criterion for promotion and recruitment; simultaneously, researchers are being awarded Ph.D. and D.Sc. degrees for the number of articles they publish in scholarly journals. Many talented academics have had a pay rise or promotion declined due to a short or nonexistent bibliography, which leads to significant pressure on academics to publish. The “publish or perish” principle has become a trend in academia and the key performance indicator for habilitation in Uzbekistan. The present study makes a case for re-examining the criteria set by the Supreme Attestation Commission of the Republic of Uzbekistan for candidates applying for Ph.D. and D.Sc. as well as faculty promotion requirements in the light of current evidence for the deteriorating academic performance of scholars.
“Anyone tracking scholarship on Central Asia is sure to be swamped by Uzbek research in unreputable publications
A new paper has found why: Under pressure from Uzbekistan’s government, academics are succumbing to predatory journals – publishers that, for a fee, overlook best practices like peer review or editing. Many of the researchers are forced to publish far more often than feasible if the bar were higher, and the quality shows: Uzbek academics are global leaders in spreading research that some scholars would explicitly call “bullshit.”
“Publish or perish”: It’s a global problem among academics, with eye-opening salience in Uzbekistan, find Bahtiyor Eshchanov of the Center for Economic Research and Reforms in Tashkent and his three Uzbek co-authors in a new paper in Publications, a peer-reviewed journal about scholarly publishing….”
Objectives To describe and compare the characteristics of scholars who reviewed for predatory or legitimate journals in terms of their sociodemographic characteristics and reviewing and publishing behaviour.
Design Linkage of random samples of predatory journals and legitimate journals of the Cabells Scholarly Analytics’ journal lists with the Publons database, employing the Jaro-Winkler string metric. Descriptive analysis of sociodemographic characteristics and reviewing and publishing behaviour of scholars for whom reviews were found in the Publons database.
Setting Peer review of journal articles.
Participants Reviewers who submitted peer review reports to Publons.
Measurements Numbers of reviews for predatory journals and legitimate journals per reviewer. Academic age of reviewers, the total number of reviews, number of publications and number of reviews and publications per year.
Results Analyses included 183 743 unique reviews submitted to Publons by 19 598 reviewers. Six thousand and seventy-seven reviews were for 1160 predatory journals (3.31% of all reviews) and 177 666 reviews for 6403 legitimate journals (96.69%). Most scholars never submitted reviews for predatory journals (90.0% of all scholars); few scholars (7.6%) reviewed occasionally or rarely (1.9%) for predatory journals. Very few scholars submitted reviews predominantly or exclusively for predatory journals (0.26% and 0.35%, respectively). The latter groups of scholars were of younger academic age and had fewer publications and reviews than the first groups. Regions with the highest shares of predatory reviews were sub-Saharan Africa (21.8% reviews for predatory journals), Middle East and North Africa (13.9%) and South Asia (7.0%), followed by North America (2.1%), Latin America and the Caribbean (2.1%), Europe and Central Asia (1.9%) and East Asia and the Pacific (1.5%).
Conclusion To tackle predatory journals, universities, funders and publishers need to consider the entire research workflow and educate reviewers on concepts of quality and legitimacy in scholarly publishing.
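The linkage step in the study above matched reviewer records between the Cabells journal lists and the Publons database using the Jaro-Winkler string metric. As a minimal illustrative sketch (not the authors’ actual code), the metric can be implemented as follows; it rewards strings that share a common prefix, which makes it well suited to fuzzy matching of personal names:

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: combines the match count and transposition count."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters match if identical and within half the longer length.
    window = max(len1, len2) // 2 - 1
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count matched characters that appear in a different order.
    transpositions, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    t = transpositions // 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro-Winkler: boosts the Jaro score for a shared prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

For example, the classic pair `jaro_winkler("MARTHA", "MARHTA")` scores about 0.961, so near-identical transposed names would link, while unrelated names score near zero. Production record linkage would also normalize case, diacritics, and name order before comparing.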
“So, is MDPI predatory or not? I think it has elements of both. I would name their methods aggressive rent extracting, rather than predatory. And I also think that their current methods & growth rate are likely to make them shift towards more predatory over time.
MDPI publishes good papers in good journals, but it also employs some strategies that are proper to predatory publishers. I think that the success of MDPI in recent years is due to the creative combination of these two apparently contradicting strategies. One — the good journals with high quality — creates a rent that the other — spamming hundreds of colleagues to solicit papers, an astonishing increase in Special Issues, publishing papers as fast as possible — exploits. This strategy makes a lot of sense for MDPI, which shows strong growth rates and is en route to becoming the largest open access publisher in the world. But I don’t think it is a sustainable strategy. It suffers from basic collective action problems, which might deal a lot of damage to MDPI first, and, most importantly, to scientific publishing in general….
A predatory publisher is a journal that would publish anything — usually in return for money. MDPI rejection rates make this argument hard to sustain. Yet, MDPI is using some of the same techniques of predatory journals….”