“We have some exciting news to share – a new and improved Journalytics Academic & Predatory Reports platform will soon be here. Our team has been working on multiple updates and enhancements to our tried and true platform that will benefit users in multiple ways. Along with our ongoing addition of new verified and predatory journals, users will experience better search results, new data points and visualizations, increased stability and speed, and more secure logins.
In addition to the visual elements and expanded analytics of this redesign, a key component is the full integration of our Journalytics and Predatory Reports databases. This integration will allow for comprehensive searches that present the full range of publishing opportunities and threats in a given area. Our goal is to facilitate journal discovery and evaluation so our users know the journals and know the risks.
Last month we hosted a webinar to give users a sneak peek at the upcoming changes, which include a new guided search page to jumpstart journal discovery, updated platform and journal card designs, and new data points such as fees and article output. Check out the video below or visit our YouTube channel where you’ll find a time-stamped table of contents in the description for easy navigation to specific points in the video….”
Objectives To describe and compare scholars who reviewed for predatory or legitimate journals in terms of their sociodemographic characteristics and their reviewing and publishing behaviour.
Design Linkage of random samples of predatory journals and legitimate journals of the Cabells Scholarly Analytics’ journal lists with the Publons database, employing the Jaro-Winkler string metric. Descriptive analysis of sociodemographic characteristics and reviewing and publishing behaviour of scholars for whom reviews were found in the Publons database.
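The abstract names the Jaro-Winkler string metric as the method used to link the Cabells journal lists with the Publons database. The study's actual matching pipeline is not reproduced here; the following is a minimal pure-Python sketch of the metric itself, using the standard 0.1 prefix weight and 4-character prefix cap. The example strings are illustrative only.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: combines shares of matched characters and transpositions."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    # Characters count as matches only within this sliding window.
    window = max(max(len(s1), len(s2)) // 2 - 1, 0)
    matched1 = [False] * len(s1)
    matched2 = [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):
        start = max(0, i - window)
        end = min(i + window + 1, len(s2))
        for j in range(start, end):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Transpositions: matched characters that appear in a different order.
    t = 0
    j = 0
    for i in range(len(s1)):
        if matched1[i]:
            while not matched2[j]:
                j += 1
            if s1[i] != s2[j]:
                t += 1
            j += 1
    t //= 2
    m = matches
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3


def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro-Winkler boosts the Jaro score for strings sharing a common prefix (up to 4 chars)."""
    base = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return base + prefix * p * (1 - base)


# Illustrative use: near-identical journal titles score close to 1.0,
# so record linkage keeps pairs above some similarity threshold.
print(jaro_winkler("MARTHA", "MARHTA"))  # ~0.9611, a classic reference pair
```

In a linkage setting such as the one described, a threshold on this score (chosen by the analysts) decides whether a journal title in one database is treated as the same journal in the other.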
Setting Peer review of journal articles.
Participants Reviewers who submitted peer review reports to Publons.
Measurements Number of reviews for predatory journals and legitimate journals per reviewer; academic age of reviewers; total number of reviews; number of publications; and numbers of reviews and publications per year.
Results Analyses included 183 743 unique reviews submitted to Publons by 19 598 reviewers. Of these, 6077 reviews (3.31%) were for 1160 predatory journals and 177 666 reviews (96.69%) were for 6403 legitimate journals. Most scholars (90.0%) never submitted reviews for predatory journals; few reviewed occasionally (7.6%) or rarely (1.9%) for predatory journals. Very few scholars submitted reviews predominantly (0.26%) or exclusively (0.35%) for predatory journals. Scholars in these last two groups were of younger academic age and had fewer publications and reviews than scholars in the other groups. Regions with the highest shares of predatory reviews were sub-Saharan Africa (21.8% reviews for predatory journals), Middle East and North Africa (13.9%) and South Asia (7.0%), followed by North America (2.1%), Latin America and the Caribbean (2.1%), Europe and Central Asia (1.9%) and East Asia and the Pacific (1.5%).
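As a quick arithmetic check (not part of the original study), the reported shares follow directly from the raw review counts:

```python
# Raw counts reported in the abstract
total_reviews = 183_743
predatory_reviews = 6_077
legitimate_reviews = 177_666

# The two categories account for every review in the sample.
assert predatory_reviews + legitimate_reviews == total_reviews

pct_predatory = round(predatory_reviews / total_reviews * 100, 2)
pct_legitimate = round(legitimate_reviews / total_reviews * 100, 2)
print(pct_predatory, pct_legitimate)  # 3.31 96.69, matching the abstract
```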
Conclusion To tackle predatory journals, universities, funders and publishers need to consider the entire research workflow and educate reviewers on concepts of quality and legitimacy in scholarly publishing.
“In scholarly publishing, blacklists aim to register fraudulent or deceptive journals and publishers, also known as “predatory”, to minimise the spread of unreliable research and the growth of fake publishing outlets. However, blacklisting remains a very controversial activity for several reasons: there is no consensus on the criteria used to identify fraudulent journals, the criteria used may not always be transparent or relevant, and blacklists are rarely updated regularly. Cabell’s paywalled blacklist service attempts to overcome some of these issues by reviewing fraudulent journals against transparent criteria and by providing allegedly up-to-date information at the journal entry level. We tested Cabell’s blacklist to analyse whether it could be adopted as a reliable tool by stakeholders in scholarly communication, including our own academic library. To do so, we used a copy of Walt Crawford’s Gray Open Access dataset (2012-2016) to assess the coverage of Cabell’s blacklist and gain insight into its methodology. Of the 10,123 journals we tested, 4,681 are included in Cabell’s blacklist. Of these, 3,229 are empty journals, i.e. journals in which not a single article has ever been published. Other collected data point to questionable weighting and reviewing methods and show a lack of rigour in how Cabell applies its own procedures: some journals are blacklisted on the basis of only 1 to 3 criteria, some of which are highly questionable; identical criteria are recorded multiple times in individual journal entries; there are discrepancies between reviewing dates and the criteria version used and recorded by Cabell; reviewing dates are missing; and we observed two journals blacklisted twice with different numbers of violations. Based on these observations, we conclude with recommendations and suggestions that could help improve Cabell’s blacklist service.”