“When we think about the predatory journal issue, the ability of Large Language Models (LLMs) to generate convincing text at zero cost to the user threatens the business model of the deceptive publisher. Only a few studies have examined the motivations of authors who publish in predatory journals, but those that have looked at the question broadly identify such authors as either unaware or unethical. While a naïve author may still publish in a predatory journal believing it to be legitimate, an unethical one may weigh the expense and risk of knowingly doing so against the cheaper and potentially less risky alternative of using Generative AI.
For example, imagine you are an unethical author: you just want a publication in a recognized journal, you are willing to take some risks to get one, but you are unwilling to actually write a real paper yourself. Publishing in a predatory journal is an option, but it will cost a few hundred dollars and only gives you a publication in a journal few people have heard of: significant risk for minimal reward. However, if you ask an AI to write you a paper and then make some tweaks, you might get it published in a better journal for no fee (assuming you take the non-open-access route). With AI-detection software still in its infancy, an unethical author may decide that using an AI is the most effective course of action, hoping the paper escapes detection and is accepted. And of course, they can increase their output much more easily this way. So as we can see, an unintended consequence of Generative AI could be to reduce demand for predatory journals, effectively disrupting their business model…”