Written by Paul Shannon, John Chodacki, Nokome Bentley, Adam Hyde, Ryan Dix-Peek, Yannis Barlas and Ben Whitmore
Recently, a team of technologists from across the globe came together in New Zealand to brainstorm how to responsibly integrate AI into the peer review process using Kotahi. We recognize the potential benefits AI can bring, such as increased efficiency and accuracy, but we also acknowledge the need to be thoughtful about how we implement this technology. In this article, we share some of the key insights and considerations from that collaborative session.
As the scientific publishing landscape continues to evolve, many are looking to AI as a potential solution to streamline the publishing process. At Kotahi, we’ve been thinking about how AI can be integrated into our platform to improve the efficiency and accuracy of the whole process.
Types of AI contributions
One question we’ve been grappling with is how to be transparent and honest about AI’s contribution and how to quantify its impact. Our approach was first to distinguish between different applications of AI technologies so we can draw clearer boundaries around where each is appropriate:
Computer-assisted humans use AI to help with certain tasks, such as identifying potential conflicts of interest or suggesting potential reviewers.
Generative AI, on the other hand, can create original content, such as writing summaries or even entire manuscripts.
Establishing this foundational distinction gives us a clearer understanding of the role AI should play in the publication process.
Publishing is collaboration
While AI can help with certain tasks, such as identifying potential reviewers, it is important to remember that humans must continue to play the primary role in the publishing process. Authors and reviewers provide invaluable feedback and insights that cannot be replicated by AI alone. Therefore, we need to integrate AI into the publishing process in a way that is transparent and does not diminish the importance of human input.
One potential solution that maintains this distinction is to offer AI assistance to a reviewer directly in the reviewer form, as an opt-in, to help them turn their review notes (perhaps just bullet points) into readable sentences and paragraphs in a constructive, respectful tone suitable for a review. Reviewers could then choose whether to use AI to assist with their reviews, while the platform remains transparent about where AI was used in the review process.
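As a rough illustration of how such an opt-in assist could work, the sketch below builds a prompt from a reviewer's bullet notes before sending it to a language model. This is a hypothetical example, not Kotahi's actual implementation; the function name, prompt wording, and note format are all assumptions for illustration.

```javascript
// Hypothetical sketch: construct a prompt asking a language model to turn
// a reviewer's bullet-point notes into constructive, respectful prose.
// The instruction not to add new claims keeps the human reviewer's judgment
// as the sole source of substance.
function buildReviewAssistPrompt(bulletNotes) {
  const notes = bulletNotes.map((n) => `- ${n}`).join("\n");
  return [
    "You are assisting a peer reviewer.",
    "Rewrite the following review notes as clear, constructive,",
    "respectful paragraphs. Do not add new claims, judgments,",
    "or recommendations beyond what the notes contain.",
    "",
    "Notes:",
    notes,
  ].join("\n");
}

// Example usage: the reviewer opts in, and their raw notes are packaged
// into a prompt that a model-backed service would then expand into prose.
const prompt = buildReviewAssistPrompt([
  "methods section lacks a sample size justification",
  "figure axis labels are unreadable at print size",
]);
console.log(prompt);
```

Because the prompt explicitly forbids adding new claims, the model acts as a writing aid rather than a co-reviewer, which keeps the computer-assisted/generative boundary described above intact.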