“[Q] Which brings us to Wikipedia. Many of us consult it, slightly wary of its bias, depth, and accuracy. But, as you’ll be sharing in your speech at Intellisys, the content actually ends up being surprisingly reliable. How does that happen?
[A] The answer to “should you believe Wikipedia?” isn’t simple. In my book I argue that the content of a popular Wikipedia page is actually the most reliable form of information ever created. Think about it—a peer-reviewed journal article is reviewed by three experts (who may or may not actually check every detail), and then is set in stone. The contents of a popular Wikipedia page might be reviewed by thousands of people. If something changes, it is updated. Those people have varying levels of expertise, but if they support their work with reliable citations, the results are solid. On the other hand, a less popular Wikipedia page might not be reliable at all….”
eLife is excited to announce a new approach to peer review and publishing in medicine, including public health and health policy.
One of the most notable impacts of the COVID-19 pandemic has been the desire to share important results and discoveries quickly, widely and openly, leading to rapid growth of the preprint server medRxiv. Despite the benefits of rapid, author-driven publication in accelerating research and democratising access to results, the growing number of clinical preprints means that individuals and institutions may act quickly on new information before it is adequately scrutinised.
To address this challenge, eLife is bringing its system of editorial oversight by practicing clinicians and clinician-investigators, and rigorous, consultative peer review to preprints. The journal’s goal is to produce ‘refereed preprints’ on medRxiv that provide readers and potential users with a detailed assessment of the research, comments on its potential impact, and perspectives on its use. By providing this rich and rapid evaluation of new results, eLife hopes that peer-reviewed preprints, rather than journal impact factor, will become a reliable indicator of quality in medical research.
“It’s also worth quantifying the additional direct costs — especially in a system that is already considered too expensive by many. In an APC world, the authors of the accepted articles cover the costs of reviewing all those other articles that get rejected. For an Open Access journal with a 25% acceptance rate and an average of 2.2 reviews per article, paying the reviewers for one article’s worth of review comes in at 2.2 * $450 = $990. The journal reviews 1/0.25 = 4 articles to find one that is publishable, and the authors of the publishable article pay the costs for reviewing the other three. So, the modest proposal of a $450 fee for each review balloons to an additional $3960 being added to the Article Processing Charge for an average journal. …”
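Since the quoted passage is pure arithmetic, a minimal sketch of the same calculation may help readers vary the assumptions. The function name and defaults below are ours, taken from the figures in the quote; they are illustrative only and not part of the original proposal:

```python
# Sketch of the quoted back-of-the-envelope arithmetic.
# Figures (fee per review, reviews per article, acceptance rate) come from
# the quote above; the names and structure are illustrative assumptions.

def added_apc(fee_per_review=450.0, reviews_per_article=2.2, acceptance_rate=0.25):
    """Extra cost folded into one accepted article's APC if every review were paid."""
    cost_per_article = reviews_per_article * fee_per_review      # 2.2 * $450 = $990
    articles_reviewed_per_acceptance = 1 / acceptance_rate       # 1 / 0.25 = 4
    return cost_per_article * articles_reviewed_per_acceptance   # $990 * 4 = $3,960

if __name__ == "__main__":
    print(f"Added APC per accepted article: ${added_apc():,.0f}")  # -> $3,960
```

Changing any one of the three inputs shows how quickly the surcharge moves; halving the rejection rate, for example, halves the added charge.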
“When SMRJ was started, the editors used email and Word docs to track peer review, and they published all articles in PDF format. However, with the journal continuing to expand, the editors realized they were in need of an easier way to track submissions and a new publishing system to improve the journal’s online reading experience and chances of being added to relevant indexes. As a result, Chief Editor William Corser and Assistant Editor Sam Wisniewski began searching for publishing tools and services, focused on three key areas: streamlining peer review, modernizing the journal’s website, and producing XML for all articles.
After considering different options, Corser and Wisniewski chose to use Scholastica’s peer review and open access publishing software, as well as Scholastica’s typesetting service to produce PDF, HTML, and XML article files. Since making the switch, they’ve found that peer review is smoother for editors and authors and they’re making strides towards reaching their article discovery and indexing goals….”
“In 2020, DADOS began accepting the submission of manuscripts from preprint servers. However, there are still many concerns from the academic community, especially in the Social Sciences, about what preprints are and what changes they bring to the traditional framework of scientific assessment and publication. Our goal here is to answer these questions briefly, in addition to explaining in a simple way how to submit a preprint to DADOS. To this end, we have prepared a schematic of how manuscripts are evaluated in the traditional double-blind review system and how it has been modified in the preprint model. Next, we have a video and a podcast episode (both available in Portuguese only) about how DADOS will incorporate preprints, followed by a text summarizing this material….”
“Starting from this issue, Human Reproduction will be enriched by the introduction of a new section entitled ‘Peer Perspectives’, where each month we’ll publish a short report from ESHRE’s Twitter Journal Club—#ESHREjc….”
Abstract: Preprints are an increasingly important component of the scholarly record and preprint platforms have correspondingly grown in number. Academic communities value preprints for the opportunity to share early findings with peers and receive immediate feedback on not-yet-reviewed works. With the COVID pandemic, a broader audience is turning to preprints, as political leaders, journalists, and the public seek new information about the virus. Complications arise, however, when the unvetted nature of these works is not clearly signaled alongside discussions of their findings. In late 2020, Rick Anderson captured these concerns, highlighting cases where discredited preprints remained available to read, presenting a potential for misinformation. Anderson posited that preprint platform providers, not just editors, should ensure adequate preprint vetting and be willing to retract them. With the availability of two new open-source preprint platforms, PKP’s Open Preprint Systems (OPS) and Birkbeck’s Janeway preprint server, library publishers now have familiar, robust infrastructure for entering this space and are a logical home for such services, especially given a strong commitment to a specific research community. But what additional responsibilities, if any, must we accept as publishers of this genre? Should we establish terms for vetting of submissions? Without adequate domain knowledge, how would we enforce, or even audit, such terms? How do we indicate that a specific preprint’s findings have not yet been formally accepted? What about obligations regarding debunked publications? What are the responsibilities of platform providers, publishers, and editors? Should library publishers, as a community of practice, expand on the proposed best practices related to preprint metadata to ensure we are responsible actors in providing access to early research? Panelists will explore these questions during the session’s first half, and invite attendee participation for the second. Registered attendees will receive an advance survey regarding current/planned preprint publishing, in order to identify additional discussion topics.
eLife is pleased to announce today its ongoing support for Coko to develop open-source software solutions for publishing, including Kotahi – a new journal platform that can also help facilitate the publication and review of preprints.
“Part of our mission at bioRxiv is to alert readers to reviews and discussion of preprints and support the different ways readers provide feedback to authors on their work. These include tweets, comments on preprints and community- or journal-organized peer reviews. bioRxiv improves discoverability of such efforts by linking to peer reviews, community discussions and mentions of the preprint in social and traditional media. By aggregating this information in a new dashboard, we are now making these even easier for readers to find and access.
A series of new icons now appears in the dashboard launch bar, above each Abstract, representing different sources of preprint discussion or evaluation; the numbers of each evaluation or interaction are shown, and clicking on one of the icons opens a dashboard with details of the entries in that section….”
While early commenting on studies is seen as one of the advantages of preprints, the types of such comments and the people who post them have not been systematically explored.
Materials and methods
We analysed comments posted between 21 May 2015 and 9 September 2019 for 1983 bioRxiv preprints that received only one comment on the bioRxiv website. The comment types were classified by three coders independently, with all differences resolved by consensus.
Results
Our analysis showed that 69% of comments were posted by non-authors (N = 1366), and 31% by the preprints’ authors themselves (N = 617). Twelve percent of non-author comments (N = 168) were full review reports traditionally found during journal review, while the rest most commonly contained praise (N = 577, 42%), suggestions (N = 399, 29%), or criticisms (N = 226, 17%). Authors’ comments most commonly contained publication status updates (N = 354, 57%), additional study information (N = 158, 26%), or solicited feedback for the preprints (N = 65, 11%).
Conclusions
Our results indicate that comments posted for bioRxiv preprints may benefit both the public and the scholarly community. Further research is needed to measure the direct impact of these comments on comments made by journal peer reviewers, subsequent preprint versions or journal publications.
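For readers who want to trace how the reported percentages follow from the raw counts, here is a small, purely illustrative sketch; the counts are those quoted in the abstract above, and the variable names and grouping are our own assumptions:

```python
# Recompute the percentages reported in the abstract from the quoted counts.
# Counts are taken from the abstract above; this is only an illustrative check.

total_comments = 1983
non_author = 1366   # reported as 69% of all comments
author = 617        # reported as 31% of all comments

non_author_breakdown = {
    "full review reports": 168,   # reported as 12% of non-author comments
    "praise": 577,                # 42%
    "suggestions": 399,           # 29%
    "criticisms": 226,            # 17%
}
author_breakdown = {
    "publication status updates": 354,      # reported as 57% of author comments
    "additional study information": 158,    # 26%
    "solicited feedback": 65,                # 11%
}

print(f"non-author: {non_author / total_comments:.0%}, author: {author / total_comments:.0%}")
for label, n in non_author_breakdown.items():
    print(f"  {label}: {n / non_author:.0%} of non-author comments")
for label, n in author_breakdown.items():
    print(f"  {label}: {n / author:.0%} of author comments")
```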