“This is the third and last event in the DOAJ at 20 series, where we are celebrating 20 years of DOAJ through themes ‘Open’, ‘Global’ and ‘Trusted’.
This event will be on the theme ‘Trusted’, where our moderator Joy Owango will host a discussion with our four speakers: Judith Barnsby, Dr Haseeb Irfanullah, Ivan Oransky and Ixchel M. Faniel. Topics for discussion will centre on what it means to be trusted, and what varieties of mistrust exist in scholarly communications. At the end of the discussion, there will be a Q&A, where the audience can ask our speakers any questions they have. Dominic Mitchell from DOAJ will open and close the event, outlining what Trusted means for DOAJ.”
Abstract: We report evidence of an undocumented method to manipulate citation counts involving ‘sneaked’ references. Sneaked references are registered as metadata for scientific articles in which they do not appear. This manipulation exploits trusted relationships between various actors: publishers, the Crossref metadata registration agency, digital libraries, and bibliometric platforms. By collecting metadata from various sources, we show that extra undue references are actually sneaked in at Digital Object Identifier (DOI) registration time, resulting in artificially inflated citation counts. As a case study, focusing on three journals from a given publisher, we identified at least 9% sneaked references (5,978/65,836), mainly benefiting two authors. Despite not existing in the articles, these sneaked references exist in metadata registries and inappropriately propagate to bibliometric dashboards. Furthermore, we discovered ‘lost’ references: the studied bibliometric platform failed to index at least 56% (36,939/65,836) of the references listed in the HTML version of the publications. The extent of the sneaked and lost references in the global literature remains unknown and requires further investigation. Bibliometric platforms producing citation counts should identify, quantify, and correct these flaws to provide accurate data to their patrons and prevent further citation gaming.
Abstract: Academic journals have been publishing the results of biomedical research for more than 350 years. Reviewing their history reveals that the ways in which journals vet submissions have changed over time, culminating in the relatively recent appearance of the current peer-review process. Journal brand and Impact Factor have meanwhile become quality proxies that are widely used to filter articles and evaluate scientists in a hypercompetitive prestige economy. The Web created the potential for a more decoupled publishing system in which articles are initially disseminated by preprint servers and then undergo evaluation elsewhere. To build this future, we must first understand the roles journals currently play and consider what types of content screening and review are necessary and for which papers. A new, open ecosystem involving preprint servers, journals, independent content-vetting initiatives, and curation services could provide more multidimensional signals for papers and avoid the current conflation of trust, quality, and impact. Academia should strive to avoid the alternative scenario, however, in which stratified publisher silos lock in submissions and simply perpetuate this conflation.
“With this short survey, we would like to solicit community input for our project at this year’s Scholarly Communication Institute (SCI2023). In our project, we will be studying the way in which Open Peer Review (OPR) models can contribute to diversity and trust in research. With OPR, we are particularly referring to modes of peer review in which actors’ identities are revealed (Open Identities), peer review reports are openly shared (Open Reports), or non-invited stakeholders are able to participate (Open Participation). While these models of peer review have the potential to contribute to diversity, equity and inclusion, their efficacy is still largely unknown. We are therefore curious to hear your thoughts on potential benefits or risks of OPR in your community, as well as open questions that you would like to see addressed.”
“Here’s the problem, and it’s true for science as much as it’s true for coworkers, spouses, or anyone else: Trust can only be earned, not demanded. And one of the most critical places where scientists, journals, and funders could earn that trust is by giving more prominence to replications and reanalyses that expose prior scientific errors….
In some cases, to be sure, a letter to the editor might suffice for minor corrections, and journal editors obviously have the responsibility to make sure that a failed replication is indeed accurate and important enough to publish.
But in the case of failed replications that expose obvious errors in the original article, a short letter will likely be inadequate to address the journal’s earlier mistake. We all know that such letters won’t be as widely read, and the original article will probably still be cited and read nearly as often (indeed, in psychology, the publication of a failed replication only makes the citation rate of the original article go down by 14%, and another study even found that non-replicable papers are cited at a higher rate than replicable papers).
What these medical and health journals are saying, however, is that they place a higher priority on citations and audience interest than on publishing replications. Put another way, they prefer popularity over truth, when the two are in conflict.
This is not a respectable scientific position, nor does it deserve public trust….”
“Until recently, MDPI and Frontiers were known for their meteoric rise. At one point, powered by the Guest Editor model, the two publishers combined for about 500,000 papers (annualized), which translated into nearly USD 1 billion in annual revenue. Their growth was extraordinary, but so has their contraction been. MDPI has declined by 27% and Frontiers by 36% in comparison to their peaks.
Despite their slowdown, MDPI and Frontiers have become an integral part of the modern publishing establishment. Their success reveals that their novel offering resonates with thousands of researchers. Their turbulent performance, however, shows that their publishing model is subject to risk, and its implementation should acknowledge and mitigate such risk….”
The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource. An agreement between the two organisations will allow Retraction Watch to keep the data populated on an ongoing basis and always open, alongside publishers registering their retraction notices directly with Crossref.
“In workshop #4, participants discussed how part of the solution to perceptions of low-quality OA publishing is positively defining what good or trustworthy OA publishing is, and so helping to identify reliable publishing venues.
The consensus amongst workshop participants was that a focus on the process and quality-assurance practices that a publisher (or journal / book / platform) follows is the best way to inspire trust, and that this matters more than the abstract and flawed concept of prestige. A philosophy that therefore emerged in workshop #4 was to drive a shift away from prestigious and towards trusted publishing venues – the latter judged by publishing processes and practices.
Participants discussed how some kitemarks already hint at publishing venues that can be, and are, trusted, such as COPE membership, DOAJ listing and OASPA membership.
A new (and as yet unreleased) rubric for measuring publishers by their practices is also in development within the librarian community. This underscores the thinking that process and transparency are important….”
“There are three specific issues that could be taken up by the G20:
Endorsement of cOAlition S: While initial efforts may have facilitated a shift towards a pay-to-publish model that does not work for most of the world, cOAlition S remains the most promising vehicle for reform and is actively exploring alternative models from emerging economies.
Championing equitable funding: There will be costs for the infrastructure likely to be needed for research publishing reforms. This necessitates innovative and equitable funding mechanisms that ensure all researchers, irrespective of their geographical location or institutional affiliation, can publish their work Open Access.
Policy harmonization: G20 is a high-level political platform and may not be the right forum for negotiating comprehensive Open Access policies. But if the G20 nations were to endorse specific Open Access policy positions, it would provide direction for national and multilateral initiatives.
There is a window of opportunity. India, which holds the G20 presidency, is already lighting a path by putting research publishing on the agenda of several G20 engagement groups. These groups, particularly the Chief Scientific Advisers Roundtable, can seize the moment and harness the influence of the G20 to pursue effective, efficient, and equitable research publishing. They would do well to work with leaders from Brazil and South Africa, who will hold the presidency in 2024 and 2025 respectively, to ensure momentum for reform is sustained.
“As a result of our discussions with publishers, vendors, and researchers we developed an initial record summary prototype, which we will be piloting this fall. We’re hoping to make it easier for editors to find and understand information within ORCID records and surface the trust markers that can help them make decisions about the trustworthiness of an ORCID record….”
“Throughout the next three to five years, there will be a sharp rise of Open Access within scholarly communications. In an eco-system that is Open-at-Scale, there will be many new opportunities for scalable tools for knowledge discovery on massively available content. We expect that this will likely have a significant impact on the ecosystem of scholarly communications, most likely in a very positive and beautiful way – and, as the motto says: it will change things At Scale….
One of our future forecasts is also that in an Open Access world the competition for the best authors and peer reviewers will intensify….
A world of Open Access needs a new locus of trust. Information will appear in many places and in many versions. We need to secure the Version of Record that was peer-reviewed….”
Abstract: In recent years, the scientific community has called for improvements in the credibility, robustness and reproducibility of research, characterized by increased interest and promotion of open and transparent research practices. While progress has been positive, there is a lack of consideration about how this approach can be embedded into undergraduate and postgraduate research training. Specifically, a critical overview of the literature which investigates how integrating open and reproducible science may influence student outcomes is needed. In this paper, we provide the first critical review of literature surrounding the integration of open and reproducible scholarship into teaching and learning and its associated outcomes in students. Our review highlighted how embedding open and reproducible scholarship appears to be associated with (i) students’ scientific literacies (i.e. students’ understanding of open research, consumption of science and the development of transferable skills); (ii) student engagement (i.e. motivation and engagement with learning, collaboration and engagement in open research) and (iii) students’ attitudes towards science (i.e. trust in science and confidence in research findings). However, our review also identified a need for more robust and rigorous methods within pedagogical research, including more interventional and experimental evaluations of teaching practice. We discuss implications for teaching and learning scholarship.
Data broker RELX is represented on Twitter by its Chief Communications Officer, Paul Abrahams. Because RELX subsidiary Elsevier is one of the largest publishers of academic journals, Dr. Abrahams frequently engages with academics on the social media platform. On its official pages, Elsevier tries to emphasize that it really, really can be trusted, honestly […]
Scholars need to be able to trust each other, because otherwise they cannot collaborate and use each other’s findings. Similarly, trust is essential for research to be applied for individuals, society or the natural environment. This trustworthiness is threatened when researchers engage in questionable research practices or worse. By adopting open science practices, research becomes transparent and accountable. Only then is it possible to verify whether trust in research findings is justified. The magnitude of the issue is substantial, with a prevalence of four percent for both fabrication and falsification, and of more than 50% for questionable research practices. This implies that researchers regularly engage in behaviors that harm the validity and trustworthiness of their work. What is good for the quality and reliability of research is not always good for a scholarly career. Navigating this dilemma depends on how virtuous the researcher at issue is, but also on the local research climate and the perverse incentives in the way the research system functions. Research institutes, funding agencies and scholarly journals can do a lot to foster research integrity, first and foremost by improving the quality of peer review and reforming researcher assessment.