Measuring Research Transparency

“Measuring the transparency and credibility of research is fundamental to our mission. By having measures of transparency and credibility we can learn about the current state of research practice, we can evaluate the impact of our interventions, we can track progress on culture change, and we can investigate whether adopting transparency behaviors is associated with increasing credibility of findings….

Many groups have conducted research projects that manually code a sample of papers from a field to assess current practices. These are useful but highly effortful. If machines can be trained to do the work, we will get much more data, more consistently, and much faster. There are at least three groups that have made meaningful progress creating scalable solutions: Ripeta, SciScore, and DataSeer. These groups are trying to make it possible, accurate, and easy to assess many papers for whether the authors shared data, used reporting standards, identified their conflicts of interest, and other transparency relevant actions….”
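The kind of automated screening these groups perform can be illustrated, very roughly, with a few pattern checks over a paper's full text. The categories and regular expressions below are assumptions made for this sketch, not the actual detection rules used by Ripeta, SciScore, or DataSeer:

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not the
# detection rules actually used by Ripeta, SciScore, or DataSeer.
CHECKS = {
    "data_sharing": re.compile(
        r"data (?:are|is) available|data availability statement|deposited in",
        re.IGNORECASE),
    "conflict_of_interest": re.compile(
        r"conflicts? of interest|competing interests", re.IGNORECASE),
    "reporting_standard": re.compile(r"\b(?:CONSORT|PRISMA|ARRIVE|STROBE)\b"),
}

def assess_transparency(full_text: str) -> dict:
    """Return a per-check boolean verdict for one paper's full text."""
    return {name: bool(pattern.search(full_text))
            for name, pattern in CHECKS.items()}
```

Real systems combine section-aware parsing with trained classifiers rather than flat keyword matches, but the output shape — one verdict per transparency behavior per paper — is what makes fast, consistent field-level tallies possible.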

SciELO – Brazil – Divulgação científica imuniza contra desinformação [Scientific dissemination immunizes against disinformation]

From Google’s English:  “Scientific knowledge gained a relevant audience in the pandemic because lies about Covid-19 threaten the lives of the population. It has been a long time since humanity faced a disease with such high mortality globally. The pandemic required scientific journals to ensure the rapid publication of available evidence, ensuring the quality of information and the identification of biases that could compromise it, since these works are the essential raw material to fight fake news, misinformation and conspiracy theories, which undermine the population’s adherence to the measures necessary to fight the pandemic….

In an infodemic, naturally, fanciful, incredible news, which appeals to emotions and seems more phenomenal than reality itself, gains repercussions. The scientific dissemination of Covid-19 became an objective response by scientists to the denial movement, which calls into question the effectiveness of vaccines, sabotages prevention measures and propagates miracle cures….”

Opinion: Pros and Cons of Google vs. Subscription Databases

“During my time overseeing the library services department of a large school district, we found our subscription databases were generally a well-kept secret. The lack of trained school librarians available to teach these resources was part of the issue. But Google was ubiquitous, as was Wikipedia, and they became de facto research sources for students, despite their limitations for such a role.

Google has its place for students and researchers (I used it for this article), as does Google Scholar (which I also used). But for students, subscription databases should also play a central research role, beginning with age-appropriate sources for elementary kids – like National Geographic – and moving up to “Gale in Context” for middle school students, and more scholarly articles for high schoolers from sources like ABC-CLIO….”

 

The UGC-CARE initiative: Indian academia’s quest for research and publishing integrity | First Monday

Abstract:  This paper discusses the reasons for the emergence of predatory publications in India, engendered by mandates of higher educational institutions that require a stipulated number of research publications for employment and promotions. Predatory journals have eclipsed the merits of open access publishing, compromised ethical practices, and left the research community groping for benchmarks of research integrity and publication ethics. To fight the menace of predatory publications, the University Grants Commission, India established the “Consortium for Academic Research and Ethics” (UGC-CARE) in 2018 to promote and benchmark research integrity and publication ethics among Indian academia. The present paper discusses the UGC-CARE initiative, its structure, its objectives and, specifically, the “UGC-CARE Reference List of Quality Journals” (UGC-CARE list), and finally the challenges it faces.

 

Credibility of scientific information on social media: Variation by platform, genre and presence of formal credibility cues | Quantitative Science Studies | MIT Press

Abstract:  Responding to calls to take a more active role in communicating their research findings, scientists are increasingly using open online platforms, such as Twitter, to engage in science communication or to publicize their work. Given the ease with which misinformation spreads on these platforms, it is important for scientists to present their findings in a manner that appears credible. To examine the extent to which the online presentation of science information relates to its perceived credibility, we designed and conducted two surveys on Amazon’s Mechanical Turk. In the first survey, participants rated the credibility of science information on Twitter compared with the same information in other media, and in the second, participants rated the credibility of tweets with modified characteristics: presence of an image, text sentiment, and the number of likes/retweets. We find that similar information about scientific findings is perceived as less credible when presented on Twitter compared to other platforms, and that perceived credibility increases when presented with recognizable features of a scientific article. On a platform as widely distrusted as Twitter, use of these features may allow researchers who regularly use Twitter for research-related networking and communication to present their findings in the most credible formats.

 

Efficiency of “Publish or Perish” Policy—Some Considerations Based on the Uzbekistan Experience

Abstract:  Researchers from Uzbekistan are leading the global list of publications in predatory journals. The current paper reviews the principles of implementation of the “publish or perish policy” in Uzbekistan with an overarching aim of detecting the factors that are pushing more and more scholars to publish the results of their studies in predatory journals. Scientific publications have historically been a cornerstone in the development of science. For the past five decades, the quantity of publications has become a common indicator for determining academic capacity. Governments and institutions are increasingly employing this indicator as an important criterion for promotion and recruitment; simultaneously, researchers are being awarded Ph.D. and D.Sc. degrees for the number of articles they publish in scholarly journals. Many talented academics have had a pay rise or promotion declined due to a short or nonexistent bibliography, which leads to significant pressure on academics to publish. The “publish or perish” principle has become a trend in academia and the key performance indicator for habilitation in Uzbekistan. The present study makes a case for re-examining the criteria set by the Supreme Attestation Commission of the Republic of Uzbekistan for candidates applying for Ph.D. and D.Sc. as well as faculty promotion requirements in the light of current evidence for the deteriorating academic performance of scholars.

 

Under pressure, Uzbek researchers flood academia with nonsense | Eurasianet

“Anyone tracking scholarship on Central Asia is sure to be swamped by Uzbek research in unreputable publications

A new paper has found why: Under pressure from Uzbekistan’s government, academics are succumbing to predatory journals – publishers that, for a fee, overlook best practices like peer review or editing. Many of the researchers are forced to publish far more often than feasible if the bar were higher, and the quality shows: Uzbek academics are global leaders in spreading research that some scholars would explicitly call “bullshit.”

“Publish or perish”: It’s a global problem among academics, with eye-opening salience in Uzbekistan, find Bahtiyor Eshchanov of the Center for Economic Research and Reforms in Tashkent and his three Uzbek co-authors in a new paper in Publications, a peer-reviewed journal about scholarly publishing….”

Characteristics of scholars who review for predatory and legitimate journals: linkage study of Cabells Scholarly Analytics and Publons data | BMJ Open

Abstract

Objectives To describe and compare the characteristics of scholars who reviewed for predatory or legitimate journals in terms of their sociodemographic characteristics and reviewing and publishing behaviour.

Design Linkage of random samples of predatory journals and legitimate journals of the Cabells Scholarly Analytics’ journal lists with the Publons database, employing the Jaro-Winkler string metric. Descriptive analysis of sociodemographic characteristics and reviewing and publishing behaviour of scholars for whom reviews were found in the Publons database.

Setting Peer review of journal articles.

Participants Reviewers who submitted peer review reports to Publons.

Measurements Numbers of reviews for predatory journals and legitimate journals per reviewer. Academic age of reviewers, the total number of reviews, number of publications and number of reviews and publications per year.

Results Analyses included 183 743 unique reviews submitted to Publons by 19 598 reviewers. Six thousand and seventy-seven reviews were for 1160 predatory journals (3.31% of all reviews) and 177 666 reviews for 6403 legitimate journals (96.69%). Most scholars never submitted reviews for predatory journals (90.0% of all scholars); few scholars (7.6%) reviewed occasionally or rarely (1.9%) for predatory journals. Very few scholars submitted reviews predominantly or exclusively for predatory journals (0.26% and 0.35%, respectively). The latter groups of scholars were of younger academic age and had fewer publications and reviews than the first groups. Regions with the highest shares of predatory reviews were sub-Saharan Africa (21.8% reviews for predatory journals), Middle East and North Africa (13.9%) and South Asia (7.0%), followed by North America (2.1%), Latin America and the Caribbean (2.1%), Europe and Central Asia (1.9%) and East Asia and the Pacific (1.5%).

Conclusion To tackle predatory journals, universities, funders and publishers need to consider the entire research workflow and educate reviewers on concepts of quality and legitimacy in scholarly publishing.
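The record linkage in the study above hinged on the Jaro–Winkler string metric, which scores the similarity of two strings between 0 and 1 and boosts pairs that share a common prefix. A minimal self-contained sketch of the metric itself (the study's actual matching pipeline, thresholds, and fields are not reproduced here):

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: matches within a sliding window, minus transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)
    m1, m2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):                      # find matching characters
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t, k = 0, 0                                     # count transpositions
    for i in range(len1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro score plus a bonus for a shared prefix of up to 4 characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

The prefix bonus is what makes the metric popular for linking names and journal titles: small typos and transpositions late in a string barely lower the score, while two records must agree from the start to score near 1.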

Is MDPI a predatory publisher? – Paolo Crosetto

“So, is MDPI predatory or not? I think it has elements of both. I would name their methods aggressive rent extracting, rather than predatory. And I also think that their current methods & growth rate are likely to make them shift towards more predatory over time.

MDPI publishes good papers in good journals, but it also employs some strategies that are proper to predatory publishers. I think that the success of MDPI in recent years is due to the creative combination of these two apparently contradictory strategies. One — the good journals with high quality — creates a rent that the other — spamming hundreds of colleagues to solicit papers, an astonishing increase in Special Issues, publishing papers as fast as possible — exploits. This strategy makes a lot of sense for MDPI, which shows strong growth rates and is en route to becoming the largest open access publisher in the world. But I don’t think it is a sustainable strategy. It suffers from basic collective action problems that might deal a lot of damage to MDPI first, and, most importantly, to scientific publishing in general….

A predatory publisher is a journal that would publish anything — usually in return for money. MDPI rejection rates make this argument hard to sustain. Yet, MDPI is using some of the same techniques of predatory journals….”


Publishers Care about the Version of Record, Do Researchers? – The Scholarly Kitchen

“It was against this backdrop that I read Exploring Researcher Preference for the Version of Record, which reported on research Springer Nature conducted in collaboration with ResearchGate. It is perhaps obvious to caveat that it is in Springer Nature’s interests to use this study to reinforce the value of the VOR, a central position of a recent keynote by CEO Frank Vrancken Peeters at the APE 2021 conference.

The study was conducted “in situ” and leveraged the Springer Nature syndication pilot project that posted VOR articles for access on the ResearchGate platform. As Mithu Lucraft, Director for Content Marketing Strategy of the Springer Nature Group and one of the study’s co-authors, explained to me, the survey was presented to ResearchGate users who were logged in and who had interacted with at least one Springer Nature publication in the 60 days prior to the survey being live in October 2020.

Importantly, survey participants were not only asked to choose which version of an article they prefer but also which versions they would feel comfortable using for different purposes. In many cases, participants indicated that multiple different versions would be acceptable for a given use, which indicates that a preprint or accepted manuscript can substitute for the VOR in some use cases but perhaps not all. …”

Imposters and Impersonators in Preprints: How do we trust authors in Open Science? – The Scholarly Kitchen

“The prevalence of fictitious authorship across preprints is still unknown, and the writers’ motivations are opaque in most cases. This nefarious behavior within the open science arena raises many questions in need of discussing….”

Has the pandemic changed public attitudes about science? | Impact of Social Sciences

“At a structural level, the public faith in science’s trustworthiness and value can also be ‘future proofed’ through ongoing initiatives to make scientific research open and transparent, enhanced efforts to ensure a more diverse and inclusive scientific workforce and other efforts to improve science from within. Initiatives working in this direction include increased adoption of open science policies by research funders and global public policy that promotes more socially responsible research and innovation. Indeed, this moment of strong public support may be the perfect opportunity for long-needed structural reforms to make research more socially responsible and sustainable. In other words, it’s time to fix the roof while the sun is shining!”

Guest Post – Putting Publications into Context with the DocMaps Framework for Editorial Metadata – The Scholarly Kitchen

“Trust in academic journal articles is based on similar expectations. Journals carry out editorial processes from peer review to plagiarism checks. But these processes are highly heterogeneous in how, when, and by whom they are undertaken. In many cases, it’s not always readily apparent to the outside observer that they take place at all. And as new innovations in peer review and the open research movement lead to new experiments in how we produce and distribute research products, understanding what events take place is an increasingly important issue for publishers, authors, and readers alike.

With this in mind, the DocMaps project (a joint effort of the Knowledge Futures Group, ASAPbio, and TU Graz, supported by the Howard Hughes Medical Institute), has been working with a Technical Committee to develop a machine-readable, interoperable and extensible framework for capturing valuable context about the processes used to create research products such as journal articles. This framework is being designed to capture as much (or little) contextual data about a document as desired by the publisher: from a minimum assertion that an event took place, to a detailed history of every edit to a document….”
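To make the gradient from “minimum assertion” to “detailed history” concrete, an editorial-metadata record in the spirit of DocMaps might look like the sketch below. Every field name here is invented for illustration; this is not the actual DocMaps vocabulary or schema:

```python
# Hypothetical field names -- a sketch in the spirit of DocMaps,
# NOT the framework's actual schema or vocabulary.
docmap_sketch = {
    "doi": "10.0000/example.001",       # placeholder identifier
    "steps": [
        {
            "action": "peer-review",
            "assertion": "happened",    # minimum claim: the event occurred
            "reviewers": 2,             # ...optionally enriched with detail
            "completed": "2021-03-15",
        },
        {
            "action": "plagiarism-check",
            "assertion": "happened",    # bare assertion, no further detail
        },
    ],
}

def events_asserted(record: dict) -> list:
    """List the editorial events a record claims took place."""
    return [step["action"] for step in record["steps"]
            if step.get("assertion") == "happened"]
```

The point is that a publisher can stop at the bare assertion or attach as much process history as it wishes, and the same machine-readable record structure accommodates both ends of that range.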

Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty · Issue 2.4, Fall 2020

Abstract:  Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research. But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty? To gain insight into this issue, we would need to know how those views are shaped by media coverage of it, but none of the emergent research on public views of reproducibility and replicability in science considers that question. We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science. Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science. It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.