Anchoring effects in the assessment of papers: An empirical survey of citing authors | PLOS ONE

Abstract: In our study, we have empirically studied the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in whether the assessment of a paper can be influenced by numerical information that acts as an anchor (e.g. citation impact). We undertook a survey of corresponding authors with an available email address in the Web of Science database. The authors were asked to assess the quality of papers that they had cited in previous papers. Some authors were assigned to one of three treatment groups that received further information alongside the cited paper: citation impact information, information on the publishing journal (journal impact factor) or a numerical access code to enter the survey. The control group did not receive any further numerical information. We are interested in whether possible adjustments in the assessments can be produced not only by quality-related information (citation impact or journal impact), but also by numbers that are unrelated to quality, i.e. the access code. Our results show that the quality assessments of papers seem to depend on the citation impact information of single papers. The other anchors, an arbitrary number (the access code) and journal impact information, did not play an (important) role in the assessments of papers. The results point to a possible anchoring bias caused by insufficient adjustment: the respondents seem to have assessed cited papers differently when they observed paper impact values in the survey. We conclude that initiatives aiming to reduce the use of journal impact information in research evaluation either were already successful or overestimated the influence of this information.
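
The design sketched in this abstract comes down to comparing quality ratings across randomly assigned anchor conditions and a control group. Purely as a hypothetical illustration of that comparison (not the authors' data, group sizes or analysis), one could simulate it roughly as follows; the 0.3-point shift for the citation-impact condition and the use of Welch's t-test are assumptions made here for the sketch.

```python
# Illustrative sketch only: simulated ratings and a made-up anchor effect,
# not the study's real data or analysis.
import random
import statistics
from scipy import stats  # Welch's t-test

random.seed(42)

def simulate_group(n, base=3.5, anchor_shift=0.0, sd=1.0):
    """Draw n quality ratings on a 1-5 scale, optionally shifted by an anchor effect."""
    return [min(5.0, max(1.0, random.gauss(base + anchor_shift, sd))) for _ in range(n)]

control         = simulate_group(300)                    # no numerical anchor shown
citation_anchor = simulate_group(300, anchor_shift=0.3)  # hypothetical shift from citation impact info
journal_anchor  = simulate_group(300)                    # journal impact factor shown, assumed no shift
access_code     = simulate_group(300)                    # arbitrary number (access code), assumed no shift

for name, group in [("citation impact", citation_anchor),
                    ("journal impact", journal_anchor),
                    ("access code", access_code)]:
    t, p = stats.ttest_ind(group, control, equal_var=False)  # Welch's t-test vs. control group
    print(f"{name:15s} mean={statistics.mean(group):.2f} "
          f"(control mean={statistics.mean(control):.2f}), t={t:+.2f}, p={p:.3f}")
```

In such a toy setup, only the citation-impact group would show a reliable shift relative to the control group, which is the pattern the abstract reports.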


Market forces influence editorial decisions – ScienceDirect

“In this issue of Cortex, Huber et al. recount their experience in attempting to update the scientific record through an independent replication of a published study (Huber, Potter, & Huszar, 2019). In general, publishers resist issuing retractions, refutations or corrections to their stories or papers for fear of losing public trust, diminishing their brand and possibly ceding their market share (Sullivan, 2018). Unfortunately, this is just one way that market logic – retaining a competitive advantage among peers – explicitly or implicitly influences editorial priorities and decisions more broadly….

There’s the well-known tautology that news is what newsrooms decide to cover and what’s “newsworthy” is influenced by market logic. That news organizations, charged with relating truth and facts, are subject to market-based decisions is a major source of contention among the discerning public. It should be even more contentious that the stewards of scientific knowledge, academic publishers, are also beholden to it….

Although top journals are loath to admit they ‘chase cites’ (Editorial, 2018), market forces make this unavoidable. One example is a strategy akin to product cost cross subsidization, such as when, in journalism, profitable traffic-driving, click-bait articles subsidize more costly and in-depth, long-form investigative reporting. In order to attract the ‘best’ science, top journals must maintain a competitive impact factor. If the impact factor strays too far from the nearest competitor, then the journal will have trouble publishing the science it deems as most important because of the worth coveted researchers place on perceived impact….

Although publishers tout the value of replications and pay lip service to other reformative practices, their policies in this regard are often vague and non-committal….

Most professional editors are committed to advancing strong science, but however well-intentioned and sought in good faith reforms are, they are necessarily hamstrung by market forces. This includes restrained requirements for more rigorous and responsible research conduct. Journals do not want to put in place policies that are seemingly so onerous that authors decide to instead publish in competing but less demanding journals. Researchers need incentives for and enforcement of more rigorous research practices, but they want easier paths to publication. The result is that new policies at top journals allow publishers to maintain a patina of progressiveness in the absence of real accountability….

The reforms suggested by Huber et al. are welcome short-term fixes, but the community should demand longer-term solutions that break up the monopoly of academic publishers and divorce the processes of evaluation, publication and curation (Eisen and Polka, 2018). Only then may we wrest the power of science’s stewardship from the heavy hand of the market.”

Cite-seeing and Reviewing: A Study on Citation Bias in Peer Review

Citations play an important role in researchers’ careers as a key factor in the evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers in the hope of obtaining a more positive evaluation of their submission. In this work, we investigate whether such a citation bias actually exists: Does the citation of a reviewer’s own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we conduct an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns regarding model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of the effect size, citing a reviewer’s work gives a submission a non-trivial boost in the score it receives from that reviewer: the expected increase is approximately 0.23 points on a 5-point Likert item. For reference, a one-point increase in the score given by a single reviewer improves the position of a submission by 11% on average.

A Survey of Biomedical Journals To Detect Editorial Bias and Nepotistic Behavior

Alongside the growing concerns regarding predatory journal growth, other questionable editorial practices have gained visibility recently. Among them, we explored the usefulness of the Percentage of Papers by the Most Prolific author (PPMP) and the Gini index (the level of inequality in the distribution of authorship among authors) as tools to identify journals that may show favoritism in accepting articles by specific authors. We examined whether the PPMP, complemented by the Gini index, could be useful for identifying cases of potential editorial bias, using all articles in a sample of 5,468 biomedical journals indexed in the National Library of Medicine. For articles published between 2015 and 2019, the median PPMP was 2.9%, and 5% of journals exhibited a PPMP of 10.6% or more. Among the journals with the highest PPMP or Gini index values, where a few authors were responsible for a disproportionate number of publications, a random sample was manually examined, revealing that the most prolific author was part of the editorial board in 60 cases (61%). Papers by the most prolific authors were more likely to be accepted for publication within 3 weeks of their submission. Results of an analysis on a subset of articles, excluding nonresearch articles, were consistent with those of the principal analysis. In most journals, publications are distributed across a large number of authors; our results reveal a subset of journals where a few authors, often members of the editorial board, were responsible for a disproportionate number of publications. To enhance trust in their practices, journals need to be transparent about their editorial and peer review practices.
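
As a side note on the two indicators themselves, the following is a minimal, hypothetical sketch of how PPMP and the Gini index could be computed from a journal's per-author article counts; the toy counts, function names and the particular Gini formula used here are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: computing PPMP and the Gini index from per-author article counts.
# The toy data and function names are assumptions, not the study's actual code.

def ppmp(counts):
    """Percentage of Papers by the Most Prolific author."""
    return 100.0 * max(counts) / sum(counts)

def gini(counts):
    """Gini index of the authorship distribution (0 = perfectly equal, 1 = maximally unequal)."""
    xs = sorted(counts)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))  # rank-weighted sum over sorted counts
    return (2.0 * cum) / (n * sum(xs)) - (n + 1.0) / n

# Toy example: number of articles per author in one journal over 2015-2019.
articles_per_author = [12, 3, 2, 2, 1, 1, 1, 1, 1, 1]

print(f"PPMP: {ppmp(articles_per_author):.1f}%")  # share of papers by the single most prolific author
print(f"Gini: {gini(articles_per_author):.2f}")   # inequality of authorship across authors
```

In this toy journal, the most prolific author accounts for 48% of the articles (PPMP) and the Gini index is roughly 0.48, values far above the 2.9% median PPMP reported in the abstract.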