Guest Post – Reputation and Publication Volume at MDPI and Frontiers – The Scholarly Kitchen

“Until recently, MDPI and Frontiers were known for their meteoric rise. At one point, powered by the Guest Editor model, the two publishers combined for about 500,000 papers (annualized), which translated into nearly USD 1 billion in annual revenue. Their growth was extraordinary, but so has their contraction. MDPI has declined by 27% and Frontiers by 36% in comparison to their peak.

Despite their slowdown, MDPI and Frontiers have become an integral part of the modern publishing establishment. Their success reveals that their novel offering resonates with thousands of researchers. Their turbulent performance, however, shows that their publishing model is subject to risk, and its implementation should acknowledge and mitigate such risk….”

You do not receive enough recognition for your influential science | bioRxiv

Abstract:  During career advancement and funding allocation decisions in biomedicine, reviewers have traditionally depended on journal-level measures of scientific influence like the impact factor. Prestigious journals are thought to pursue a reputation of exclusivity by rejecting large quantities of papers, many of which may be meritorious. It is possible that this process could create a system whereby some influential articles are prospectively identified and recognized by journal brands but most influential articles are overlooked. Here, we measure the degree to which journal prestige hierarchies capture or overlook influential science. We quantify the fraction of scientists’ articles that would receive recognition because (a) they are published in journals above a chosen impact factor threshold, or (b) are at least as well-cited as articles appearing in such journals. We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues. At the investigator level, this phenomenon extends across gender, racial, and career stage groupings of scientists. We also find that approximately half of researchers never publish in a venue with an impact factor above 15, which under journal-level evaluation regimes may exclude them from consideration for opportunities. Many of these researchers publish equally influential work, however, raising the possibility that the traditionally chosen journal-level measures that are routinely considered under decision-making norms, policy, or law, may recognize as little as 10-20% of the work that warrants recognition.


Report from ‘Equity in OA’ workshop #4 – part 2: Trust as the new prestige – OASPA

“In workshop #4, participants discussed how part of the solution to perceptions of low quality OA publishing is positively defining what good or trustworthy OA publishing is, and thereby helping to identify reliable publishing venues.

The consensus amongst workshop participants was that a focus on the process and quality-assurance practices that a publisher (or journal / book / platform) follows is the best way to inspire trust, and that this matters more than the abstract and flawed concept of prestige. A philosophy that therefore emerged in workshop #4 was to drive a shift away from prestigious and towards trusted publishing venues – the latter judged by publishing processes and practices.

Participants discussed how some kitemarks already hint at publishing venues that can be, and are, trusted, such as COPE membership, DOAJ listing and OASPA membership. 

A new (and as yet unreleased) rubric for measuring publishers by their practices is also in development within the librarian community. This underscores the thinking that process and transparency are important….”

Article processing charges for open access journal publishing: A review – Borrego – Learned Publishing – Wiley Online Library

Abstract:  Some open access (OA) publishers charge authors fees to make their articles freely available online. This paper reviews literature on article processing charges (APCs) that has been published since 2000. Despite praise for diamond OA journals, which charge no fees, most OA articles are published by commercial publishers that charge APCs. Publishers fix APCs depending on the reputation assigned to journals by peers. Evidence shows a relationship between high impact metrics and higher, faster rising APCs. Authors express reluctance about APCs, although this varies by discipline depending on previous experience of paying publication fees and the availability of research grants to cover them. Authors rely on a mix of research grants, library funds and personal assets to pay the charges. Two major concerns have been raised in relation to APCs: the inability of poorly funded authors to publish research and their impact on journal quality. Waivers have not solved the first issue. Research shows little extension of waiver use, unintended side effects on co-author networks and concerns regarding criteria to qualify for them. Bibliometric studies concur that journals that charge APCs have a similar citation impact to journals that rely on other income sources.


Report from Equity in Open Access workshop #2: Why do professors pick paywalls? – OASPA

“Following workshop #1, OASPA’s second ‘Equity in OA’ workshop was held on 28 March 2023. The report from workshop #2 was published last week by Alicia Wise and Lorraine Estelle of Information Power. 

Researchers’ preference for publishing behind paywalls was a recurring topic in workshop conversations, and our reflections on ‘Equity in OA’ workshop #2 are linked to the assertion that professors do pick paywalls – at least sometimes. But do they really want to? Drawing on discussions in workshop #2 and other sources, here are some thoughts on why this might be and what can be done about it….”

Raging Against The Mythical Figure Who Keeps Us Down (Not) – Future U

“A related issue: Academic publishing thrives on the unlimited growth model. Scientists publishing more papers are a source of revenue. Type ‘academic publishing racket’ into a search engine. I got 3,990,000 links. You only need to read a few to see that publishers are making big profits, are double dipping, and are not overly concerned about the damage done to the open exchange of ideas. You will see that vanity journals (e.g., Nature, Science, Cell), and the perceived need to publish within them, have changed the behavior of the science community in detrimental ways.

Who keeps for-profit journals in business? Who does free work for them as an author or referee? Who maintains the idea that if you don’t publish in vanity journals, you haven’t done quality science? People like me. There are alternative dissemination outlets that are more open, less profit-driven, and less vanity based. So why do I not use them? Maybe I justify choices by saying, ‘That’s the nature of the science game.’ Maybe in doing ‘my job,’ I lose attention to the reality that affects what I do, even if it isn’t listed on my job-to-do lists. Maybe I need to remind myself that resistance is in my control. I can stop supporting organizations that limit access to public resources for monetary gains….”

A suggestion for eLife

“I have a simple suggestion for how to counteract such a concern, and that is that the journal should adopt a different criterion for deciding which papers to review – this should be done solely on the basis of the introduction and methods, without any knowledge of the results. Editors could also be kept unaware of the identity of authors.


If eLife wants to achieve a distinctive reputation for quality, it could do so by only taking forward to review those articles that have identified an interesting question and tackled it with robust methodology. It’s well-known that editors and reviewers tend to be strongly swayed by novel and unexpected results, and will disregard methodological weaknesses if the findings look exciting. If authors had to submit a results-blind version of the manuscript in the first instance, then I predict that the initial triage by editors would look rather different.  The question for the editor would no longer be one about the kind of review the paper would generate, but would focus rather on whether this was a well-conducted study that made the editor curious to know what the results would look like.  The papers that subsequently appeared in eLife would look different to those in its high-profile competitors, such as Nature and Science, but in a good way.  Those ultra-exciting but ultimately implausible papers would get filtered out, leaving behind only those that could survive being triaged solely on rationale and methods.”

Measuring Back: Bibliodiversity and the Journal Impact Factor brand. A Case study of IF-journals included in the 2021 Journal Citations Report. | Zenodo

Abstract:  Little attention has been devoted to whether the Impact Factor (IF) can be considered a responsible metric in light of bibliodiversity. This paper critically engages with this question by measuring the following variables of IF journals included in the 2021 Journal Citation Reports and examining their distribution: publishing models (hybrid, Open Access with or without fees, subscription), world regions, language(s) of publication, subject categories, publishers, and the prices of article processing charges (APC) if any. Our results show that the quest for prestige or perceived quality through the IF brand poses serious threats to bibliodiversity. The IF brand can indeed hardly be considered a responsible metric insofar as it perpetuates publishing concentration, maintains a domination of the Global North and its attendant artificial image of mega producer of scholarly content, does not promote linguistic diversity, and de-incentivizes fair and equitable open access by entrenching fee-based OA delivery options with rather high APCs.


Tired of the profiteering in academic publishing? Vote with your feet. – Spatial Ecology and Evolution Lab

“First, let’s say one of the Olympian Editors asks you to review a manuscript for one of the profit-making esteem engines. You record on your CV that you have been asked to review for this journal (esteem points!), but you politely decline the invitation, explaining that you would rather your professional service go towards open science initiatives.

The editor at the esteem factory finds that her job has just become a lot harder than it used to be. It is hard to find reviewers, and the reviews aren’t as thorough or as good anymore. She keeps the line on her CV stating that she has been an editor at X (esteem points!), and then steps down at the next opportunity. She has better things to do than spend her days cajoling reluctant reviewers. And so it goes.

Being a discerning reviewer has nothing but benefits. There are no esteem points lost for the individual, and there is a higher turnover of editorial staff at high-esteem journals. This turnover means more opportunity and less competition for these positions, and it means the esteem hierarchy is flattened somewhat because, well, who hasn’t been an editor for Nature, and, besides, the stuff published there isn’t as good as it used to be. Overburdened reviewers have an important reason to do less reviewing; they are, through individual decision, changing the face of academic publishing and making science accessible to all….”

Publication in English should not be associated with prestige | Responsible Research

“Recently, multilingualism in scholarly communication has emerged as one of the central points of concern in the conversations on responsible science and research. This is a report of EuroScience Open Forum 2022 (ESOF) panel discussion on challenges of recognizing and supporting multilingual scholarly work in different geographical and organisational contexts….

Currently, the majority of scholarly communication is conducted in English. For example, in 2020, 95% of all the articles in one of the largest bibliographic databases, the Web of Science, were in English. 

Whilst the prevalence of an academic lingua franca certainly advances international communication between academics, multiple issues arise from this language hegemony. Considering that the majority of the global population does not speak English, the wider society is largely excluded from the scientific discourse and sharing of information. Concurrently, researchers have fewer resources and possibilities to publish in other languages without it negatively affecting their careers. As such, many local non-English Open Access journals, which play an important role as publishers of locally relevant research, are struggling to subsist.

Today, efforts to tackle these challenges are emerging in various localities through education, technological innovation and reformations of research assessment and funding. Moreover, important steps have been taken through international collaboration and policy making, which are essential for the fair and efficient global research community….

Throughout the panel discussion, it was emphasised that funders are in a key position to encourage and facilitate linguistic variety in scholarly communication. The Executive Director of cOAlition S, an Open Access initiative of national research funding organisations, Johan Rooryck presented that the goal of increasing Diamond Open Access journals goes hand in hand with the effort of widening multilingualism in publishing. Rooryck noted that according to a recent Diamond Open Access study, most Diamond journals are multilingual while serving an international readership. According to Rooryck, national funders can also promote multilingual bibliodiversity of academic books via funding schemes for Open Access books. Rooryck’s message was clear: “Funders and universities should value multilingual publication in the same way as publication in English. We should convince PhD students of this too. Publication in English should not be associated with prestige.”…”

‘The attitude of publishers is a barrier to open access’ | UKSG

“Transitioning to open research is incredibly important for the University of Liverpool for two reasons: the external environment we are now operating in, and our own philosophy and approach to research.

But there are barriers, particularly the research culture and the attitude of publishers….

In my experience, the biggest barrier is culture: researchers are used to operating in a particular way. Changing practice and mindset takes time and must be conducted sensitively.

Open research benefits all researchers, so having their support on this journey is vitally important.

Some researchers are concerned that publishing their work open access has implications for their intellectual property (IP) rights. In fact, this is a perceived problem, since the same IP protections apply to all work, whether published behind a paywall or published open access.

Despite the recognition that citation metrics are not a suitable proxy for research assessment, some researchers continue to seek the kudos of publishing in a so-called prestige journal with a high impact factor, such as ‘Nature’.  They see this as a key career goal and worry their progression will falter without this achievement….

So, while I acknowledge there has been significant progress towards open access globally, and in particular compliance with UKRI’s open access policy, the attitude of publishers, which are driven by profit margins, continues to be an unacceptable barrier….”

Not all that shines is Diamond: why Open Access publication favours rich authors, prestigious universities and industry-funded research | A Blog of Trial and Error

by Marcel Hobma

In recent years, it has become increasingly common for researchers to publish their work in Open Access by paying article processing costs to the publisher [1, 2]. Before the digital revolution, academic publishing was mostly subscription-based and university libraries paid publishers at regular intervals for large bundles of journals. Every physical copy of a journal came with its own production and distribution costs, making Open Access an unrealistic pursuit. When academic research was digitalized and the costs of copying and disseminating research lowered dramatically, the Open Access movement gained momentum and at least four ways of Open Access (OA) publishing joined the old subscription model [3]. Authors can now publish their studies in subscription-based Green OA journals, which allows them to republish their work on large preprint servers such as ArXiv and in freely accessible institutional repositories managed by university libraries. A second option is to publish in full Open Access, peer-reviewed journals that rely on author-paid article processing costs to maintain a steady source of income. Diamond OA journals like the Journal of Trial and Error also publish in full Open Access, but don’t charge authors any costs. Lastly, there exists the option to publish in commercial Hybrid journals that combine the subscription model with Open Access publishing.

Article processing costs allow researchers to publish Open Access articles in well-edited and prestigious journals, which is the main reason for authors and their funders to pay these costs. Open access is often portrayed as essential to the transparent and cooperative nature of science; it also aims to circumvent the high paywalls raised by commercial publishers that limit access to research, and thereby to facilitate the dissemination of valuable knowledge [4-6]. However, the promises and advantages of the author-paid funding mechanism also come with a downside in the form of publication bias. Not every author or institution might be able or willing to pay article processing costs if they are too high, and this could lead to selective publishing practices that favour certain groups of researchers, institutions and research topics.



Impact Factors, Altmetrics, and Prestige, Oh My: The Relationship Between Perceived Prestige and Objective Measures of Journal Quality | SpringerLink

Abstract:  The focus of this work is to examine the relationship between subjective and objective measures of prestige of journals in our field. Findings indicate that items pulled from Clarivate, Elsevier, and Google all have statistically significant elements related to perceived journal prestige. Just as several widely used bibliometric metrics were related to perceived prestige, so were altmetric scores.


Starstruck by journal prestige and citation counts? On students’ bias and perceptions of trustworthiness according to clues in publication references | SpringerLink

Abstract:  Research is becoming increasingly accessible to the public via open access publications, researchers’ social media postings, outreach activities, and popular disseminations. A healthy research discourse is typified by debates, disagreements, and diverging views. Consequently, readers may rely on the information available, such as publication reference attributes and bibliometric markers, to resolve conflicts. Yet, critical voices have warned about the uncritical and one-sided use of such information to assess research. In this study we wanted to get insight into how individuals without research training place trust in research based on clues present in publication references. A questionnaire was designed to probe respondents’ perceptions of six publication attributes. A total of 148 students responded to the questionnaire, of which 118 were undergraduate students (with limited experience and knowledge of research) and 27 were graduate students (with some knowledge and experience of research). The results showed that the respondents were mostly influenced by the number of citations and the recency of publication, while author names, publication type, and publication origin were less influential. There were few differences between undergraduate and graduate students, with the exception that undergraduate students more strongly favoured publications with multiple authors over publications with single authors. We discuss possible implications for teachers that incorporate research articles in their curriculum.


Less ‘prestigious’ journals can contain more diverse research, by citing them we can shape a more just politics of citation. | Impact of Social Sciences

“The ‘top’ journals in any discipline are those that command the most prestige, and that position is largely determined by the number of citations their published articles garner. Despite being highly problematic, citation-based metrics remain ubiquitous, influencing researchers’ review, promotion and tenure outcomes. Bibliometric studies in various fields have shown that the ‘top’ journals are heavily dominated by research produced in and about a small number of ‘core’ countries, mostly the USA and the UK, and thus reproduce existing global power imbalances within and beyond academia.

In our own field of higher education, studies over many years have revealed persistent western hegemony in published scholarship. However, we observed that most studies tend to focus their analysis on the ‘top’ journals, and (by default) on those that publish exclusively in English. We wondered if publication patterns were similar in other journals. So, we set out to compare (among other things) the author affiliations and study contexts of articles published in journals in the top quartile of impact (Q1), with those in the bottom quartile of impact (Q4)….”