Rethinking Research Assessment for the Greater Good: Findings from the RPT Project – Scholarly Communications Lab | ScholCommLab

“The review, promotion, and tenure (RPT) process is central to academic life and workplace advancement. It influences where faculty direct their attention, research, and publications. By unveiling the RPT process, we can inform actions that lead towards a greater opening of research. 

Between 2017 and 2022, we conducted a multi-year research project involving the collection and analysis of more than 850 RPT guidelines and 338 surveys with scholars from 129 research institutions across Canada and the US. Starting with a literature review of academic promotion and tenure processes, we launched six studies applying mixed methods approaches such as surveys and matrix coding.

So how do today’s universities and colleges incentivize open access research? Read on for 6 key takeaways from our studies….”

At what point do academics forego citations for journal status? | Impact of Social Sciences

“The limitations of journal-based citation metrics for assessing individual researchers are well known. However, the way in which these assessment systems differentially shape research practices within disciplines is less well understood. Presenting evidence from a new analysis of business and management academics, Rossella Salandra, Ammon Salter, and James Walker explore how journal status is valued by these academics and the point at which journal status becomes more prized than academic influence….”

An open science argument against closed metrics

“In the Open Scientist Handbook, I argue that open science supports anti-rivalrous science collaborations in which most metrics are of little or even negative value. I would like to share some of these arguments here….

Institutional prestige is a profound drag on the potential for networked science. If your administration has a plan to “win” the college ratings game, this plan will only make doing science harder. It makes being a scientist less rewarding. Playing finite games of chasing arbitrary metrics or ‘prestige’ drags scientists away from the infinite play of actually doing science….

As Cameron Neylon said at the metrics breakout of the ‘Beyond the PDF’ conference some years ago, “reuse is THE metric.” Reuse reveals and confirms the advantage that open sharing has over current, market-based, practices. Reuse validates the work of the scientist who contributed to the research ecosystem. Reuse captures more of the inherent value of the original discovery and accelerates knowledge growth….”

bjoern.brembs.blog » Why publication services must not be negotiated

“Recently, the “German Science and Humanities Council” (Wissenschaftsrat) has issued its “Recommendations on the Transformation of Academic Publishing: Towards Open Access”. On page 33 they write that increasing the competition between publishers is an explicit goal of current transformative agreements:

publishers become publication service providers and enter into competition with other providers

This emphasis on competition refers back to the simple fact that as content (rather than service) providers, legacy publishers currently enjoy monopolies on their content, as, e.g., the European Commission has long recognized: In at least two market analyses, one dating as far back as 2003 and one from 2015, the EC acknowledges the lack of a genuine market due to the lack of substitutability…

Without such prestige, the faculty argue, they cannot work and would risk their careers and funding. Arguments that these ancient vehicles are unreliable, unaffordable and dysfunctional are brushed away by emphasizing that their academic freedom allows them to drive whatever vehicle they want to their field work. Moreover, they argue, the price of around one million is “very attractive” because of the prestige the money buys them.

With this analogy, it becomes clear why and how tenders protect the public interest against any individual interests. In this analogy, it is likely also clear that academic freedom does not and should not trump all other considerations. In this respect, I would consider the analogy very fitting and have always argued for such a balance of public and researcher interests: academic freedom does not automatically exempt academics from procurement rules.

Therefore, ten experts advocate a ban on all negotiations with publishers and instead call for policies ensuring that all publication services for public academic institutions must be awarded by tender, analogous to the example set by Open Research Europe and to how all other, non-digital infrastructure contracts are awarded.”

bjoern.brembs.blog » Replacing the prestige signal

“Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal….

How could a more modern system support a ‘prestige signal’ that would actually deserve the moniker? Obviously, if journals were to be replaced by a modern information infrastructure, our imagination is the only limit on which filters the scholarly community may want to implement. Some general ideas may help guide that brainstorming process: if the use of such ‘prestige’ not only in career advancement and funding, but also in the defense of the current system is anything to go by, there should be massive demand for a prestige signal that is worth its price. Today, this prestige arises from selectivity based on expertise (Nature’s slogan has always been “the world’s best science”). This entails an expert-based filter that selects only very few (‘the best’) out of the roughly 3 million peer-reviewed articles published each year. Importantly, the criteria for this filter need not be specified in advance. In a scenario in which all journals have been replaced by a modern infrastructure for text, data and code, such services (maybe multiple services, competing for our subscriptions?) need only record not just the articles they selected (as now), but also those they explicitly did not select, in addition to the rest that was never considered. Users (or the services themselves, or both) would then be able to compute track records of such services according to criteria that are important to them.

Mimicking current implementations, the number of citations, e.g., could be used to determine which service selected the most highly cited articles, how many it missed, and which it wrongly failed even to consider. But why stop at bare citations? A modern infrastructure allows for plenty of different markers of scholarly quality. One could just as well use an (already existing) citation typology to differentiate between different types of citations; one could count replications, media mentions, anything, really, to derive track records by which these services may be compared. Given the plentiful demand indicated by the fervent supporters of prestigious journals, services would compete with each other, using their track records, for the subscriptions of individual and institutional users, providing for innovation at competitive prices, just like any other service market. Such an efficient, competitive marketplace of services, however, can only ever arise if the current monopoly journals are replaced with a system that allows for such a market to be designed. If demand were not as high as expected, but such a signal were nevertheless desired by some, a smaller, more basic implementation could be arranged on a non-profit, open-source basis, funded by the vast savings that replacing journals would entail. One might also opt to hold competitions for such services, awarding prizes to the service that best serves the needs of the scholarly community. The possibilities are endless – but only once the scholarly community finds a way to put itself into a position where it has any power over the implementation of its infrastructure.”
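
Brembs's track-record idea lends itself to a simple computation. The sketch below is purely illustrative (the function name, the citation threshold, and the toy data are all assumptions, not anything from the post): a service publishes which articles it selected, rejected, or never considered, and anyone can score it against a quality marker of their choosing.

```python
# Hypothetical sketch: scoring a selection service's "prestige" track record.
# A service records three disjoint sets of article IDs: selected, rejected,
# and unconsidered. Users score it against whatever quality marker they
# care about -- here, a simple citation-count threshold.

def track_record(selected, rejected, unconsidered, citations, threshold=100):
    """Return hit/miss counts for a service against a citation threshold."""
    highly_cited = {a for a, c in citations.items() if c >= threshold}
    return {
        "hits": len(selected & highly_cited),           # selected and highly cited
        "false_alarms": len(selected - highly_cited),   # selected, not highly cited
        "misses": len(rejected & highly_cited),         # rejected, yet highly cited
        "overlooked": len(unconsidered & highly_cited), # never even considered
    }

# Toy data: article IDs mapped to citation counts.
citations = {"a1": 250, "a2": 10, "a3": 180, "a4": 5, "a5": 320}
record = track_record(
    selected={"a1", "a2"},
    rejected={"a3"},
    unconsidered={"a4", "a5"},
    citations=citations,
)
# record == {"hits": 1, "false_alarms": 1, "misses": 1, "overlooked": 1}
```

Swapping the threshold for a citation typology, replication counts, or media mentions, as the post suggests, only changes the `citations` mapping and the predicate that defines `highly_cited`.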

Journal Prestige Index: Expanding the Horizons of Assessment of Research Impact

“Against the backdrop of the above facts, a new research metric for a transparent, fair and comprehensive assessment of research impact was the need of the hour. Of late, the Higher Education Commission (HEC) of Pakistan has taken a bold initiative in this regard and developed a new combinatorial, proprietary and derived metric known as the journal prestige index (JPI), which takes into account six well-established, publicly available, and most influential citation-related parameters for its calculation.7 These were chosen from a list of 29 different citation and usage metrics.8 Both raw scores and percentiles are used to give equal weightage to all six factors….

HJRS represents a bold step in the right direction for academic journal recognition and ranking.”

How faculty define quality, prestige, and impact of academic journals

Abstract:  Despite the calls for change, there is significant consensus that when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high “quality,” is published in “prestigious” journals, and has an “impact.” Nevertheless, such terms are highly subjective, which makes it challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact of academic journals? We address this question by surveying 338 faculty members from 55 different institutions in the U.S. and Canada. Although it relies on self-reported definitions that are not linked to actual behavior, this study’s findings highlight that faculty often describe these distinct terms in overlapping ways. Additionally, results show that the marked variance in definitions across faculty does not correspond to demographic characteristics. This study’s results highlight the subjectivity of common research terms and the importance of implementing evaluation regimes that do not rely on ill-defined and potentially context-specific concepts.

Why the price of scholarly publishing is so much higher than the cost | Sauropod Vertebra Picture of the Week

“In an efficient market, competing providers of a good will each try to undercut each other until the prices they charge approach the cost. If, for example, Elsevier and Springer-Nature were competing in a healthy free market, they would each be charging prices around one third of what they are charging now, for fear of being outcompeted by their lower-priced competitor. (Half of those price-cuts would be absorbed just by decreasing the huge profit margins; the rest would have to come from streamlining business processes, in particular things like the costs of maintaining paywalls and the means of passing through them.)

So why doesn’t the Invisible Hand operate on scholarly publishers? Because they are not really in competition. Subscriptions are not substitutable goods because each published article is unique. If I need to read an article in an Elsevier journal then it’s no good my buying a lower-priced Springer-Nature subscription instead: it won’t give me access to the article I need.

(This is one of the reasons why the APC-based model — despite its very real drawbacks — is better than the subscription model: because the editorial-and-publication services offered by Elsevier and Springer-Nature are substitutable. If one offers the service for $3000 and the other for $2000, I can go to the better-value provider. And if some other publisher offers it for $1000 or $500, I can go there instead.)…

Björn Brembs has been writing for years about the fact that every market has a luxury segment: you can buy a perfectly functional wristwatch for $10, yet people spend thousands on high-end watches. He’s long been concerned that if scholarly publishing goes APC-only, then people will be queuing up to pay the €9,500 APC for Nature in what would become a straightforward pay-for-prestige deal. And he’s right: given the outstandingly stupid way we evaluate researchers for jobs, promotion and tenure, lots of people will pay a 10x markup for the “I was published in Nature” badge even though Nature papers are an objectively bad way to communicate research.

But it feels like something stranger is happening here. It’s almost as though the whole darned market is a luxury segment….

How can funders fix this, and get APCs down to levels that approximate publishing cost? I see at least three possibilities.

First, they could stop paying APCs for their grantees. Instead, they could add a fixed sum onto all grants they make — $1,500, say — and leave it up to the researchers whether to spend more on a legacy publisher (supplementing the $1,500 from other sources of their own) or to spend less on a cheaper born-OA publisher and redistribute the excess elsewhere.

Second, funders could simply publish the papers themselves. To be fair, several big funders are doing this now, so we have Wellcome Open Research, Gates Open Research, etc. But doesn’t it seem a bit silly to silo research according to which body awarded the grant that funded it? And what about authors who don’t have a grant from one of these bodies, or indeed any grant at all?

That’s why I think the third solution is best. I would like to see funders stop paying APCs and stop building their own publishing solutions, and instead collaborate to build and maintain a global publishing solution that all researchers could use irrespective of grant-recipient status. I have much to say on what such a solution should look like, but that is for another time.”
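
The pricing argument at the top of this excerpt is easy to make concrete. The numbers below are purely illustrative (the post names no specific prices); they just show that “prices around one third of current” and “half the cut from profit margins” are mutually consistent with the mid-30s-percent margins the large publishers are widely reported to earn.

```python
# Illustrative sanity check of the excerpt's pricing arithmetic.
# All figures are hypothetical; only the ratios come from the post.
current_price = 3000.0                            # hypothetical current per-article price
competitive_price = current_price / 3             # "around one third" -> 1000.0
price_cut = current_price - competitive_price     # -> 2000.0
margin_absorbed = price_cut / 2                   # "half ... from profit margins" -> 1000.0
streamlining_absorbed = price_cut - margin_absorbed  # rest from leaner processes -> 1000.0
implied_margin = margin_absorbed / current_price  # ~0.33, in line with reported
                                                  # mid-30s% publisher margins
```

In other words, the "one third" and "half the cut" figures jointly imply a profit margin of roughly a third of revenue, which is why the two claims hang together.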

Commercial Science Journals: A Luxury Market? – SBMT

“SBMT: Why are the “diamond/platinum” journals the least valued by editorial metrics and funding agencies?

Dr. TR Shankar Raman: I have no idea why this should be so. It feels like the academic community has just painted itself into a corner. There are lots of excellent diamond open access journals. The journals published by the Indian Academy of Sciences are a good example (although they have a weird co-publishing arrangement with Springer Nature, the journals and papers can be freely accessed via the Academy website and there are no charges for authors to publish either). Of course, the number of papers that a diamond open access journal may be able to publish may be lower, and many are in niche areas of science rather than multi-disciplinary in scope, and hence their reach may be lower than what big-budget commercial journals can achieve with their resources. But this only means that diamond open access journals should be supported more to achieve better reach, not shift to commercial publishers. All public and philanthropic funding for science has everything to gain by supporting and mandating publication in diamond open access journals….

SBMT: How to design a policy in defense of Southern science through the promotion of “diamond/platinum” journals?

Dr. TR Shankar Raman: As individuals, we can each take a stand, as I have tried to in my post—that I will not review for or publish in commercial journals, but will especially do so for diamond open access journals. Particularly, senior scientists and leaders in their fields must set an example by publishing in, reviewing for, or accepting to be on the boards of diamond open access journals. But this will not go far unless we also collectively work to change overall policy. As a community, we must petition our academies, funders, and science administrators to change policies to give greater recognition to papers published in diamond open access journals. This can trigger a big change, especially if it begins to count towards jobs and promotions in academia. Impact factor should be trashed as outdated, harmful, and retrogressive. Recipients of public funds should be mandated to publish in diamond open access journals published by nonprofit scientific societies, as this is the most cost-effective way to spend the available (limited) funds to achieve publication that is freely, openly, and widely accessible, while supporting and advancing science. Other initiatives such as Gold Open Access, self-archiving of submitted final versions, or pay-to-publish APC models are all half measures that discriminate against and exclude large numbers of scientists around the world who cannot pay the large fees involved. Policies should support membership fees for scholars and for new and tenured faculty to join learned academic societies that publish diamond open access journals, so that the funds are kept within the community to advance science rather than feed the profits of commercial companies….”

Preventing the Matthew principle in science publishing – Speijer – BioEssays – Wiley Online Library

“During the pandemic, interactions with students had to be from a distance. This got me to think about biases in assessments. In one of the courses, students had to reflect on a technically difficult “Omics” paper, answering questions and giving critical feedback. As individual student-teacher interactions had to be quite minimal, I ended up with a bunch of documents with “faceless” names. In the process of marking different students’ valiant efforts I noticed that, unsurprisingly, names still automatically conjured up faces, as names contain information regarding gender and ethnicity. Of course, I try to combat conscious prejudice. But “ay, there’s the rub”: only a fool would deny unconscious prejudices. I am far from the first to notice that all kinds of (un)conscious biases pervade our (digital) work environment. Some can be more easily circumvented than others. In the case of my exams, a simple intermediary program, removing all personal information and generating a random number, would do the trick.

Could such an intake system also be of use in publishing, as is currently being tried out by some publishing start-ups (or should I say “up-starts”)? One objection might be that there are real benefits to how the system currently operates. Aren’t the “top” researchers better known for a reason? Overall, they produce higher-quality work, so they should have easier access to widely read journals. Also, lesser-known scientists making grandiose claims: should these indeed not be looked at more critically? Thus, it is reasoned, the identity of the people and the institute responsible for the paper just gives another valid criterion to base assessment on. On the other hand, high-quality work should be able to stand on its own, and grandiose claims should always be met with skepticism, irrespective of the identity of the claimant….”
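
The “simple intermediary program” Speijer describes for blinding student work could be sketched roughly as follows; the submission format and field names here are invented for illustration, not taken from any real system.

```python
# Minimal sketch of a blinding intermediary: strip identifying fields from
# submissions and hand the assessor only a random ID. The submission format
# (a dict with "name" and "text") is a made-up stand-in.
import secrets

def blind(submissions):
    """Replace author names with random IDs; return blinded docs and a key."""
    key = {}       # random ID -> real name, held by the intermediary only
    blinded = []
    for sub in submissions:
        anon_id = secrets.token_hex(4)   # 8 hex chars, e.g. "9f1c2ab0"
        key[anon_id] = sub["name"]
        blinded.append({"id": anon_id, "text": sub["text"]})
    return blinded, key

subs = [{"name": "A. Student", "text": "Omics critique ..."},
        {"name": "B. Student", "text": "Omics critique ..."}]
docs, key = blind(subs)
# The grader sees only `docs`; marks are mapped back to names via `key`.
```

The same shape would work for a blinded manuscript intake: the intermediary holds the key, and reviewers see only content under a random identifier.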

Association between productivity and journal impact across disciplines and career age

Abstract:  The association between productivity and impact of scientific production is a long-standing debate in science that remains controversial and poorly understood. Here we present a large-scale analysis of the association between yearly publication numbers and average journal-impact metrics for the Brazilian scientific elite. We find this association to be discipline-specific, career-age dependent, and similar among researchers with outlier and non-outlier performance. Outlier researchers either outperform in productivity or in journal prestige, but they rarely do so in both categories. Non-outliers also follow this trend and display negative correlations between productivity and journal prestige, but with discipline-dependent intensity. Our research indicates that academics are averse to simultaneous changes in their productivity and journal-prestige levels over consecutive career years. We also find that career patterns concerning productivity and journal prestige are discipline-specific, having in common a rise in productivity with career age for most disciplines and a higher chance of outperforming in journal impact during early career stages.

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year the working group on Recognition and Rewards, part of the Open Science Programme, has reflected on and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”

Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles1, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Open access publishing is the ethical choice | Wonkhe

“I had a stroke half a decade ago and found I couldn’t access the medical literature on my extremely rare vascular condition.

I’m a capable reader, but I couldn’t get past the paywalls – which seemed absurd, given most research is publicly funded. While I had, already, long been an open access advocate by that point, this strengthened my resolve.

The public is often underestimated. Keeping research locked behind paywalls under the assumption that most people won’t be interested in, or capable of, reading academic research is patronising….

While this moral quandary should not be passed to young researchers, there may be benefits to them in taking a firm stance. Early career researchers are less likely to have grants to pay for article processing charges to make their work open access compared to their senior colleagues. Early career researchers are also the ones who are inadvertently paying the extortionate subscription fees to publishers. According to data from the Higher Education Statistics Agency (HESA), the amount of money UK universities fork out each year to access paywalled content from Elsevier – the largest academic publisher in the world – could pay 1,028 academic researchers a salary of £45,000 per year.

We know for-profit publishers, such as Elsevier, hold all the cards with respect to those prestigious titles. What we need are systematic “read and publish” deals that allow people to publish where they want without having to find funding for open access….

The current outlook for prospective researchers to secure an academic position at a university is compromised because so much money is spent propping up for-profit, commercial publishers. Rather than focusing on career damage to those who can’t publish with an Elsevier title, we should focus on the opportunity cost in hundreds of lost careers in academia….”
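
As a quick check on the HESA-based comparison quoted above, the annual spend it implies is straightforward to back out from the two cited figures.

```python
# Back-of-envelope check on the salary comparison quoted above.
researchers = 1028      # positions the article says the spend could fund
salary = 45_000         # GBP per year
implied_annual_spend = researchers * salary   # -> 46_260_000, i.e. ~£46m/year
```

That is, the cited figures imply that UK universities' collective Elsevier access spend is on the order of £46 million per year.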