Why Do We Need to Change Research Evaluation Systems? — Observatory | Institute for the Future of Education

“Can we break out of this vicious cycle? Are there alternatives? Yes, there are. For some years now, various movements worldwide have sought to change the system for evaluating research. In 2012, the “San Francisco Declaration” proposed eliminating metrics based on the impact factor. There was also the Charte de la désexcellence (“Charter of Dis-Excellence”) mentioned above. In 2015, a group of academics signed the Leiden Manifesto, which warned of the “widespread misuse of indicators in evaluating scientific performance.” Since 2013, the group Science in Transition has sought to reform the science evaluation system. Finally, since 2016, the Collectiu InDocentia, created at the University of Valencia (Spain), has also been doing its part. …”

DORA receives $1.2M grant from Arcadia to accelerate research assessment reform | DORA

“Research assessment reform is part of the open research movement in academia that asks the question: Who and what is research for? The San Francisco Declaration on Research Assessment (DORA), an initiative that operates under the sponsorship of the American Society for Cell Biology, has been awarded a 3-year, $1.2M grant from Arcadia – a charitable fund of Lisbet Rausing and Peter Baldwin. The generous funding will support Tools to Advance Research Assessment (TARA), a project to facilitate the development of new policies and practices for academic career assessment. Project TARA is a collaboration with Sarah de Rijcke, Professor in Science and Evaluation Studies and director of the Centre for Science and Technology Studies (CWTS) at Leiden University, and Ruth Schmidt, Associate Professor at the Institute of Design at the Illinois Institute of Technology.

The grant for Project TARA will help DORA to identify, understand, and make visible the criteria and standards universities use to make hiring, promotion, and tenure decisions. This information will be used to create resources and practical guidance on the reform of research assessment for academic and scholarly institutions. The grant provides DORA with crucial support to create the following outputs:

An interactive online dashboard that tracks criteria and standards academic institutions use for hiring, review, promotion, and tenure.
A survey of U.S. academic institutions to gain a broad understanding of institutional attitudes and approaches to research assessment reform.
A toolkit of resources informed by the academic community to support academic institutions working to improve policy and practice….”

ERC plans for 2022 announced | ERC: European Research Council

“On the occasion of the adoption of this work programme, the ERC is also announcing its formal endorsement of the San Francisco Declaration on Research Assessment (DORA), in line with its long-standing adherence to the highest standards of research assessment. The ERC is convinced that the broad implementation of research assessment procedures that integrate the DORA principles is the key to an equitable transition to Open Science.”

Impact factor abandoned by Dutch university in hiring and promotion decisions

“A Dutch university says it is formally abandoning the impact factor — a standard measure of scientific success — in all hiring and promotion decisions. By early 2022, every department at Utrecht University in the Netherlands will judge its scholars by other standards, including their commitment to teamwork and their efforts to promote open science, says Paul Boselie, a governance researcher and the project leader for the university’s new Recognition and Rewards scheme. “Impact factors don’t really reflect the quality of an individual researcher or academic,” he says. “We have a strong belief that something has to change, and abandoning the impact factor is one of those changes.” …”

Incorporating Preprints into Academic Assessment | DORA

“Join DORA and ASAPbio on Tuesday, June 29, for a joint webinar on preprints and academic assessment….

Speakers will discuss important topics surrounding the incorporation of preprints into academic assessment: the value of considering preprints in academic assessment, how preprints can be included in existing assessment processes, and what challenges may arise along the way. Participants will have the opportunity to engage in the dialogue and ask questions of the speakers in the last section of the webinar.

This webinar is free to attend and open to everyone interested in improving research assessment. In particular, this webinar will aim to equip early career researchers, faculty, and academic leadership with the knowledge to advocate for the use of preprints at their institutions.”

Job Opening: DORA Program Manager | DORA

“Since 2013, more than 19,000 individuals and organizations in 145 countries have signed the San Francisco Declaration on Research Assessment (DORA) and committed to improving the ways research and researchers are assessed for hiring, promotion, and funding decisions. As an initiative, DORA raises awareness and facilitates the implementation of good practice in research assessment to catalyze change and improve equity in academia. As the DORA Program Manager, you will be working with the Program Director to ensure the execution of DORA activities and projects. The DORA Program Manager assists with, plans, initiates, and executes various projects, ensuring that goals are met and that projects are completed on time and within budget. …”

SPACE to evolve academic assessment: A rubric for analyzing institutional conditions and progress indicators | DORA

“This is part of DORA’s toolkit of resources to support academic institutions that are improving their policies and practices. Find the other resources in the toolkit here.

Improving research and scholarship assessment practices requires the ability to analyze the outcomes of efforts and interventions. However, when conducted only at the unit level of individual interventions, these evaluations and reflections miss opportunities to understand how institutional conditions themselves set the table for the success of new efforts, or how developing institutional capabilities might improve the effectiveness and impact of these new practices at greater scale. The SPACE rubric was developed to help institutions at any stage of academic assessment reform gauge their institutional ability to support interventions and set them up for success.

Organizations can use the SPACE rubric to support the implementation of fair and responsible academic career assessment practices in two ways: First, it can help establish a baseline for the current state of infrastructural conditions, to gauge an institution’s ability to support the development and implementation of new academic assessment practices and activities. Second, the rubric can be used to retroactively analyze how strengths or gaps in these institutional conditions may have impacted the outcomes of concrete interventions targeted to specific types of academic assessment activities—such as hiring, promotion, tenure, or even graduate student evaluation—either helping or hindering progress toward those goals.

The SPACE rubric is a result of DORA’s partnership with Ruth Schmidt, Associate Professor at the Institute of Design of the Illinois Institute of Technology, who led the iterative participatory design process. The creation of the rubric was informed by nearly 75 individuals across 26 countries and 6 continents, and benefited from multiple rounds of feedback….”

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year, the working group on Recognition and Rewards, part of the Open Science Programme, has reflected on and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”

Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Are stakeholders measuring the publishing metrics that matter?: Putting research into context

“Perhaps the most fundamental aspect of compiling and implementing more meaningful research metrics that the NISO panelists discussed is the importance of putting data into context. And, as the speakers noted, there are multiple facets of context to consider, including:

The strengths and limitations of different metrics by discipline/subject matter (e.g., some metrics are better suited to certain types of research)
The intended uses and overall strengths and limitations of particular data points (e.g., altmetrics are “indicators” of impact, not measures of quality, and the journal impact factor (JIF) was never meant to be used to measure the impact of individual articles or scholars)
The cultural context that a researcher is operating within and the opportunities, challenges, and biases they have experienced
How and where a research output fits within scholars’ other professional contributions (e.g., recognizing how individual research outputs are part of broader bodies of work and also measuring the impacts of scholarly outputs that do not fit within traditional publication-based assessment systems) …”

Open access publishing is the ethical choice | Wonkhe

“I had a stroke half a decade ago and found I couldn’t access the medical literature on my extremely rare vascular condition.

I’m a capable reader, but I couldn’t get past the paywalls – which seemed absurd, given most research is publicly funded. While I had, already, long been an open access advocate by that point, this strengthened my resolve.

The public is often underestimated. Keeping research locked behind paywalls under the assumption that most people won’t be interested in, or capable of, reading academic research is patronising….

While this moral quandary should not be passed to young researchers, there may be benefits to them in taking a firm stance. Early career researchers are less likely to have grants to pay for article processing charges to make their work open access compared to their senior colleagues. Early career researchers are also the ones who are inadvertently paying the extortionate subscription fees to publishers. According to data from the Higher Education Statistics Agency (HESA), the amount of money UK universities fork out each year to access paywalled content from Elsevier – the largest academic publisher in the world – could pay 1,028 academic researchers a salary of £45,000 per year.

We know for-profit publishers, such as Elsevier, hold all the cards with respect to those prestigious titles. What we need are systematic “read and publish” deals that allow people to publish where they want without having to find funding for open access….

The current outlook for prospective researchers to secure an academic position at a university is compromised because so much money is spent propping up for-profit, commercial publishers. Rather than focusing on career damage to those who can’t publish with an Elsevier title, we should focus on the opportunity cost in hundreds of lost careers in academia….”
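As a back-of-envelope check on the HESA-based figure quoted above, the excerpt gives only the two derived numbers (1,028 salaries at £45,000 each); multiplying them recovers the annual Elsevier subscription spend the author is implying. The sketch below is purely illustrative arithmetic, and the variable names are invented here rather than taken from any HESA dataset.

```python
# Back-of-envelope check of the Wonkhe/HESA figure quoted above.
# The excerpt says the UK's annual spend on Elsevier paywalled content
# could instead fund 1,028 researchers at £45,000 each; multiplying the
# two published numbers recovers the implied total.

fundable_positions = 1_028   # researcher salaries, per the excerpt
salary_gbp = 45_000          # annual salary assumed in the excerpt

implied_annual_spend_gbp = fundable_positions * salary_gbp
print(f"Implied annual spend on Elsevier access: £{implied_annual_spend_gbp:,}")
# Prints: Implied annual spend on Elsevier access: £46,260,000
```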

Incentivization Blueprint — Open Research Funders Group

“A growing number of funders are eager to encourage grantees to share their research outputs – articles, code and materials, and data. To accelerate the adoption of open norms, deploying the right incentives is of paramount importance. Specifically, the incentive structure needs to both reduce its reliance on publication in high-impact journals as a primary metric, and properly value and reward a range of research outputs.

This Incentivization Blueprint seeks to provide funders with a stepwise approach to adjusting their incentivization schemes to more closely align with open access, open data, open science, and open research. Developed by the Open Research Funders Group, the Blueprint provides organizations with guidance for developing, implementing, and overseeing incentive structures that maximize the visibility and usability of the research they fund.

A number of prominent funders have committed to taking steps to implement the Incentivization Blueprint. Among them are the following: …”

Open Research Funders Group (ORFG) | DORA

“The ORFG released guidance for funders called Incentivizing the sharing of research outputs through research assessment: a funder implementation blueprint. The group created the document to assist funders in encouraging researchers to maximize the impact of their work by openly sharing research outputs. The blueprint identifies three goals to be successful:

change the perception that publication in high-impact journals is the only metric that counts;
provide demonstrable evidence that, while journal articles are important, we value and reward all types of research outputs; and
ensure that indicators like the venue of publication or journal impact factor are not used as surrogate measures of quality in researcher assessment.

To do this, the blueprint provides three steps with concrete actions for funders: 1) policy development and declarations, 2) implementation, and 3) engagement. Template language for funders is included in the document to promote easy uptake….”

Indonesia should stop pushing its academics to chase empty indicators – Nikkei Asia

“An assessment system that predominantly evaluates research performance based on journal output and citations is steering academics from developing countries like mine to chasing quantity over quality. And being exploited while doing so.

Researchers in Indonesia are the second most likely in the world to publish in dubious journals that print articles for a fee without proper scientific peer review (the process in which several experts in the field review the merit of the research), according to a new study by economists Vit Machacek and Martin Srholec.

These predatory journals prey on academics whose career progression, and therefore salary increases, are determined by credit points. They exploit the processing fees that authors pay to make articles open to the public. They pocket the payment, an average of $178 (an amount close to the basic salary of an entry-level lecturer at a state university in Indonesia), without facilitating proper peer review. The papers published by predatory journals are often low-quality, with typographical and grammatical errors….

In addition to the predatory journal problem, the metric also discourages scientific collaboration. As the metric values article count, academics who want to turn out several journal articles from a data set have an incentive to hold on to the data rather than sharing it for other scientists to analyze….”