Impact factor abandoned by Dutch university in hiring and promotion decisions

“A Dutch university says it is formally abandoning the impact factor — a standard measure of scientific success — in all hiring and promotion decisions. By early 2022, every department at Utrecht University in the Netherlands will judge its scholars by other standards, including their commitment to teamwork and their efforts to promote open science, says Paul Boselie, a governance researcher and the project leader for the university’s new Recognition and Rewards scheme. “Impact factors don’t really reflect the quality of an individual researcher or academic,” he says. “We have a strong belief that something has to change, and abandoning the impact factor is one of those changes.” …”

Incorporating Preprints into Academic Assessment | DORA

“Join DORA and ASAPbio on Tuesday, June 29, for a joint webinar on preprints and academic assessment….

Speakers will discuss important topics surrounding the incorporation of preprints into academic assessment: the value of considering preprints in academic assessment, how preprints can be included in existing assessment processes, and what challenges may arise along the way. Participants will have the opportunity to engage in the dialogue and ask questions of the speakers in the last section of the webinar.

This webinar is free to attend and open to everyone interested in improving research assessment. In particular, this webinar will aim to equip early career researchers, faculty, and academic leadership with the knowledge to advocate for the use of preprints at their institutions.”

Game over: empower early career researchers to improve research quality

Abstract:  Processes of research evaluation are coming under increasing scrutiny, with detractors arguing that they have adverse effects on research quality, and that they support a research culture of competition to the detriment of collaboration. Based on three personal perspectives, we consider how current systems of research evaluation lock early career researchers and their supervisors into practices that are deemed necessary to progress academic careers within the current evaluation frameworks. We reflect on the main areas in which changes would enable better research practices to evolve; many align with open science. In particular, we suggest a systemic approach to research evaluation, taking into account its connections to the mechanisms of financial support for the institutions of research and higher education in the broader landscape. We call for more dialogue in the academic world around these issues and believe that empowering early career researchers is key to improving research quality.

Job Opening: DORA Program Manager | DORA

“Since 2013, more than 19,000 individuals and organizations in 145 countries have signed the San Francisco Declaration on Research Assessment (DORA) and committed to improving the ways research and researchers are assessed for hiring, promotion, and funding decisions. As an initiative, DORA raises awareness and facilitates the implementation of good practice in research assessment to catalyze change and improve equity in academia. As the DORA Program Manager, you will be working with the Program Director to ensure the execution of DORA activities and projects. The DORA Program Manager assists with, plans, initiates, and executes various projects, ensuring that goals are met and that projects are completed on time and within budget. …”

Résumé for Researchers | Royal Society

“This module can be used to explain how you have contributed to the generation of new ideas and hypotheses and which key skills you have used to develop ideas and test hypotheses. It can be used to highlight how you have communicated your ideas and research results, both in writing and verbally, the funding you have won and any awards that you have received. It can include a small selection of outputs, with a description of why they are of particular relevance and why they are considered in the context of knowledge generation. Outputs can include open data sets, software, publications, commercial, entrepreneurial or industrial products, clinical practice developments, educational products, policy publications, evidence synthesis pieces and conference publications that you have generated….”

Blockchain for scholarly journal evaluation: Potential and prospects – Wang – Learned Publishing – Wiley Online Library

“Key points

Blockchain shows potential for supporting the multidimensional evaluation of scholarly journals.
Blockchain-based scholarly communications will generate new data, which may be used for evaluating aspects of journals that are currently not fully evaluated.
Blockchain can help enrich journal evaluation by extending the evaluation content upstream of journal publishing and by increasing the economic dimensions of journal usage.
Blockchain-based scholarly journal evaluation would be more automated, more open, and more verifiable to the community….”

Industry not harvest: Principles to minimise collateral damage in impact assessment at scale | Impact of Social Sciences

“As the UK closes the curtains on the Research Excellence Framework 2021 (REF2021) and embarks on another round of consultation, there is little doubt that, whatever the outcome, the expectation remains that research should be shown to be delivering impact. If anything, this expectation is only intensifying. Fuelled by the stated success of REF 2014, the appetite for impact assessment also appears – at least superficially – to be increasing internationally, albeit largely stopping short of mirroring a fully formalised REF-type model. Within this context, the UK’s Future Research Assessment Programme was recently announced, with a remit to explore revised or alternative approaches. Everything is on the table, so we are told, and the programme sensibly includes the convening of an external body of international advisors to cast their, hopefully less jaded, eyes upon proceedings….”

SPACE to evolve academic assessment: A rubric for analyzing institutional conditions and progress indicators | DORA

“This is part of DORA’s toolkit of resources to support academic institutions that are improving their policies and practices. Find the other resources in the toolkit here.

Improving research and scholarship assessment practices requires the ability to analyze the outcomes of efforts and interventions. However, when conducted only at the unit level of individual interventions, these evaluations and reflections miss opportunities to understand how institutional conditions themselves set the table for the success of new efforts, or how developing institutional capabilities might improve the effectiveness and impact of these new practices at greater scale. The SPACE rubric was developed to help institutions at any stage of academic assessment reform gauge their institutional ability to support interventions and set them up for success.

Organizations can use the SPACE rubric to support the implementation of fair and responsible academic career assessment practices in two ways: First, it can help establish a baseline for the current state of infrastructural conditions, to gauge an institution’s ability to support the development and implementation of new academic assessment practices and activities. Second, the rubric can be used to retroactively analyze how strengths or gaps in these institutional conditions may have impacted the outcomes of concrete interventions targeted to specific types of academic assessment activities—such as hiring, promotion, tenure, or even graduate student evaluation—either helping or hindering progress toward those goals.

The SPACE rubric is a result of DORA’s partnership with Ruth Schmidt, Associate Professor at the Institute of Design of the Illinois Institute of Technology, who led the iterative participatory design process. The creation of the rubric was informed by nearly 75 individuals in 26 countries on 6 continents, and benefited from multiple rounds of feedback….”

G7 Research Compact

As Open Societies with democratic values, we believe in academic freedom. The freedom to pursue intellectual enquiry and to innovate allows us to make progress on shared issues and drive forward the frontiers of knowledge and discovery for the benefit of the entire world. We recognise that research and innovation are fundamentally global endeavours. Nations, citizens, institutions, and businesses have made huge strides forward, not otherwise possible, through open research collaboration across borders. Working together we will use our position as leading science nations to collaborate on global challenges, increase the transparency and integrity of research, and facilitate data free flow with trust to drive innovation and advance knowledge.

Statement on the Scholarly Merit and Evaluation of Open Scholarship in Linguistics | Linguistic Society of America

“Open Scholarship can be a key component of a scholar’s portfolio in a number of situations, including but not limited to hiring, review, promotion, and awards. Because Open Scholarship can take many forms, evaluation of this work may need different tools and approaches from those used for publications like journal articles and books. In particular, citation counts, a common tool for evaluating publications, are not available for some kinds of Open Scholarship in the same form or from the same providers as they are for publications. Here we share recommendations on how to assess the use of Open Scholarship materials with measures including and beyond citations, covering both materials that have formal peer review and those that do not.

For tenure & promotion committees, program managers, department chairs, hiring committees, and others tasked with evaluating Open Scholarship, NASEM has prepared a discipline-agnostic rubric that can be used as part of hiring, review, or promotion processes. Outside letters of evaluation can also provide insight into the significance and impact of Open Scholarship work. Psychologist Brian Nosek (2017) provides some insight into how a letter writer can evaluate Open Scholarship, and includes several ways that evaluation committees can ask for input specifically about contributions to Open Scholarship. Nosek suggests that letter writers and evaluators comment on ways that individuals have contributed to Open Scholarship through “infrastructure, service, metascience, social media leadership, and their own research practices.” We add that using Open Scholarship in the classroom, whether through open educational materials, open pedagogy, or teaching of Open Scholarship principles, should be included in this list. Evaluators can explicitly ask for these insights in requests to letter writers, for example by including the request to “Please describe the impact that [scholar name]’s openly available research outputs have had from the research, public policy, pedagogic, and/or societal perspectives.” These evaluations can be particularly important when research outputs are not formally peer reviewed.

For scholars preparing hiring, review, promotion, or other portfolios that include Open Scholarship, we recommend not only discussing the Open Scholarship itself, but also its documented and potential impacts on both the academic community as well as broader society. Many repositories housing Open Scholarship materials provide additional metrics such as views, downloads, comments, and forks (or reuse cases) alongside citations in published literature. The use and mention of material with a Digital Object Identifier (DOI) can be tracked using tools such as ImpactStory, Altmetric.com, and other alternative metrics. To aid with evaluation of this work, the creator should share these metrics where available, along with any other qualitative indicators (such as personal thank-yous, reuse stories, or online write-ups) that can give evaluators a sense of the impact of their work. The Metrics Toolkit provides examples and use cases for these kinds of metrics. This is of potential value when peer review of these materials may not take the same form as with published journals or books; thoughtful use and interpretation of metrics can help evaluators understand the impact and importance of the work.

The Linguistic Society of America reaffirms its commitment to fair review of Open Scholarship in hiring, tenure, and promotion, endorses all of these approaches to peer review and evaluation of Open Scholarship, and encourages scholars, departments, and personnel committees to take them into careful consideration and implement language about Open Scholarship in their evaluation processes.”
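As a concrete illustration of the DOI-based tracking the LSA statement mentions, the short Python sketch below queries Altmetric’s free public REST API for a single DOI. This is a minimal sketch, assuming the documented api.altmetric.com/v1/doi/ endpoint and its standard response fields (score, cited_by_tweeters_count, cited_by_msm_count); the DOI shown is only a placeholder example, and tools like ImpactStory would serve a similar purpose.

```python
import json
import urllib.error
import urllib.request


def fetch_altmetric_summary(doi: str):
    """Fetch attention metrics for one DOI from Altmetric's public API.

    Returns the parsed JSON record, or None when Altmetric has no data
    for the DOI (the endpoint answers 404 in that case).
    """
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # DOI not tracked by Altmetric
            return None
        raise


if __name__ == "__main__":
    record = fetch_altmetric_summary("10.1038/nature12373")  # placeholder DOI
    if record is None:
        print("No Altmetric data for this DOI.")
    else:
        print("Altmetric score:", record.get("score"))
        print("Tweets:", record.get("cited_by_tweeters_count"))
        print("News stories:", record.get("cited_by_msm_count"))
```

As the statement itself cautions, such counts are indicators to be interpreted alongside qualitative evidence like reuse stories and write-ups, not scores to be ranked; the Metrics Toolkit it cites offers guidance on appropriate use.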

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year the working group on Recognition and Rewards, part of the Open Science Programme, has reflected on and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”

Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Triggle et al. (2021) Requiem for impact factors and high publication charges

Chris R. Triggle, Ross MacDonald, David J. Triggle & Donald Grierson (2021) Requiem for impact factors and high publication charges, Accountability in Research, DOI: 10.1080/08989621.2021.1909481

Abstract: Journal impact factors, publication charges and assessment of quality and accuracy of scientific research are critical for researchers, managers, funders, policy makers, and society. Editors and publishers compete for impact factor rankings, to demonstrate how important their journals are, and researchers strive to publish in perceived top journals, despite high publication and access charges. This raises questions of how top journals are identified, whether assessments of impacts are accurate and whether high publication charges borne by the research community are justified, bearing in mind that they also collectively provide free peer review to the publishers. Although traditional journals accelerated peer review and publication during the COVID-19 pandemic, preprint servers made a greater impact with over 30,000 open access articles becoming available and accelerating a trend already seen in other fields of research. We review and comment on the advantages and disadvantages of a range of assessment methods and the way in which they are used by researchers, managers, employers and publishers. We argue that new approaches to assessment are required to provide a realistic and comprehensive measure of the value of research and journals and we support open access publishing at a modest, affordable price to benefit research producers and consumers.