Carrots and Sticks: A Qualitative Study of Library Responses to the UK’s Research Excellence Framework (REF) 2021 Open Access Policy | DeSanto | College & Research Libraries

Abstract:  This study examines how academic libraries in the UK responded to the Research Excellence Framework (REF) 2021 open access policy. Thirteen information professionals at twelve institutions across the UK took part in semi-structured interviews. Findings from the interviews reveal how libraries created and deployed new infrastructures, workflows, and staffing as well as the methods through which universities communicated the policy’s requirements. The study describes respondents’ experiences of the changes brought about by REF 2021 as well as their thoughts on how the REF 2021 open access policy will affect future REF assessments. Results provide insight for libraries responding to US initiatives such as the August 2022 White House Office of Science and Technology Policy memo directing the open publishing of federally funded research.

Overemphasis on publications may disadvantage historically excluded groups in STEM before and during COVID-19: A North American survey-based study | PLOS ONE

Abstract:  Publishing is a strong determinant of academic success and there is compelling evidence that identity may influence the academic writing experience and writing output. However, studies rarely quantitatively assess the effects of major life upheavals on trainee writing. The COVID-19 pandemic introduced unprecedented life disruptions that may have disproportionately impacted different demographics of trainees. We analyzed anonymous survey responses from 342 North American environmental biology graduate students and postdoctoral scholars (hereafter trainees) about scientific writing experiences to assess: (1) how identity interacts with scholarly publication totals and (2) how the COVID-19 pandemic influenced trainee perceptions of scholarly writing productivity and whether there were differences among identities. Interestingly, identity had a strong influence on publication totals, but it differed by career stage, with graduate students and postdoctoral scholars often having opposite results. We found that trainees identifying as female and those with chronic health conditions or disabilities lag in publication output at some point during training. Additionally, although trainees felt they had more time during the pandemic to write, they reported less productivity and motivation. Trainees who identified as female; Black, Indigenous, or as a Person of Color [BIPOC]; and as first-generation college graduates were much more likely to indicate that the pandemic affected their writing. Disparities in the pandemic’s impact on writing were most pronounced for BIPOC respondents; a striking 85% of BIPOC trainees reported that the pandemic affected their writing habits, and overwhelmingly felt unproductive and unmotivated to write. Our results suggest that the disproportionate impact of the pandemic on writing output may only heighten the negative effects commonly reported amongst historically excluded trainees. Based on our findings, we encourage the academy to consider how an overemphasis on publication output during hiring may affect historically excluded groups in STEM—especially in a post-COVID-19 era.

The SCOPE framework – implementing the ideals of responsible research assessment

Abstract:  Background: Research and researchers are heavily evaluated, and over the past decade it has become apparent that the consequences of evaluating the research enterprise, and particularly individual researchers, are considerable. This has resulted in the publication of several guidelines and principles to support moving towards more responsible research assessment (RRA). To ensure that research evaluation is meaningful, responsible, and effective, the International Network of Research Management Societies (INORMS) Research Evaluation Group created the SCOPE framework, enabling evaluators to deliver on existing principles of RRA. SCOPE bridges the gap between principles and their implementation by providing a structured five-stage framework by which evaluations can be designed and implemented, as well as evaluated.

Methods: SCOPE is a step-by-step process designed to help plan, design, and conduct research evaluations, as well as to check the effectiveness of existing evaluations. In this article, four case studies are presented to show how SCOPE has been used in practice to provide value-based research evaluation.

Results: This article situates SCOPE within the international work towards more meaningful and robust research evaluation practices and shows through the four case studies how it can be used by different organisations to develop evaluations at different levels of granularity and in different settings.

Conclusions: The article demonstrates that the SCOPE framework is rooted firmly in the existing literature. In addition, it is argued that SCOPE does not simply translate existing principles of RRA into practice but provides additional considerations not always addressed in existing RRA principles and practices, thus playing a specific role in the delivery of RRA. Furthermore, the use cases show the value of SCOPE across a range of settings, including different institutional types, sizes, and missions.

Research(er) assessment that considers open science – Leiden Madtrics

“When relating this ambiguity to research(er) assessment, it becomes evident that the proliferation of open science practices challenges the monochromatic properties of publication-driven research(er) assessments. This urges research assessment to be reconsidered in the light of open science. Luckily, there has been much work on making research(er) assessment more polychromatic. This has been spearheaded by initiatives such as DORA, has stabilised with the formulation of the Hong Kong Principles and has recently been institutionalised in the Coalition for Advancing Research Assessment (CoARA), a massive undertaking that supports the adoption of responsible research assessment practices across knowledge-producing organisations….

Indeed, there is ongoing work to illuminate these complexities in the form of various European Commission-funded projects. How research(er) assessment that considers open science can play out in practice is being researched by GraspOS, a project concerned with how infrastructures afford the uptake of research assessment that values open science. The project OPUS focuses on indicators for the assessment of researchers in the context of open science. PathOS tries to better understand the impacts of open science. Similarly, the SUPER MoRRI project developed an evaluation and monitoring framework for responsible research and innovation in Europe that, in many ways, relates to reconfigurations in knowledge production as posited under the banner of open science as well….

We hope to hear stories about issues, frustrations and successes of research assessment in relation to open science. Finally, the goal of this community is to create a bouquet of stories from which we can learn and draw inspiration for our own research assessments. If this is something of interest to you, please feel free to register here and share this post.”

Transforming Research Assessment for an Equitable Scientific Culture | Septentrio Conference Series

Abstract:  Science plays a pivotal role in the advancement of democratic societies, and there is a growing consensus advocating for its recognition as both a common good and a fundamental human right. To effectively fulfil this role, science necessitates the trust of society, the support of policy makers, and robust international collaboration, enabling the mobility of researchers and the free flow of knowledge. To encourage this, our responsibilities as researchers extend beyond the realm of academic publishing. They encompass science outreach, education, diplomacy, policy advocacy, entrepreneurship, and collaborations aimed at addressing global challenges or progressing towards more equitable societies. However, this is hampered by current research assessment practices and the academic reward system, which perpetuate a ‘publish or perish’ research culture that confines the scope of science to academic publishing, fosters privilege-based biases, and prioritises quantity over quality, as well as prestige over integrity. During this talk, I will share my personal journey as an early career researcher from the Global South, now affiliated with one of the most innovative research labs worldwide. My research journey, which was enabled by securing highly competitive funding since the early stages of my career, provided me with first-hand insight into the biases and repercussions of current research assessment practices on the trajectories of researchers. Further validating this perspective is a ground-breaking study I co-led with colleagues from the Global Young Academy, exploring research assessment for career advancement on a global scale. This study shows that research institutions worldwide heavily rely on bibliometrics to evaluate career progression, irrespective of the academic discipline. However, while more established institutions appear to be walking away from these practices, they are becoming more popular in emerging research institutions in low- and middle-income countries. These findings highlight the need for transformational global (inclusive) initiatives. I am privileged to be part of one such initiative – the Coalition for Advancing Research Assessment (CoARA). CoARA brings together a community of researchers and research enablers dedicated to reforming this perilous research culture. CoARA’s guiding principles centre on acknowledging the diversity of contributions and careers in science, shifting research evaluation towards qualitative aspects where research ethics and integrity are at the core, and recognizing that excellence is context-dependent, varying for each candidate, role, and project. A standout feature of CoARA is its unwavering commitment to early career researchers, placing them at the heart of its principles, governance, structures, and interventions, thus ensuring that the future generation of scientific leaders is well-equipped to navigate and transform the landscape of research assessment and scientific culture.

Global movement to reform researcher assessment gains traction | Physics Today | AIP Publishing

“A growing global movement toward holistic approaches to evaluating researchers and research aims to value a broader range of contributions than an institute’s reputation and such metrics as numbers of publications in high-impact journals, citations, and grant monies. Contributions that go largely unrewarded include committee service, outreach to the public and to policymakers, social impact, and entrepreneurship.

An early push was the San Francisco Declaration on Research Assessment in 2013. DORA has grown into a worldwide initiative for which reducing the emphasis on journal impact factor has been a “hobbyhorse,” says program director Zen Faulkes. “But we are broadening our efforts in assessment reform.” As of September, more than 20,000 individuals and about 3000 organizations in 164 countries had signed DORA.

A related effort spearheaded by the European Commission, the European University Association, and Science Europe—an association of funding agencies that spends more than €22 billion (roughly $24 billion) annually—is widely seen as having the most punch. In July 2022 they laid out guiding principles for reform, and in December 2022 they established the Coalition for Advancing Research Assessment (CoARA). More than 600 universities, funders, learned societies, and other organizations, overwhelmingly in Europe, had signed on as of late August. Signatories commit to examining their research assessment procedures within a year and to trying out and reporting on alternative approaches within five years….”

Frontiers | Editorial: Linked Open Bibliographic Data for Real-time Research Assessment

“Despite the value of open bibliographic resources, they can involve inconsistencies that should be solved for better accuracy. As an example, OpenCitations mistakenly includes 1370 self-citations and 1498 symmetric citations as of April 30, 2022. They can also involve several biases that can provide a distorted mirror of the research efforts across the world (Martín-Martín, Thelwall, Orduna-Malea, & Delgado López-Cózar, 2021). That is why these databases need to be enhanced from the perspective of data modeling, data collection, and data reuse. This goes in line with the current perspective of the European Union on reforming research assessment (CoARA, 2022). In this topical collection, we are honored to feature novel research works in the context of allowing the automatic generation of real-time research assessment reports based on open bibliographic resources. We are happy to host research efforts emphasizing the importance of open research data as a basis for transparent and responsible research assessment, assessing the data quality of open resources to be used in real-time research evaluation, and providing implementations of how online databases can be combined to feed dashboards for real-time scholarly assessment….”
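
To make the data-quality point concrete, here is a minimal sketch of the kind of consistency check the editorial describes, run over a citation edge list of (citing, cited) identifier pairs. The function name, edge-list layout, and sample DOIs are illustrative assumptions, not OpenCitations' actual interface.

```python
# Minimal sketch: flag self-citations (a work citing itself) and
# symmetric citations (A cites B and B also cites A) in a citation
# edge list. The sample DOIs below are hypothetical.

from typing import List, Tuple

def audit_citations(edges: List[Tuple[str, str]]):
    """Return (self_citations, symmetric_pairs) from (citing, cited) edges."""
    edge_set = set(edges)
    self_citations = sorted((a, b) for (a, b) in edge_set if a == b)
    # A symmetric pair exists when both directions appear in the data.
    symmetric_pairs = sorted(
        {tuple(sorted((a, b))) for (a, b) in edge_set
         if a != b and (b, a) in edge_set}
    )
    return self_citations, symmetric_pairs

edges = [
    ("10.1000/alpha", "10.1000/beta"),
    ("10.1000/beta", "10.1000/alpha"),   # symmetric with the edge above
    ("10.1000/gamma", "10.1000/gamma"),  # self-citation
]
selfs, symmetric = audit_citations(edges)
print(f"{len(selfs)} self-citation(s), {len(symmetric)} symmetric pair(s)")
```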

REPORT: Best Practices for Institutional Publishing Service Providers – DIAMAS

“DIAMAS plans to improve Open Access publishing practices. To do so, we will create the Extensible Quality Standard for Institutional Publishing (EQSIP), which aims to ensure the quality and transparency of governance, processes, and workflows in institutional publishing. The Best Practices report is an initial step in this process.

The report is based on an analysis of existing quality evaluation criteria, best practices, and assessment systems in publishing developed by international publishers’ associations, research funding organisations, international indexing databases, etc. (full dataset available here). If you are an institutional publisher, a service provider involved in Open Access publishing, or a journal editor, this report can help you learn about current best practices and identify where you need to align.

Our recommendations and tips cover seven categories, which are also the core components of the Extensible Quality Standard for Institutional Publishing (EQSIP): 1) Funding; 2) Ownership and governance; 3) Open science practices; 4) Editorial quality, editorial management, and research integrity; 5) Technical service efficiency; 6) Visibility; and 7) Equity, Diversity and Inclusion (EDI).

A self-assessment checklist summarises the best practices outlined in the report. Institutional publishers, service providers, and journal editors can use it to get an idea of the future Extensible Quality Standard for Institutional Publishing (EQSIP), assess their current practices, and see where to make improvements.”
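
As an illustration of how such a checklist could be tracked internally, the sketch below encodes the seven EQSIP categories as a simple self-assessment structure. Only the category names come from the report quoted above; the yes/no scoring scheme and the code itself are assumptions, not part of DIAMAS's actual checklist.

```python
# Hypothetical self-assessment sketch over the seven EQSIP categories.
# The category names come from the DIAMAS report; the boolean scoring
# is an illustrative assumption.

EQSIP_CATEGORIES = [
    "Funding",
    "Ownership and governance",
    "Open science practices",
    "Editorial quality, editorial management, and research integrity",
    "Technical service efficiency",
    "Visibility",
    "Equity, Diversity and Inclusion (EDI)",
]

def summarise(responses: dict) -> None:
    """Print whether each category is marked as met or still needs work."""
    for category in EQSIP_CATEGORIES:
        status = "met" if responses.get(category) else "needs improvement"
        print(f"{category}: {status}")

summarise({"Funding": True, "Open science practices": True})
```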

Documentation, Working Groups, and National Chapters [CoARA]

“Working Groups are central to CoARA’s mission to enable systemic reform of research assessment. Based on a bottom-up approach with members’ voluntary involvement, Working Groups operate as ‘communities of practice’, providing mutual learning and collaboration on specific thematic areas. Participating members exchange knowledge, learn from each other’s experience, and discuss and develop outputs to advance research assessment and support the implementation of members’ commitments.

Please note that for now Working Groups are open to CoARA members only.

If you would like to join the Coalition as a member, please follow the link here to sign up. If you are already a signatory and would like to become a member, please contact the CoARA secretariat….”

Community of Practice on Open Science and Responsible Research Assessment – GraspOS

“In bimonthly conversations, research funders, research managers, researchers and anyone interested in these topics come together to discuss the multiple ways in which research assessment considers Open Science. Our guests tell stories about issues, frustrations and the successes of research assessment in relation to Open Science. The goal of this community is to create a bouquet of stories of translation from which we can learn and draw inspiration for our own research assessments.”

Interventions in scholarly communication: Design lessons from public health | First Monday

Abstract:  Many argue that swift and fundamental interventions in the system of scholarly communication are needed. However, there are substantial disagreements over the short- and long-term benefits of most proposed approaches to changing the practice of science communication, and the lack of systematic, empirically based research in this area makes these controversies difficult to resolve. We argue that experience within public health can be usefully applied to scholarly communication. Starting with the history of DDT (Dichlorodiphenyltrichloroethane) application, we illustrate four ways complex human systems threaten reliable predictions and blunt ad-hoc interventions. We then show how these apply to interventions in scholarly publication – open access based on the article processing charge (APC), and preprints – to yield surprising results. Finally, we offer approaches to help guide the design of future interventions: identifying measures and outcomes, developing infrastructure, incorporating assessment, and contributing to theories of systemic change.

Statement of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) on European Council Conclusions on “High-quality, transparent, open, trustworthy and equitable scholarly publishing”

“In particular, the DFG underpins the propositions that scholarly publication channels

- should continue to evolve as high-quality, openly accessible, sustainably funded digital infrastructures for research;
- should be organised in such a way that they protect the principles of the freedom of research, contribute to research integrity and quality, and ensure the highest possible accessibility and re-usability of research results;
- must apply the highest standards to the quality assurance of publications, the trustworthiness of processes and the reliability and reproducibility of content;
- should make even more effective use of the innovative possibilities of digital publishing…”

DFG, German Research Foundation – DFG welcomes EU Council Conclusions on Scholarly Publishing

“The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) welcomes the Council Conclusions on scholarly publishing adopted today, Tuesday 23 May 2023, by the Competitiveness Council of the European Union.

In the opinion of the largest research funding organisation and central self-governing body of the research community in Germany, the conclusions adopted under the title “On high-quality, transparent, open, trustworthy and equitable scholarly publishing” contain a series of trend-setting recommendations. These are commented on in detail in a statement issued simultaneously by the DFG.

The DFG underlines that the academic publication system should continue to develop based on high-quality, openly accessible, sustainably funded digital infrastructures for research. It must be organised in such a way that the principles of the freedom of research are protected, scientific integrity and quality are guaranteed and the accessibility and re-usability of research results are enabled….”

AI and Publishing: Moving forward requires looking backward – TL;DR – Digital Science

“One area where the use of generative AI to write research papers poses a serious challenge is open research. The proprietary nature of the most widely used tools means the underlying model used is not available for independent inspection or verification. This lack of disclosure of the material on which the model has trained “threatens hard-won progress on research ethics and the reproducibility of results”.

In addition, the current inability of AI to document the provenance of its data sources through citation, and the lack of identifiers for those data sources, mean there is no way to replicate the ‘findings’ that have been generated by AI. This has raised calls for the development of a formal specification or standard for AI documentation that is backed by a robust data model. Our current publishing environment does not prioritise reproducibility, with code sharing optional and a slow uptake of requirements to share data. In this environment, the generation of fake data is of particular concern. However, ChatGPT “is not the creator of these issues; it instead enables this problem to exist at a much larger scale”.

And that leads me to my provocation – In the same way that a decade ago, open access was a scapegoat for scholarly communication*, now generative AI is a scapegoat for the research assessment system. Let me explain….”
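
The post's call for an AI documentation standard "backed by a robust data model" can be made concrete with a small sketch of what one provenance record for a training-data source might contain. Every field name here is an illustrative assumption of ours, not an existing specification.

```python
# Hypothetical sketch of a machine-readable provenance record for one
# training-data source, of the kind a formal AI documentation standard
# might mandate. All field names are illustrative assumptions.

from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingSourceRecord:
    source_id: str   # persistent identifier (e.g. a DOI), if one exists
    title: str       # human-readable name of the source
    licence: str     # terms under which the text was used for training
    retrieved: str   # ISO 8601 date the snapshot was taken
    checksum: str    # hash of the snapshot, so inputs can be verified later

record = TrainingSourceRecord(
    source_id="10.1000/example",   # hypothetical DOI
    title="An example corpus document",
    licence="CC-BY-4.0",
    retrieved="2023-05-01",
    checksum="sha256:placeholder",
)
print(json.dumps(asdict(record), indent=2))
```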

Evaluation of researchers in action: Updates from UKRI and a discussion on the utility of CRediT | DORA

“Funding organizations play a key role in setting the tone for evaluation standards and practices. In recent years, an increasing number of funders have shifted their evaluation practices away from an outsized focus on quantitative metrics (e.g., H-index, journal impact factors, etc.) as proxy measures of quality and towards more holistic means of evaluating applicants. At one of DORA’s March Funder Community of Practice (CoP) meetings, we heard how UK Research and Innovation (UKRI) has implemented narrative-style CVs for choosing promising research and innovation talent. At the second March Funder CoP meeting, we held a discussion with Alex Holcombe, co-creator of the Tenzing tool, about how the movement to acknowledge authors for the broad range of roles they play in contributing to a research project could also be applied to help funders in decision-making processes….”
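
As a concrete illustration of the contributorship idea discussed in that meeting, the sketch below renders a CRediT-style contribution statement from an author-to-roles mapping. The role names come from the CRediT taxonomy; the authors, the role assignments, and the code itself are hypothetical, and this is not the Tenzing tool's actual implementation.

```python
# Minimal sketch: build a CRediT-style contribution statement from an
# author -> roles mapping. Role names follow the CRediT taxonomy; the
# authors and assignments below are invented for illustration.

authors = {
    "A. Author": ["Conceptualization", "Writing - original draft"],
    "B. Author": ["Software", "Formal analysis"],
    "C. Author": ["Supervision", "Funding acquisition"],
}

# Invert to role -> contributors so each role lists everyone who held it.
by_role: dict = {}
for name, roles in authors.items():
    for role in roles:
        by_role.setdefault(role, []).append(name)

for role in sorted(by_role):
    print(f"{role}: {', '.join(by_role[role])}")
```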