Industry not harvest: Principles to minimise collateral damage in impact assessment at scale | Impact of Social Sciences

“As the UK closes the curtains on the Research Excellence Framework 2021 (REF2021) and embarks on another round of consultation, there is little doubt that, whatever the outcome, the expectation remains that research should be shown to be delivering impact. If anything, this expectation is only intensifying. Fuelled by the stated success of REF 2014, the appetite for impact assessment also appears – at least superficially – to be increasing internationally, albeit largely stopping short of mirroring a fully formalised REF-type model. Within this context, the UK’s Future Research Assessment Programme was recently announced, with a remit to explore revised or alternative approaches. Everything is on the table, so we are told, and the programme sensibly includes the convening of an external body of international advisors to cast their, hopefully less jaded, eyes upon proceedings….”

SPACE to evolve academic assessment: A rubric for analyzing institutional conditions and progress indicators | DORA

“This is part of DORA’s toolkit of resources to support academic institutions that are improving their policies and practices. Find the other resources in the toolkit here.

Improving research and scholarship assessment practices requires the ability to analyze the outcomes of efforts and interventions. However, when conducted only at the unit level of individual interventions, these evaluations and reflections miss opportunities to understand how institutional conditions themselves set the table for the success of new efforts, or how developing institutional capabilities might improve the effectiveness and impact of these new practices at greater scale. The SPACE rubric was developed to help institutions at any stage of academic assessment reform gauge their institutional ability to support interventions and set them up for success.

Organizations can use the SPACE rubric to support the implementation of fair and responsible academic career assessment practices in two ways: First, it can help establish a baseline for the current state of infrastructural conditions, to gauge an institution’s ability to support the development and implementation of new academic assessment practices and activities. Second, the rubric can be used to retroactively analyze how strengths or gaps in these institutional conditions may have impacted the outcomes of concrete interventions targeted to specific types of academic assessment activities—such as hiring, promotion, tenure, or even graduate student evaluation—either helping or hindering progress toward those goals.

The SPACE rubric is a result of DORA’s partnership with Ruth Schmidt, Associate Professor at the Institute of Design of the Illinois Institute of Technology, who led the iterative participatory design process. The creation of the rubric was informed by nearly 75 individuals in 26 countries and 6 continents, and benefited from multiple rounds of feedback….”

G7 Research Compact

As Open Societies with democratic values we believe in academic freedom. The freedom to pursue intellectual enquiry and to innovate allows us to make progress on shared issues and drive forward the frontiers of knowledge and discovery for the benefit of the entire world. We recognise that research and innovation are fundamentally global endeavours. Nations, citizens, institutions, and businesses have made huge strides forward, not otherwise possible, through open research collaboration across borders. Working together we will use our position as leading science nations to collaborate on global challenges, increase the transparency and integrity of research, and facilitate data free flow with trust to drive innovation and advance knowledge.

Statement on the Scholarly Merit and Evaluation of Open Scholarship in Linguistics | Linguistic Society of America

“Open Scholarship can be a key component of a scholar’s portfolio in a number of situations, including but not limited to hiring, review, promotion, and awards. Because Open Scholarship can take many forms, evaluating this work may need different tools and approaches than those used for publications like journal articles and books. In particular, citation counts, a common tool for evaluating publications, are not available for some kinds of Open Scholarship in the same form or from the same providers as they are for publications. Here we share recommendations on how to assess the use of Open Scholarship materials, including and beyond citations, covering both materials that have formal peer review and those that do not.

For tenure & promotion committees, program managers, department chairs, hiring committees, and others tasked with evaluating Open Scholarship, NASEM has prepared a discipline-agnostic rubric that can be used as part of hiring, review, or promotion processes. Outside letters of evaluation can also provide insight into the significance and impact of Open Scholarship work. Psychologist Brian Nosek (2017) provides some insight into how a letter writer can evaluate Open Scholarship, and includes several ways that evaluation committees can ask for input specifically about contributions to Open Scholarship. Nosek suggests that letter writers and evaluators comment on ways that individuals have contributed to Open Scholarship through “infrastructure, service, metascience, social media leadership, and their own research practices.” We add that using Open Scholarship in the classroom, whether through open educational materials, open pedagogy, or teaching of Open Scholarship principles, should be included in this list. Evaluators can explicitly ask for these insights in requests to letter writers, for example by including the request to “Please describe the impact that [scholar name]’s openly available research outputs have had from the research, public policy, pedagogic, and/or societal perspectives.” These evaluations can be particularly important when research outputs are not formally peer reviewed.

For scholars preparing hiring, review, promotion, or other portfolios that include Open Scholarship, we recommend not only discussing the Open Scholarship itself, but also its documented and potential impacts on both the academic community as well as broader society. Many repositories housing Open Scholarship materials provide additional metrics such as views, downloads, comments, and forks (or reuse cases) alongside citations in published literature. The use and mention of material with a Digital Object Identifier (DOI) can be tracked using tools such as ImpactStory, Altmetric.com, and other alternative metrics. To aid with evaluation of this work, the creator should share these metrics where available, along with any other qualitative indicators (such as personal thank-yous, reuse stories, or online write-ups) that can give evaluators a sense of the impact of their work. The Metrics Toolkit provides examples and use cases for these kinds of metrics. This is of potential value when peer review of these materials may not take the same form as with published journals or books; thoughtful use and interpretation of metrics can help evaluators understand the impact and importance of the work.
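
The paragraph above mentions tracking the use and mention of DOI-tagged materials through services such as Altmetric.com. As a rough illustration of what that looks like in practice, the sketch below queries Altmetric’s publicly documented Details Page API for a single DOI; the example DOI is a placeholder, and the exact fields returned should be checked against the current API documentation rather than taken as given here.

```python
# Minimal sketch (not from the LSA statement): pull headline attention
# metrics for one DOI from the public Altmetric Details Page API.
# The DOI below is a placeholder; real records may lack some fields.
import json
from urllib.request import urlopen
from urllib.error import HTTPError

def fetch_altmetric_summary(doi: str) -> dict:
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urlopen(url, timeout=10) as resp:
            record = json.load(resp)
    except HTTPError as err:
        if err.code == 404:      # no attention data recorded for this DOI
            return {}
        raise
    # Keep a few headline indicators; .get() guards against missing keys.
    return {
        "altmetric_score": record.get("score"),
        "posts": record.get("cited_by_posts_count"),
        "readers": record.get("readers_count"),
        "details_page": record.get("details_url"),
    }

if __name__ == "__main__":
    print(fetch_altmetric_summary("10.1234/example-doi"))  # placeholder DOI
```

Repository-level indicators such as views, downloads, or forks are not covered by this endpoint and would still need to be gathered from the hosting platform’s own dashboard or API, as the statement suggests.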

The Linguistic Society of America reaffirms its commitment to fair review of Open Scholarship in hiring, tenure, and promotion, endorses all of these approaches to peer review and evaluation of Open Scholarship, and encourages scholars, departments, and personnel committees to take them into careful consideration and implement language about Open Scholarship in their evaluation processes.”

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year, the working group on Recognition and Rewards, part of the Open Science Programme, has reflected on and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”

Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and call for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Triggle et al. (2021) Requiem for impact factors and high publication charges

Chris R. Triggle, Ross MacDonald, David J. Triggle & Donald Grierson (2021) Requiem for impact factors and high publication charges, Accountability in Research, DOI: 10.1080/08989621.2021.1909481

Abstract: Journal impact factors, publication charges and assessment of quality and accuracy of scientific research are critical for researchers, managers, funders, policy makers, and society. Editors and publishers compete for impact factor rankings, to demonstrate how important their journals are, and researchers strive to publish in perceived top journals, despite high publication and access charges. This raises questions of how top journals are identified, whether assessments of impacts are accurate and whether high publication charges borne by the research community are justified, bearing in mind that they also collectively provide free peer-review to the publishers. Although traditional journals accelerated peer review and publication during the COVID-19 pandemic, preprint servers made a greater impact with over 30,000 open access articles becoming available and accelerating a trend already seen in other fields of research. We review and comment on the advantages and disadvantages of a range of assessment methods and the way in which they are used by researchers, managers, employers and publishers. We argue that new approaches to assessment are required to provide a realistic and comprehensive measure of the value of research and journals and we support open access publishing at a modest, affordable price to benefit research producers and consumers.

Open access publishing is the ethical choice | Wonkhe

“I had a stroke half a decade ago and found I couldn’t access the medical literature on my extremely rare vascular condition.

I’m a capable reader, but I couldn’t get past the paywalls – which seemed absurd, given most research is publicly funded. While I had, already, long been an open access advocate by that point, this strengthened my resolve.

The public is often underestimated. Keeping research locked behind paywalls under the assumption that most people won’t be interested in, or capable of, reading academic research is patronising….

While this moral quandary should not be passed to young researchers, there may be benefits to them in taking a firm stance. Early career researchers are less likely to have grants to pay for article processing charges to make their work open access compared to their senior colleagues. Early career researchers are also the ones who are inadvertently paying the extortionate subscription fees to publishers. According to data from the Higher Education Statistics Agency (HESA), the amount of money UK universities fork out each year to access paywalled content from Elsevier – the largest academic publisher in the world – could pay 1,028 academic researchers a salary of £45,000 per year.
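
Taken at face value, the HESA-derived comparison above implies a total annual spend, and a quick back-of-the-envelope calculation makes that figure explicit (assuming the comparison is a simple multiplication of the two numbers quoted):

```python
# Back-of-the-envelope restatement of the salary comparison quoted above.
# Assumes a simple multiplication; the underlying HESA figures are not
# reproduced here.
salaries = 1_028        # number of academic salaries cited
salary_gbp = 45_000     # annual salary per researcher, in GBP
implied_spend = salaries * salary_gbp
print(f"Implied annual UK spend on Elsevier access: £{implied_spend:,}")
# -> Implied annual UK spend on Elsevier access: £46,260,000
```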

We know for-profit publishers, such as Elsevier, hold all the cards with respect to those prestigious titles. What we need are systematic “read and publish” deals that allow people to publish where they want without having to find funding for open access….

The current outlook for prospective researchers to secure an academic position at a university is compromised because so much money is spent propping up for-profit, commercial publishers. Rather than focusing on career damage to those who can’t publish with an Elsevier title, we should focus on the opportunity cost in hundreds of lost careers in academia….”

Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the research bureaucracy’s erroneous practice of evaluating research performance in those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so, because it forces authors to include unnecessary (that is, plain false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references. Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we authors are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (the first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the very paper that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had not even read at that point! …

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”