A one-size-fits-all approach to research evaluation helps exploitative publishers, warns Emanuel Kulczycki
The transition to an open science system affects the entire research process. The reward systems also need to be adjusted in order to support and mirror the open research landscape, but what will this work look like, and what will change? We met Gustav Nilsonne, chair of the European working group dealing with the issue and a participant in the SUHF working group on merit reviews.
“The review, promotion, and tenure (RPT) process is central to academic life and workplace advancement. It influences where faculty direct their attention, research, and publications. By unveiling the RPT process, we can inform actions that lead towards a greater opening of research.
Between 2017 and 2022, we conducted a multi-year research project involving the collection and analysis of more than 850 RPT guidelines and 338 surveys with scholars from 129 research institutions across Canada and the US. Starting with a literature review of academic promotion and tenure processes, we launched six studies applying mixed methods approaches such as surveys and matrix coding.
So how do today’s universities and colleges incentivize open access research? Read on for 6 key takeaways from our studies….”
Abstract: To discourage faculty members from publishing in questionable journals, tenure and promotion standards in which librarians play an active role can be developed. These standards have been effective in identifying publications in questionable outlets. However, we need to explore how these systems are perceived by the main actors in research: the researchers themselves. This study explores the perceptions of researchers at a university in Ghana who have been evaluated by a system implemented to discourage publishing in questionable publication outlets. We collected data using an online, largely qualitative questionnaire distributed to all faculty members who had applied for promotion since the implementation of the verification process. The results show that the majority of faculty members are satisfied or very satisfied with the new tenure and promotion standards. There are differences across faculties, and these seem to be tied to concerns about the choice of publication outlets. Furthermore, the dissatisfied faculty members are concerned with the role of the library in the verification process, whereas the satisfied ones trust the judgement of the librarians. We discuss implications of the results as well as future development of the standards.
“This way of sharing science has some benefits: peer review, for example, helps to ensure (even if it never guarantees) scientific integrity and prevent inadvertent misuse of data or code. But the status quo also comes with clear costs: it creates barriers (in the form of publication paywalls), slows the pace of innovation, and limits the impact of research. Fast science is increasingly necessary, and with good reason. Technology has not only improved the speed at which science is carried out, but many of the problems scientists study, from climate change to COVID-19, demand urgency. Whether modeling the behavior of wildfires or developing a vaccine, the need for scientists to work together and share knowledge has never been greater. In this environment, the rapid dissemination of knowledge is critical; closed, siloed knowledge slows progress to a degree society cannot afford. Imagine the consequences today if, as in the 2003 SARS disease outbreak, the task of sequencing genomes still took months and tools for labs to share the results openly online didn’t exist. Today’s challenges require scientists to adapt and better recognize, facilitate, and reward collaboration….
This tension between individual and institutional incentives and the progress of science must be recognized and resolved in a manner that contributes to solving the great challenges of today and the future. To change the culture, researchers must do more than take a pledge; they must change the game—the structures, the policies, and the criteria for success. In a word, open science must be institutionalized….
A powerful open science story can be found in the World Climate Research Programme’s Coupled Model Intercomparison Project (CMIP), established in 1995. Before CMIP, with the internet in its infancy, climate model results were scattered around the world and difficult to access and use. CMIP inspired 40 modeling groups and about 1,000 researchers to collaborate on advancing modeling techniques and setting guidelines for how and where to share results openly. That simple step led to an unexpected transformation: as more people were able to access the data, the community expanded, and more groups contributed data to CMIP. More people asking questions and pointing out issues in their results helped drive improvements. In its assessment reports, the Intergovernmental Panel on Climate Change relied on research publications using CMIP data to assess climate change. As a platform, CMIP enabled thousands of scientists to work together, self-correct their work, and create further ways to collaborate—a virtuous circle that attracted more scientists and more data, and increased the speed and usefulness of the work….
The most important message from these reports is that all parts of science, from individual researchers to universities and funding agencies, need to coordinate their efforts to ensure that early adopters aren’t jeopardizing their careers by joining the open science community. The whole enterprise has to change to truly realize the full benefits of open science. Creating this level of institutional adoption also requires updating policies, providing training, and recognizing and rewarding collaborative science….”
“The Roundtable on Aligning Incentives for Open Science of the National Academies of Sciences, Engineering, and Medicine brings together stakeholders to discuss the effectiveness of current incentives for adopting open science practices, barriers to adoption, and ways to move forward. According to the 2018 report Open Science by Design: Realizing a Vision for 21st Century Research, open science “aims to ensure the free availability and usability of scholarly publications, the data that result from scholarly research, and the methodologies, including code or algorithms that were used to generate those data.” With the Roundtable coming to the end of its initial phase, a virtual workshop, held December 7, 2021, provided an opportunity to review lessons learned over the past 3 years and discuss next steps for Roundtable members, the National Academies, and others interested in advancing open science and open scholarship. This publication summarizes the presentations and discussion of the workshop.”
On March 31, 2022, presidents and high-level presidential representatives from 65 colleges and universities participated in the first convening of the Higher Education Leadership Initiative for Open Scholarship (HELIOS). HELIOS emerges from the work of the National Academies of Sciences, Engineering, and Medicine’s Roundtable on Aligning Incentives for Open Science. Current members collectively represent 1.8 million students, faculty, and staff. The key outcome of the meeting was a clear commitment to collective action to advance open scholarship.
In light of replication and translational failures, biomedical research practices have recently come under scrutiny. Experts have pointed out that the current incentive structures at research institutions do not sufficiently incentivise researchers to invest in robustness and transparency and instead incentivise them to optimize their fitness in the struggle for publications and grants. This cross-sectional study aimed to describe whether and how relevant policies of university medical centres in Germany support the robust and transparent conduct of research and how prevalent traditional metrics are.
For 38 German university medical centres, we searched for institutional policies for academic degrees and academic appointments as well as websites for their core facilities and research in general between December 2020 and February 2021. We screened the documents for mentions of indicators of robust and transparent research (study registration; reporting of results; sharing of research data, code and protocols; open access; and measures to increase robustness) and for mentions of more traditional metrics of career progression (number of publications; number and value of awarded grants; impact factors; and authorship order).
While open access was mentioned in 16% of PhD regulations, other indicators of robust and transparent research were mentioned in less than 10% of institutional policies for academic degrees and academic appointments. These indicators were more frequently mentioned on the core facility and general research websites. Institutional policies for academic degrees and academic appointments had frequent mentions of traditional metrics.
References to robust and transparent research practices are, with a few exceptions, generally uncommon in institutional policies at German university medical centres, while traditional criteria for academic promotion and tenure still prevail.
Abstract: The call for greater openness in research data is quickly growing in many scientific fields. Psychology as a field, however, still falls short in this regard. Research is vulnerable to human error, to inaccurate interpretation and reporting of study results, and to decisions during the research process being biased toward favorable results. Despite the obligation to share data for verification and the importance of this practice for protecting against human error, many psychologists do not fulfill their ethical responsibility of sharing their research data. This has implications for the accurate and ethical dissemination of specific research findings and the scientific development of the field more broadly. Open science practices provide promising approaches to address the ethical issues of inaccurate reporting and false-positive results in the psychological research literature that hinder scientific growth and ultimately violate several relevant ethical principles and standards from the American Psychological Association’s (APA’s) Ethical Principles of Psychologists and Code of Conduct (APA, 2017). Still, current incentive structures in the field for publishing and professional advancement appear to induce hesitancy in applying these practices. With each of these considerations in mind, recommendations are provided on how psychologists can ethically proceed through open science practices and incentive restructuring—in particular, data management, data and code sharing, study preregistration, and registered reports.
“The open data revolution won’t happen unless the research system values the sharing of data as much as authorship on papers….
Such a practice is neither new nor confined to a specific field. But the result tends to be the same: that authors of openly shared data sets are at risk of not being given credit in a way that counts towards promotion or tenure, whereas those who are named as authors on the publication are more likely to reap benefits that advance their careers.
Such a situation is understandable as long as authorship on a publication is the main way of getting credit for a scientific contribution. But if open data were formally recognized in the same way as research articles in evaluation, hiring and promotion processes, research groups would lose at least one incentive for keeping their data sets closed….”
“A second front was opened about ten years ago now from an entirely different and mostly unanticipated direction. More than just flush with funds, and this time financed by academia herself, academic publishers started (escalated?) their own attack on science by gobbling up and developing digital surveillance technologies. To expand their sources of user data, these corporations bought digital tools covering all aspects of academic life, from literature search, data analysis, writing, citing or outreach, all the way to citation analysis for research assessment. These corporations formerly known as publishers are using their expanded digital surveillance network to accomplish two separate goals. First, a copy of the data is aggregated with private data from scholarly users and sold, either to advertisers, to law enforcement agencies not allowed to collect such intrusive data themselves, or to any authoritarian government interested in identifying potential opposition intelligentsia. The second goal is to expand the monopolies they enjoy on scholarly content to a monopoly on all scholarly services, i.e., the mother of all vendor lock-ins. Packaging all the different tools in a single bundle and selling it to institutions akin to subscription “Big Deals” would make it impossible for any institution buying such a package to ever switch to a different provider again. An analogy outside of academia would be a merger of Microsoft, SAP, Google and Facebook. There are two corporations so far that are standing ready to deploy such bundles, RELX (parent of Elsevier) and Holtzbrinck (SpringerNature, Digital Science). A related data analytics corporation specializing in scholarly data is Clarivate (Web of Science, ProQuest)….”
“We’re testing a new experimental open science feature intended to promote data sharing and reuse across the PLOS journal portfolio. A subset of PLOS articles that link to shared research data in a repository will display a prominent visual cue designed to help researchers find accessible data, and encourage best practice in data sharing….”
“Open, collaborative research accelerates scientific discovery, yet there are serious roadblocks to sharing data and insights. First, team science requires time and attention. Second, the current incentive system of ‘publish or perish’ positions collaborators as competitors. Our solutions include tools, facilitated sharing, and rewards….”
The benefits of increasing public access to data from clinical trials are widely accepted. Such benefits extend to the sharing of data from high-quality systematic reviews, given the time and cost involved with undertaking reviews. We describe the application of open sources of review data, outline potential challenges and highlight efforts made to address these challenges, with the intent of encouraging publishers, funders and authors to consider sharing review data more broadly.
We describe the application of systematic review data in: (i) advancing understanding of clinical trials and systematic review methods, (ii) repurposing of data to answer public health policy and practice relevant questions, (iii) identification of research gaps and (iv) accelerating the conduct of rapid reviews to inform decision making. While access, logistical, motivational and legal challenges exist, there has been progress made by systematic review, academic and funding agencies to incentivise data sharing and create infrastructure to support greater access to systematic review data.
There is opportunity to maximize the benefits of research investment in undertaking systematic reviews by ensuring open sources of systematic review data. Efforts to create such systems should draw on learnings and principles outlined for sharing clinical trial data.
“PLOS has released a preprint and supporting data on research conducted to understand the needs and habits of researchers in relation to code sharing and reuse as well as to gather feedback on prototype code notebooks and help determine strategies that publishers could use to increase code sharing.
Our previous research led us to implement a mandatory code sharing policy at PLOS Computational Biology in March 2021 to increase the amount of code shared alongside published articles. As well as exploring policy to support code sharing, we have also been collaborating with NeuroLibre, an initiative of the Canadian Open Neuroscience Platform, to learn more about the potential role of technological solutions for enhancing code sharing. NeuroLibre is one of a growing number of interactive or executable technologies for sharing and publishing research, some of which have become integrated with publishers’ workflows….”