PsyArXiv Preprints | Nudging Open Science

Abstract: In this article, we provide a toolbox of resources and nudges for those who are interested in advancing open scientific practice. Open Science encompasses a range of behaviours that aim to increase the transparency of scientific research and how widely it is communicated. The paper is divided into seven sections, each dealing with a different stakeholder in the world of research (researchers, students, departments and faculties, universities, academic libraries, journals, and funders). With two frameworks in mind — EAST and the Pyramid of Culture Change — we describe the influences and incentives that sway behaviour for each of these stakeholders, outline changes that can foster Open Science, and suggest actions and resources for individuals to nudge these changes. In isolation, a small shift in one person’s behaviour may appear to make little difference, but when combined, these small shifts can lead to radical changes in culture. We offer this toolbox to assist individuals and institutions in cultivating a more open research culture.

Nudging Open Science: Useful Tips for Academic Libraries? | ZBW MediaTalk

“An international group of eleven behavioural scientists from eight countries (Australia, New Zealand, Germany, Austria, Poland, USA, Netherlands) recently addressed this question in the report “Nudging Open Science” and developed recommendations for action for seven groups in the scientific system. Academic libraries are one of these groups. The other groups (they are called “nodes” in the report) are researchers, students, departments and faculties, universities, journals and funding organisations. The team of behavioural scientists classifies each of these groups and gives practical tips on who can nudge each group, and how, to practice more Open Science. However, the report does not contain any approaches on how libraries themselves can actively nudge other stakeholders. We present approaches and results of the report with a special focus on academic libraries….

Now the group of behavioural scientists has started thinking about how the potential of nudging could be used to further advance Open Science in the scientific ecosystem. Their thesis: Whether researchers and institutions choose to engage in Open Science practices is not necessarily a matter of rational choice. On the contrary: Most decisions are routinely made in the course of emotional, automatic or impulsive processes that are often influenced by psychosocial factors (example: peer pressure). When faced with a decision, a person usually chooses the path of least resistance or least effort. The status quo is maintained….”

Statement on the Scholarly Merit and Evaluation of Open Scholarship in Linguistics | Linguistic Society of America

“Open Scholarship can be a key component of a scholar’s portfolio in a number of situations, including but not limited to hiring, review, promotion, and awards. Because Open Scholarship can take many forms, evaluation of this work may need different tools and approaches from those used for publications such as journal articles and books. In particular, citation counts, a common tool for evaluating publications, are not available for some kinds of Open Scholarship in the same form or from the same providers as they are for publications. Here we share recommendations on how to assess the use of Open Scholarship materials, including and beyond citations, covering materials that have formal peer review and those that do not.

For tenure & promotion committees, program managers, department chairs, hiring committees, and others tasked with evaluating Open Scholarship, NASEM has prepared a discipline-agnostic rubric that can be used as part of hiring, review, or promotion processes. Outside letters of evaluation can also provide insight into the significance and impact of Open Scholarship work. Psychologist Brian Nosek (2017) provides some insight into how a letter writer can evaluate Open Scholarship, and includes several ways that evaluation committees can ask for input specifically about contributions to Open Scholarship. Nosek suggests that letter writers and evaluators comment on ways that individuals have contributed to Open Scholarship through “infrastructure, service, metascience, social media leadership, and their own research practices.” We add that using Open Scholarship in the classroom, whether through open educational materials, open pedagogy, or teaching of Open Scholarship principles, should be included in this list. Evaluators can explicitly ask for these insights in requests to letter writers, for example by including the request to “Please describe the impact that [scholar name]’s openly available research outputs have had from the research, public policy, pedagogic, and/or societal perspectives.” These evaluations can be particularly important when research outputs are not formally peer reviewed.

For scholars preparing hiring, review, promotion, or other portfolios that include Open Scholarship, we recommend not only discussing the Open Scholarship itself, but also its documented and potential impacts on both the academic community as well as broader society. Many repositories housing Open Scholarship materials provide additional metrics such as views, downloads, comments, and forks (or reuse cases) alongside citations in published literature. The use and mention of material with a Digital Object Identifier (DOI) can be tracked using tools such as ImpactStory, Altmetric.com, and other alternative metrics. To aid with evaluation of this work, the creator should share these metrics where available, along with any other qualitative indicators (such as personal thank-yous, reuse stories, or online write-ups) that can give evaluators a sense of the impact of their work. The Metrics Toolkit provides examples and use cases for these kinds of metrics. This is of potential value when peer review of these materials may not take the same form as with published journals or books; thoughtful use and interpretation of metrics can help evaluators understand the impact and importance of the work.

The Linguistic Society of America reaffirms its commitment to fair review of Open Scholarship in hiring, tenure, and promotion, endorses all of these approaches to peer review and evaluation of Open Scholarship, and encourages scholars, departments, and personnel committees to take them into careful consideration and implement language about Open Scholarship in their evaluation processes.”

Open Science: read our statement – News – CIVIS – A European Civic University

“CIVIS universities promote the development of new research indicators to complement the conventional indicators for research quality and impact, so as to do justice to open science practices and, going beyond pure bibliometric indicators, to promote also non-bibliometric research products. In particular, the metrics should extend the conventional bibliometric indicators in order to cover new forms of research outputs, such as research data and research software….

Incentives and Rewards for researchers to engage in Open Science activities 

Research career evaluation systems should fully acknowledge open science activities. CIVIS members encourage the inclusion of Open Science practices in their assessment mechanisms for rewards, promotion, and/or tenure, along with the Open Science Career Assessment Matrix….”

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year the working group on Recognition and Rewards, part of the Open Science Programme, has reflected and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”

Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Halt the h-index – Leiden Madtrics

“Sometimes, bringing home a message requires a more visual approach. That’s why recently, we teamed up with a graphic designer to create an infographic on the h-index – or rather, on the reasons why not to use the h-index.

In our experience with stakeholders in research evaluation, debates about the usefulness of the h-index keep popping up. This happens even in contexts that are more welcoming towards responsible research assessment. Of course, the h-index is well-known, as are its downsides. Still, the various issues around it do not yet seem to be common knowledge. At the same time, current developments in research evaluation propose more holistic approaches. Examples include the evaluative inquiry developed at our own centre as well as approaches to evaluate academic institutions in context. Scrutinizing the creation of indicators itself, better contextualization has been called for, demanding that indicators be derived “in the wild” and not in isolation.

Moving towards more comprehensive research assessment approaches that consider research in all its variants is supported by the larger community of research evaluators as well, making a compelling case to move away from single-indicator thinking.

Still, there is opposition to reconsidering the use of metrics. When we first introduced the infographic on Twitter, it evoked responses questioning misuse of the h-index in practice, disparaging more qualitative assessments, or simply shrugging off responsibility for taking action due to a perceived lack of alternatives. This shows there is indeed a need for taking another look at the h-index….”

bjoern.brembs.blog » Minimizing the collective action problem

“Thus, researchers need to modernize the way they do their scholarship, and institutions need to modernize their infrastructure so that researchers are enabled to modernize their scholarship. Both have now had more than 30 years for this modernization, and neither has acted. At this point it is fair to assume, barring some major catastrophe forcing their hands, that such modernization is not going to magically appear within the next three decades, either. Funders, therefore, are in a position to incentivize this long-overdue modernization which institutions, and hence researchers, have been too complacent or too reticent to tackle.

If, as I would tend to agree, we are faced with a collective action problem and the size of the collective is the major determinant for effective problem solving, then it is a short step to realize that funders are in a uniquely suited position to start solving this collective action problem. Conversely, then, it is only legitimate to question the motives of those who seek to make the collective action problem unnecessarily difficult by advocating to target individual researchers or institutions. What could possibly be the benefit of making the collective action problem numerically more difficult to solve?”

Data Sharing in Biomedical Sciences: A Systematic Review of Incentives | Biopreservation and Biobanking

Abstract: Background: The lack of incentives has been described as the rate-limiting step for data sharing. Currently, the evaluation of scientific productivity by academic institutions and funders is heavily reliant upon the number of publications and citations, raising questions about the adequacy of such mechanisms to reward data generation and sharing. This article provides a systematic review of current and proposed incentive mechanisms for researchers in the biomedical sciences and discusses their strengths and weaknesses.

Methods: PubMed, Web of Science, and Google Scholar were queried for original research articles, editorials, and opinion articles on incentives for data sharing. Articles were included if they discussed incentive mechanisms for data sharing, were applicable to biomedical sciences, and were written in English.

Results: Although coauthorship in return for the sharing of data is common, this might be incompatible with authorship guidelines and raise concerns over the ability of secondary analysts to contest the proposed research methods or conclusions that are drawn. Data publication, citation, and altmetrics have been proposed as alternative routes to credit data generators, which could address these disadvantages. Their primary downsides are that they are not well-established, it is difficult to acquire evidence to support their implementation, and that they could be gamed or give rise to novel forms of research misconduct.

Conclusions: Alternative recognition mechanisms need to be more commonly used to generate evidence on their power to stimulate data sharing, and to assess where they fall short. There is ample discussion in policy documents on alternative crediting systems to work toward Open Science, which indicates that there is an interest in working out more elaborate metascience programs.

COVID-19 and the research scholarship ecosystem: help! – Journal of Clinical Epidemiology

Highlights

Data sharing is not common as part of biomedical publications.
To increase data sharing, biomedical journals, funders, and academic institutions should introduce policies that will enhance data sharing and other open science practices.
As part of research assessments, incentives and rewards need to be introduced.

Abstract

Objectives

Data sharing practices remain elusive in biomedicine. The COVID-19 pandemic has highlighted the problems associated with the lack of data sharing. The objective of this article is to draw attention to the problem and possible ways to address it.

Study Design and Setting

This article examines some of the current open access and data sharing practices at biomedical journals and funders. In the context of COVID-19, the consequences of these practices are also examined.

Results

Despite the best of intentions on the part of funders and journals, COVID-19 biomedical research is not open. Academic institutions need to incentivize and reward data sharing practices as part of researcher assessment. Journals and funders need to implement strong policies to ensure that data sharing becomes a reality. Patients support sharing of their data.

Conclusion

Biomedical journals, funders and academic institutions should act to require stronger adherence to data sharing policies.

The Impact of the German ‘DEAL’ on Competition in the Academic Publishing Market by Justus Haucap, Nima Moshgbar, Wolfgang Benedikt Schmal :: SSRN

Abstract: The German DEAL agreements between German universities and research institutions on the one side and Springer Nature and Wiley on the other side facilitate easy open access publishing for researchers located in Germany. We use a dataset of all publications in chemistry from 2016 to 2020 and apply a difference-in-differences approach to estimate the impact on eligible scientists’ choice of publication outlet. We find that even in the short period following the conclusion of these DEAL agreements, publication patterns in the field of chemistry have changed, as eligible researchers have increased their publications in Wiley and Springer Nature journals at the cost of other journals. From this, two related competition concerns emerge. First, academic libraries may be, at least in the long run, left with fewer funds and incentives to subscribe to non-DEAL journals published by smaller publishers or to fund open access publications in these journals. Second, eligible authors may prefer to publish in journals included in the DEAL agreements, thereby giving DEAL journals a competitive advantage over non-DEAL journals in attracting good papers. Given the two-sided market nature of the academic journal market, these effects may both further spur the concentration process in this market.

Open access journal publishing in the business disciplines: A closer look at the low uptake and discipline-specific considerations – Mikael Laakso, Bo-Christer Björk, 2021

Abstract: The Internet has enabled efficient electronic publishing of scholarly journals and Open Access business models. Recent studies have shown that adoption of Open Access journals has been uneven across scholarly disciplines, where the business and economics disciplines in particular seem to lag behind all other fields of research. Through bibliometric analysis of journals indexed in Scopus, we find the share of articles in Open Access journals in business, management, and accounting to be only 6%. We further studied the Open Access availability of articles published during 2014–2019 in journals included in the Financial Times 50 journal list (19,969 articles in total). None of the journals are full Open Access, but 8% of the articles are individually open and for a further 35% earlier manuscript versions are available openly on the web. The results suggest that the low adoption rate of Open Access journals in the business fields is a side-effect of evaluation practices emphasizing publishing in journals included in particular ranking lists, creating disincentives for business model innovation and barriers to entry for new journals. Currently, most business school research has to be made Open Access through other means than full Open Access journals, and libraries play an important role in facilitating this in a sustainable way.

Increasing transparency through open science badges

“Authors who adopt transparent practices for an article in Conservation Biology are now able to select from 3 open science badges: open data, open materials, and preregistration. Badges appear on published articles as visible recognition and highlight these efforts to the research community. There is an emerging body of literature regarding the influences of badges, for example, an increased number of articles with open data (Kidwell et al. 2016) and an increased rate of data sharing (Rowhani-Farid et al. 2018). However, in another study, Rowhani-Farid et al. (2020) found that badges did not “noticeably motivate” researchers to share data. Badges, as far as we know, are the only data-sharing incentive that has been tested empirically (Rowhani-Farid et al. 2017).

Rates of data and code sharing are typically low (Herold 2015; Roche et al. 2015; Archmiller et al. 2020; Culina et al. 2020). Since 2016, we have asked authors of contributed papers, reviews, method papers, practice and policy papers, and research notes to tell us whether they “provided complete machine- and human-readable data and computer code in Supporting Information or on a public archive.” Authors of 31% of these articles published in Conservation Biology said they shared their data or code, and all authors provide human-survey instruments in Supporting Information or via a citation or online link (i.e., shared materials)….”

Incentivization Blueprint — Open Research Funders Group

“A growing number of funders are eager to encourage grantees to share their research outputs – articles, code and materials, and data. To accelerate the adoption of open norms, deploying the right incentives is of paramount importance. Specifically, the incentive structure needs to both reduce its reliance on publication in high-impact journals as a primary metric, and properly value and reward a range of research outputs.

This Incentivization Blueprint seeks to provide funders with a stepwise approach to adjusting their incentivization schemes to more closely align with open access, open data, open science, and open research. Developed by the Open Research Funders Group, the Blueprint provides organizations with guidance for developing, implementing, and overseeing incentive structures that maximize the visibility and usability of the research they fund.

A number of prominent funders have committed to taking steps to implement the Incentivization Blueprint. Among them are the following: …”

No, it’s not The Incentives—it’s you – [citation needed]

“There is, of course,  an element of truth to this kind of response. I’m not denying that perverse incentives exist; they obviously do. There’s no question that many aspects of modern scientific culture systematically incentivize antisocial behavior, and I don’t think we can or should pretend otherwise. What I do object to quite strongly is the narrative that scientists are somehow helpless in the face of all these awful incentives—that we can’t possibly be expected to take any course of action that has any potential, however small, to impede our own career development.

“I would publish in open access journals,” your friendly neighborhood scientist will say. “But those have a lower impact factor, and I’m up for tenure in three years.” …

It’s also aggravating on an intellectual level, because the argument that we’re all being egregiously and continuously screwed over by The Incentives is just not that good. I think there are a lot of reasons why researchers should be very hesitant to invoke The Incentives as a justification for why any of us behave the way we do. I’ll give nine of them here, but I imagine there are probably others….”