Innovation, entrepreneurship, promotion, and tenure

“Academic promotion and tenure (P&T) processes that typically prioritize faculty grants and publications can fail to fully assess and value entrepreneurial, innovative endeavors (1) that can produce the kind of societal impacts that universities are increasingly being called on to provide and that many faculty and students increasingly prioritize (2, 3). A more inclusive assessment of scholarship and creative activity to better recognize and reward innovation and entrepreneurship (I&E) will require “broadening the bar” (4) to reflect evolving forms of faculty impact without diluting or increasing the requirements for advancement. Expanding what we value as scholarship can also help broaden who we value as scholars and thus support a more innovative and diverse professoriate. We highlight work by the Promotion and Tenure–Innovation and Entrepreneurship (PTIE) coalition to promote policies and practices to recognize the impact of faculty I&E. We posit that this strategy can be broadly applicable (beyond I&E) to recognize the many and evolving dimensions along which faculty create societal impacts….

I&E—along with diversity, equity, and inclusion (DEI); interdisciplinary team science; open science; community engagement; and others—represent examples of the many evolving forms of scholarship for the 21st-century faculty member. That said, these types of scholarship can be overlooked or undervalued in the process by which universities review, reward, and advance the academic workforce (8, 11, 12). As these evolutions are incorporated into the fabric of higher education, the faculty evaluation process thus needs to be updated to reflect this changing landscape….”

Coalition Members | Promotion and Tenure – Innovation and Entrepreneurship (PTIE) Summit

“Coalition members are universities committed to being part of the conversation on this topic. Coalition membership does not constitute endorsement of specific solutions or promotion & tenure (P&T) policies. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the organizations listed below. By joining the non-binding PTIE Coalition, the representative(s) from the institution commit to the following:

• Stay engaged with us as we develop our program for the 2020 PTIE summit – providing suggestions and insights and serving as a sounding board for ideas.
• Provide a representative from your institution who will attend the Virtual National Summit on September 16-18, 2020.
• Consider adopting the recommendations from the 2020 PTIE summit for expanding P&T guidelines on your own campus.
• Allow the PTIE organizing committee to list your institution’s name and/or logo on a webpage (along with our other coalition institutions) as an institution committed to advancing I&E on its campus for its faculty and students. The webpage will be housed on our www.ptie.org website and will note that your participation in this coalition consists of the four bullet points listed here….”

How should Dora be enforced? – Research Professional News

“One lesson is that the declaration’s authors did not consider redundancy as a possible outcome of research assessment, focusing instead on hiring, promotion and funding decisions. However, in my view, redundancy processes should not be delegated to crude metrics and should be informed by the principles of Dora. 

That said, it is not Dora’s job as an organisation to intervene in the gritty particulars of industrial disputes. Nor can we arbitrate in every dispute about research assessment practices within signatory organisations. …

Recently, we have re-emphasised that university signatories must make it clear to their academic staff what signing Dora means. Organisations should demonstrate their commitment to Dora’s principles to their communities, not seek accreditation from us. In doing so, they empower their staff to challenge departures from the spirit of the declaration. Grant conditions introduced by signatory funders such as the Wellcome Trust and Research England buttress this approach. 

Dora’s approach to community engagement taps into the demand for research assessment reform while acknowledging the lack of consensus on how best to go about it. The necessary reforms are complex, intersecting with the culture change needed to make the academy more open and inclusive. They also have to overcome barriers thrown up by academics comfortable with the status quo and the increasing marketisation of higher education. In such a complex landscape, Dora has no wish to be prescriptive. Rather, we need to help institutions find their own way, which will sometimes mean allowing room for course corrections….”

How misconduct helped psychological science to thrive

“Despite this history, before Stapel, researchers were broadly unaware of these problems or dismissed them as inconsequential. Some months before the case became public, a concerned colleague and I proposed to create an archive that would preserve the data collected by researchers in our department, to ensure reproducibility and reuse. A council of prominent colleagues dismissed our proposal on the basis that competing departments had no similar plans. Reasonable suggestions that we made to promote data sharing were dismissed on the unfounded grounds that psychology data sets can never be safely anonymized and would be misused out of jealousy, to attack well-meaning researchers. And I learnt about at least one serious attempt by senior researchers to have me disinvited from holding a workshop for young researchers because it was too critical of suboptimal practices….

Much of the advocacy and awareness has been driven by early-career researchers. Recent cases show how preregistering studies, replication, publishing negative results, and sharing code, materials and data can both empower the self-corrective mechanisms of science and deter questionable research practices and misconduct….

For these changes to stick and spread, they must become systemic. We need tenure committees to reward practices such as sharing data and publishing rigorous studies that have less-than-exciting outcomes. Grant committees and journals should require preregistration or explanations of why it is not warranted. Grant-programme officers should be charged with checking that data are made available in accordance with mandates, and PhD committees should demand that results are verifiable. And we need to strengthen a culture in which top research is rigorous and trustworthy, as well as creative and exciting….”

Open Education in Promotion, Tenure, and Faculty Development

“This resource was developed by a working group from the Iowa Open Education Action Team (Iowa OER). Our team built upon DOERS3’s OER in Tenure & Promotion Matrix to help faculty and staff advocate for the inclusion of open educational practices (OEP) in promotion, tenure, and faculty evaluation practices at their institutions. Below, you can find our main document, directions for interacting with the text, and handouts you can use or adapt for your own advocacy work….”

Impact of “impact factor” on early-career scientists | Rising Kashmir

“Usage of JIF by the scientific community as a predictor of impact has also increased, even while evidence of its predictive value has eroded; both correlations between article citation rate and JIF and proportions of highly cited articles published in high-impact journals have declined since 1990, because digitization of journal content and the proliferation of open-access articles have profoundly changed how relevant literature is located and cited. A review of JIF’s history was followed by a Web of Science search for articles published last year relevant to JIF; of 88 articles, about half are critiques of JIF, while the other half are mostly journal editorials touting a new or higher impact factor for the year….

Hiring and promotion decisions are too important to be subject to the influence of a metric so vulnerable to manipulation and misrepresentation. Journals can boost their JIF by encouraging selective journal self-citation and by changing journal composition through preferential publication of reviews, articles in fields with large constituencies, or articles on research topics with short half-lives. JIF has degenerated into a marketing tool for journals, as illustrated by the use of “Unofficial Impact Factors” in promotional material for journals that are not even indexed in Web of Science; it has also become a marketing tool for academic institutions, as illustrated by the practice of Clarivate Analytics (which now owns Science Citation Index) of awarding paper certificates and electronic “badges” to scientists determined to be Highly Cited Researchers (HCRs, #HighlyCited) by virtue of publishing papers in the top 1% by citations for their field and publication year. …

In Science, it has been widely noted that using JIF as a proxy for scientific excellence undermines incentives to pursue novel, time-consuming, and potentially groundbreaking work…”

Dashboard will track hiring and promotion criteria

“A US$1.2 million grant will fund an effort to identify and publicize the criteria that universities around the world use to hire and promote researchers. The Declaration on Research Assessment (DORA), a global initiative to reform the evaluation of researchers, will use part of the funds to create an interactive dashboard that will shine much-needed light on a process that is often opaque and controversial, says programme director Anna Hatch, who is based in Washington DC. “When criteria are visible and transparent, universities can be held accountable,” she says. “Researchers will know how their contributions will be measured, so they can make a better case for themselves.”

DORA, conceived in 2012 at the annual meeting of the American Society for Cell Biology, called for improvements to the evaluation of researchers and the outputs of scholarly research. The declaration specifically calls for doing away with impact factors as a way to judge the merit of academics. So far, it has been signed by more than 20,000 individuals and institutions around the world.

The grant is from the Arcadia Fund, a UK-based charity that has supported many academic initiatives since its founding in 2001….”

Why Do We Need to Change Research Evaluation Systems? — Observatory | Institute for the Future of Education

“Can we break out of this vicious cycle? Are there alternatives? Yes, there are. For some years now, various movements worldwide have sought to change the system for evaluating research. In 2012, the “San Francisco Declaration” proposed eliminating metrics based on the impact factor. There was also the Charte de la désexcellence (“Letter of Dis-Excellence”) mentioned above. In 2015, a group of academics signed the Leiden Manifesto, which warned of the “widespread misuse of indicators in evaluating scientific performance.” Since 2013, the group Science in Transition has sought to reform the science evaluation system. Finally, since 2016, the Collectiu InDocentia, created at the University of Valencia (Spain), has also been doing its part. …”

DORA receives $1.2M grant from Arcadia to accelerate research assessment reform | DORA

“Research assessment reform is part of the open research movement in academia that asks the question: Who and what is research for? The San Francisco Declaration on Research Assessment (DORA), an initiative that operates under the sponsorship of the American Society for Cell Biology, has been awarded a 3-year, $1.2M grant from Arcadia – a charitable fund of Lisbet Rausing and Peter Baldwin. The generous funding will support Tools to Advance Research Assessment (TARA), a project to facilitate the development of new policies and practices for academic career assessment. Project TARA is a collaboration with Sarah de Rijcke, Professor in Science and Evaluation Studies and director of the Centre for Science and Technology Studies (CWTS) at Leiden University, and Ruth Schmidt, Associate Professor at the Institute of Design at the Illinois Institute of Technology.

The grant for Project TARA will help DORA to identify, understand, and make visible the criteria and standards universities use to make hiring, promotion, and tenure decisions. This information will be used to create resources and practical guidance on the reform of research assessment for academic and scholarly institutions. The grant provides DORA with crucial support to create the following outputs:

• An interactive online dashboard that tracks criteria and standards academic institutions use for hiring, review, promotion, and tenure.
• A survey of U.S. academic institutions to gain a broad understanding of institutional attitudes and approaches to research assessment reform.
• A toolkit of resources informed by the academic community to support academic institutions working to improve policy and practice….”

Impact factor abandoned by Dutch university in hiring and promotion decisions

“A Dutch university says it is formally abandoning the impact factor — a standard measure of scientific success — in all hiring and promotion decisions. By early 2022, every department at Utrecht University in the Netherlands will judge its scholars by other standards, including their commitment to teamwork and their efforts to promote open science, says Paul Boselie, a governance researcher and the project leader for the university’s new Recognition and Rewards scheme. “Impact factors don’t really reflect the quality of an individual researcher or academic,” he says. “We have a strong belief that something has to change, and abandoning the impact factor is one of those changes.” …”

Incorporating Preprints into Academic Assessment | DORA

“Join DORA and ASAPbio on Tuesday, June 29, for a joint webinar on preprints and academic assessment….

Speakers will discuss important topics surrounding the incorporation of preprints into academic assessment: the value of considering preprints in academic assessment, how preprints can be included in existing assessment processes, and what challenges may arise along the way. Participants will have the opportunity to engage in the dialogue and ask questions of the speakers in the last section of the webinar.

This webinar is free to attend and open to everyone interested in improving research assessment. In particular, this webinar will aim to equip early career researchers, faculty, and academic leadership with the knowledge to advocate for the use of preprints at their institutions.”

Game over: empower early career researchers to improve research quality

Abstract:  Processes of research evaluation are coming under increasing scrutiny, with detractors arguing that they have adverse effects on research quality, and that they support a research culture of competition to the detriment of collaboration. Based on three personal perspectives, we consider how current systems of research evaluation lock early career researchers and their supervisors into practices that are deemed necessary to progress academic careers within the current evaluation frameworks. We reflect on the main areas in which changes would enable better research practices to evolve; many align with open science. In particular, we suggest a systemic approach to research evaluation, taking into account its connections to the mechanisms of financial support for the institutions of research and higher education in the broader landscape. We call for more dialogue in the academic world around these issues and believe that empowering early career researchers is key to improving research quality.

Open Scholarship Support Guide.pdf

“Steps to Support Open Scholarship

Open scholarship entails a culture shift in how research is conducted in universities. It requires action on the part of university administration, working in concert with faculty, sponsors and disciplinary communities.  Universities should consider steps in three areas:

•  Policies:  Language and guidance should be reviewed for alignment with open scholarship, in particular: (1) academic hiring, review, tenure and promotion (valuing diverse types of research products; metrics that incentivize the open dissemination of articles, data, and other research outputs; and valuing collaborative research); (2) intellectual property (ownership, licensing and distribution of data, software, materials and publications); (3) research data protection (for data to be stored and shared through repositories); (4) attribution (recognizing the full range of contributions); and (5) privacy (ensuring that privacy obligations are met).

•  Services and Training:  Researchers need support to ensure that data and other research objects are managed according to FAIR Principles: findable, accessible, interoperable and reusable. While the specific solution must be tailored to the discipline and research, common standards, including Digital Object Identifiers (DOIs), must be followed.

•  Infrastructure:  Archival storage is required for data, materials, specimens and publications to permit reuse. Searchable portals are needed to register research products where they can be located and accessed. Universities can realize efficiencies by utilizing external resources (including existing disciplinary repositories) and by developing shared resources that span the institution when external resources do not exist.

Presidents and provosts are encouraged to work with their academic senates to create an open scholarship initiative that promotes institution-wide actions supporting open scholarship practices, while remaining sufficiently flexible to accommodate disciplinary differences and norms….”
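
The guide’s emphasis on DOIs as a common standard is easy to make concrete: a persistent identifier gives every research output a machine-resolvable record. Below is a minimal sketch (not from the guide itself) that looks up a DOI’s metadata through the public Crossref REST API; the endpoint is real, but the handful of fields extracted is an illustrative assumption rather than a complete metadata model.

```python
# Illustrative sketch: resolving a DOI to bibliographic metadata via the
# public Crossref REST API (https://api.crossref.org/works/{doi}).
# The fields pulled out below are a small, assumed subset of the response.
import requests

def doi_metadata(doi: str) -> dict:
    """Return basic bibliographic metadata that Crossref holds for a DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or [""])[0],  # Crossref returns titles as a list
        "type": msg.get("type"),
        "publisher": msg.get("publisher"),
        "cited_by": msg.get("is-referenced-by-count"),  # citations known to Crossref
    }

if __name__ == "__main__":
    # Crossref's long-standing test record; substitute any registered DOI.
    print(doi_metadata("10.5555/12345678"))
```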

Statement on the Scholarly Merit and Evaluation of Open Scholarship in Linguistics | Linguistic Society of America

“Open Scholarship can be a key component of a scholar’s portfolio in a number of situations, including but not limited to hiring, review, promotion, and awards. Because Open Scholarship can take many forms, evaluating this work may require different tools and approaches than those used for publications like journal articles and books. In particular, citation counts, a common tool for evaluating publications, are not available for some kinds of Open Scholarship in the same form or from the same providers as they are for publications. Here we share recommendations on how to assess the use of Open Scholarship materials, including and beyond citations, covering both materials that have formal peer review and those that do not.

For tenure & promotion committees, program managers, department chairs, hiring committees, and others tasked with evaluating Open Scholarship, NASEM has prepared a discipline-agnostic rubric that can be used as part of hiring, review, or promotion processes. Outside letters of evaluation can also provide insight into the significance and impact of Open Scholarship work. Psychologist Brian Nosek (2017) provides some insight into how a letter writer can evaluate Open Scholarship, and includes several ways that evaluation committees can ask for input specifically about contributions to Open Scholarship. Nosek suggests that letter writers and evaluators comment on ways that individuals have contributed to Open Scholarship through “infrastructure, service, metascience, social media leadership, and their own research practices.” We add that using Open Scholarship in the classroom, whether through open educational materials, open pedagogy, or teaching of Open Scholarship principles, should be included in this list. Evaluators can explicitly ask for these insights in requests to letter writers, for example by including the request to “Please describe the impact that [scholar name]’s openly available research outputs have had from the research, public policy, pedagogic, and/or societal perspectives.” These evaluations can be particularly important when research outputs are not formally peer reviewed.

For scholars preparing hiring, review, promotion, or other portfolios that include Open Scholarship, we recommend not only discussing the Open Scholarship itself, but also its documented and potential impacts on both the academic community as well as broader society. Many repositories housing Open Scholarship materials provide additional metrics such as views, downloads, comments, and forks (or reuse cases) alongside citations in published literature. The use and mention of material with a Digital Object Identifier (DOI) can be tracked using tools such as ImpactStory, Altmetric.com, and other alternative metrics. To aid with evaluation of this work, the creator should share these metrics where available, along with any other qualitative indicators (such as personal thank-yous, reuse stories, or online write-ups) that can give evaluators a sense of the impact of their work. The Metrics Toolkit provides examples and use cases for these kinds of metrics. This is of potential value when peer review of these materials may not take the same form as with published journals or books; thoughtful use and interpretation of metrics can help evaluators understand the impact and importance of the work.

The Linguistic Society of America reaffirms its commitment to fair review of Open Scholarship in hiring, tenure, and promotion, endorses all of these approaches to peer review and evaluation of Open Scholarship, and encourages scholars, departments, and personnel committees to take them into careful consideration and implement language about Open Scholarship in their evaluation processes.”
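
The LSA’s suggestion to track the use of DOI-bearing materials with tools such as Altmetric.com can likewise be sketched in a few lines. The example below assumes Altmetric’s public v1 details endpoint (https://api.altmetric.com/v1/doi/{doi}); the endpoint exists, but the specific response fields read here are assumptions made for illustration, and the free tier is rate-limited.

```python
# Illustrative sketch: fetching alternative metrics for a DOI from
# Altmetric's public details API. Field names are assumptions based on
# Altmetric's documented v1 responses; verify against current docs.
from typing import Optional

import requests

def altmetric_summary(doi: str) -> Optional[dict]:
    """Return a small attention summary for a DOI, or None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # Altmetric has no attention data for this DOI
        return None
    resp.raise_for_status()
    data = resp.json()
    return {
        "title": data.get("title"),
        "score": data.get("score"),                 # composite attention score
        "posts": data.get("cited_by_posts_count"),  # mentions across tracked sources
    }

if __name__ == "__main__":
    summary = altmetric_summary("10.1038/nature12373")  # substitute any DOI of interest
    print(summary or "No attention data recorded for this DOI.")
```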