Metrics for Data Repositories and Knowledgebases: Working Group Report | Data Science at NIH

“The National Institutes of Health (NIH) Data Resources Lifecycle and Metrics Working Group and Metrics for Repositories (MetRe) subgroup have released “Metrics for Data Repositories and Knowledgebases: A Working Group Report”. This report presents the findings of an exploration of the current landscape of biomedical data repository metrics. The work was carried out in two phases consisting of a small pilot (phase 1) and a public survey (phase 2).

Below is an excerpt from the report:

“This report includes input from representatives of 13 NIH repositories from Phase 1 and 92 repository managers in Phase 2. The metrics these respondents reported using are divided into several broad categories, including (from most to least commonly collected) User Behavior Characteristics, Scientific Contribution/Impact, and Repository Operations, and the respondents from the two groups reported similar patterns in the metrics they collect. The majority of respondents (77%) also indicated a willingness to share their metrics data – an encouraging finding given that such metrics can be helpful to NIH in better understanding how datasets and repositories are used.” …”

Research evaluation in context 1: Introducing research evaluation in the Netherlands – Leiden Madtrics

“As a member of the working group for the monitoring and further development of the evaluation protocol – and as an employee of CWTS – let me provide insight and context. In a series of blog posts I will focus on the evaluation procedure and the evaluation goals as described in the current protocol for the evaluation of research units. Furthermore, I will focus on the bigger picture and pay attention to the context in which the evaluation protocols have been developed and function….”

HuMetricsHSS Initiative Receives $650,000 Mellon Grant – College of Arts & Letters

“Michigan State University has received a $650,000 grant from The Andrew W. Mellon Foundation to continue the work being done by the Humane Metrics for the Humanities and Social Sciences (HuMetricsHSS) initiative, an international partnership committed to establishing more humane indicators of excellence in academia with a particular focus on the humanities and social sciences.  

The goal of the HuMetricsHSS initiative is to empower people at all levels of academic institutions by identifying core values and aligning reward mechanisms in every area — from grades and funding to promotion and tenure — with those values. …”

How should Dora be enforced? – Research Professional News

“One lesson is that the declaration’s authors did not consider redundancy as a possible outcome of research assessment, focusing instead on hiring, promotion and funding decisions. However, in my view, redundancy processes should not be delegated to crude metrics and should be informed by the principles of Dora. 

That said, it is not Dora’s job as an organisation to intervene in the gritty particulars of industrial disputes. Nor can we arbitrate in every dispute about research assessment practices within signatory organisations. …

Recently, we have re-emphasised that university signatories must make it clear to their academic staff what signing Dora means. Organisations should demonstrate their commitment to Dora’s principles to their communities, not seek accreditation from us. In doing so, they empower their staff to challenge departures from the spirit of the declaration. Grant conditions introduced by signatory funders such as the Wellcome Trust and Research England buttress this approach. 

Dora’s approach to community engagement taps into the demand for research assessment reform while acknowledging the lack of consensus on how best to go about it. The necessary reforms are complex, intersecting with the culture change needed to make the academy more open and inclusive. They also have to overcome barriers thrown up by academics comfortable with the status quo and the increasing marketisation of higher education. In such a complex landscape, Dora has no wish to be prescriptive. Rather, we need to help institutions find their own way, which will sometimes mean allowing room for course corrections….”

Gadd (2021) Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up | Frontiers in Research Metrics and Analytics

Gadd, E. (2021). Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up. Frontiers in Research Metrics and Analytics. https://doi.org/10.3389/frma.2021.680023

Abstract: Draws parallels between the problematic use of GDP to evaluate economic success and the use of global university rankings to evaluate university success. Inspired by Kate Raworth’s Doughnut Economics, this perspective argues that the pursuit of growth as measured by such indicators creates universities that ‘grow’ up the rankings rather than those which ‘thrive’ or ‘mature.’ Such growth creates academic wealth divides within and between countries, even though the direction of growth encouraged by the rankings does not truly reflect universities’ critical purpose or contribution. Highlights the incompatibility between universities’ alignment with socially responsible practices and their continued engagement with socially irresponsible ranking practices. Proposes four possible ways of engendering change in the university rankings space. Concludes by calling on leaders of ‘world-leading’ universities to join together to ‘lead the world’ in challenging global university rankings, and to set their own standards for thriving and maturing universities.

Open Grant Proposals · Business of Knowing, summer 2021

“One of those informal frontiers is crowdfunding for scientific research. For the past year, I’ve worked on Experiment, helping hundreds of scientists design and launch crowdfunding campaigns for their research questions. Experiment has been doing this for almost a decade, with more than 1,000 successfully funded projects on the platform. The process is very different from the grant funding mechanisms set up by agencies and foundations. It’s not big money yet, as the average fundraise is still ~$5,000. But in many ways, the process is better: faster, more transparent, and more encouraging to early-career scientists. Of all the lessons learned, one stands out for broader consideration: grant proposals and processes should be open by default.

Grant proposals that meet basic requirements for scientific merit and rigor should be posted online, ideally in a standardized format, in a centralized database or clearinghouse (or several). They should include more detail than just the abstract and dollar amount totals currently shown on federal databases, especially in terms of budgets and costs. The proposals should include a DOI so that future work can point back to the original question, thinking, and scope. A link to these open grant proposals should be broadly accepted as a sufficient submission in response to requests from agencies or foundations….

Open proposals would make research funding project-centric, rather than funder-centric….

Open proposals would promote more accurate budgets….

Open proposals would increase the surface area of collaboration….

Open proposals would improve citation metrics….

Open proposals would create an opportunity to reward the best question-askers in addition to the best question-answerers….

Open proposals would give us a view into the whole of science, including the unfunded proposals and the experiments with null results….”
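As a purely illustrative aside on the “standardized format” and DOI points in the excerpt above: below is a minimal sketch of what a machine-readable open-proposal record could look like. Every field name and value is an assumption made up for illustration, not a published schema, and the DOI is a placeholder.

```python
# Hypothetical record for an open grant proposal registry.
# All field names and values are illustrative assumptions, not an existing schema.
open_proposal = {
    "doi": "10.1234/placeholder-proposal",   # placeholder DOI so future work can cite the proposal
    "title": "Example proposal title",
    "investigators": ["A. Researcher", "B. Collaborator"],
    "abstract": "Short summary of the research question and planned approach.",
    "budget": {                               # more detail than a single dollar total
        "total_usd": 5000,
        "line_items": [
            {"item": "Reagents and consumables", "usd": 3000},
            {"item": "Field travel", "usd": 2000},
        ],
    },
    "status": "unfunded",                     # e.g. submitted / funded / unfunded
    "results_doi": None,                      # later linked to outputs, including null results
}
```

A registry of records along these lines, covering funded and unfunded proposals alike, is what would make the “view into the whole of science” described in the excerpt possible.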

A ‘no update’ update: setting the record straight – Altmetric

“You may have seen a blog post by Kent Anderson last week which indicated that Altmetric has changed the way we score Twitter as part of the Altmetric Attention Score. This is incorrect. We have not changed the Altmetric scoring algorithm. What we have done recently is update our documentation. Like everyone, we do this from time to time whenever we feel we can provide users with better clarity about what we do.  …”

Boost for academic recognition and reward revolution

“Dutch academics are putting their foot on the gas in the rebellion against the uncritical use of journal impact factors to recognise and reward researchers, which was set in motion by the 2012 San Francisco Declaration on Research Assessment, or DORA.

From early next year, Utrecht University in the Netherlands will officially stop using the so-called ‘impact factor’ in all its hiring and promotions and judge its researchers by their commitment to open science, teamwork, public engagement and data sharing.

And despite opposition from some Dutch professors, the sweeping changes are gathering pace, with Leiden University among the Dutch institutions also pledging their support with their Academia in Motion paper….”

Dryad Data — Repository Analytics and Metrics Portal (RAMP) 2020 data

“The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. This dataset is a subset of data from RAMP (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2020. For a description of the data collection, processing, and output methods, please see the “methods” section below….”

Repository Analytics and Metrics Portal – Web analytics for institutional repositories

“The Repository Analytics and Metrics Portal (RAMP) tracks repository items that have surfaced in search engine results pages (SERP) from any Google property. RAMP does this by aggregating Google Search Console (GSC) data from all registered repositories.

RAMP data are collected from GSC in two separate sets: page-click data and country-device data. The page-click data include the handle (aka URL) of every item that appeared in SERP. This dataset would create significant possibilities for additional research if the metadata of those items were mined. RAMP data are as free of robot traffic as possible and they contain no personally identifiable information.

RAMP data include the following metrics:

Impressions – number of times an item appears in SERP
Position – location of the item in SERP
Clicks – number of times an item URL is clicked
Click-Through Ratios – number of clicks divided by the number of impressions
Date – date of the search
Device – device used for the search
Country – country from which the search originated….”
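The click-through ratio listed above is simply clicks divided by impressions, aggregated per item. A minimal sketch of that calculation over a hypothetical page-click export follows; the file name and column names are assumptions for illustration, not RAMP’s documented export format.

```python
import csv
from collections import defaultdict

def click_through_ratios(path):
    """Aggregate clicks and impressions per item URL and compute click-through ratios.

    Assumes a CSV export with 'url', 'clicks' and 'impressions' columns;
    RAMP's actual export format and field names may differ.
    """
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            clicks[row["url"]] += int(row["clicks"])
            impressions[row["url"]] += int(row["impressions"])
    # Click-through ratio = clicks / impressions (skip items never shown in SERP)
    return {
        url: clicks[url] / impressions[url]
        for url in impressions
        if impressions[url] > 0
    }

if __name__ == "__main__":
    for url, ctr in click_through_ratios("ramp_page_clicks_2020.csv").items():
        print(f"{ctr:.3f}  {url}")
```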

TRANSPARENT RANKING: All Repositories (August 2021) | Ranking Web of Repositories

In recent months we have realized that Google Scholar’s indexing of records from several open access repositories is not as complete as it used to be, without a clear reason. Experience from a few cases suggests that GS penalizes errors in metadata descriptions, so it is important for the affected repositories to check their level of indexing and try to identify potential problems. Please consult the Google Scholar inclusion guidelines (https://scholar.google.com/intl/en/scholar/inclusion.html and https://www.or2015.net/wp-content/uploads/2015/06/or-2015-anurag-google-scholar.pdf) and the following material: “Exposing Repository Content to Google Scholar,” “A few suggestions for improving the web visibility of the contents of your institutional OA repository,” and “Altmetrics of the Open Access Institutional Repositories: A Webometrics Approach.”

As a service to the OA community, we provide five lists of repositories (all (institutional + subject), institutional, portals, data, and CRIS) with the raw numbers of records in GS for their web domains (site:xxx.yyy.zz, excluding citations and patents), ranked by decreasing number of items as collected during the second week of August 2021. The list is still incomplete, as we are still adding new repositories.

Impact of “impact factor” on early-career scientists | Rising Kashmir

“Usage of JIF by the scientific community as a predictor of impact has also increased, even while evidence of its predictive value has eroded; both the correlation between article citation rate and JIF and the proportion of highly cited articles published in high-impact journals have declined since 1990, because digitization of journal content and the proliferation of open-access articles have profoundly changed how relevant literature is located and cited. After a review of its history, a Web of Science search was carried out for articles relevant to JIF published last year; of 88 articles, about half are critiques of JIF, while the other half, for the most part, are journal editorials touting a new or higher impact factor for the year….

Hiring and promotion decisions are too important to be subject to the influence of a metric so vulnerable to manipulation and misrepresentation. Journals can boost their JIF by encouraging selective journal self-citation and by changing journal composition through preferential publication of reviews, articles in fields with large constituencies, or articles on research topics with short half-lives. JIF has degenerated into a marketing tool for journals, as illustrated by the use of “Unofficial Impact Factors” in promotional material for journals that are not even indexed in Web of Science. Such metrics are also marketing tools for academic institutions, as illustrated by the practice of Clarivate Analytics (which now owns the Science Citation Index) of awarding paper certificates and electronic “badges” to scientists designated Highly Cited Researchers (HCRs, #HighlyCited) by virtue of publishing papers in the top 1% by citations for their field and publication year. …

In Science, it has been widely noted that using JIF as a proxy for scientific excellence undermines incentives to pursue novel, time-consuming, and potentially groundbreaking work…”
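For reference, the gaming tactics described in this excerpt act directly on the impact factor’s definition. The two-year JIF reported for year $Y$ is, in essence (notation mine, for illustration):

$$\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},$$

where $C_Y(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_y$ is the number of citable items (articles and reviews) published in year $y$. Journal self-citations add to the numerator, reviews tend to attract more citations per citable item, and front-matter content can draw citations without being counted in the denominator, which is why the compositional changes described above move the score.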

World Journal Clout Index (WJCI)

“The World Journal Clout Index (WJCI) Report (2020 STM) is the research result of the “Research on the Comprehensive Evaluation Method of World Science & Technology Journal Impact” project commissioned by the China Association for Science and Technology. This project aims to establish a new journal evaluation system and explore a global-oriented journal impact evaluation method on a scientific, comprehensive and reasonable basis, with a view to contributing Chinese wisdom and Chinese solutions in the field of academic evaluation and promoting the fair evaluation and equal use of sci-tech journals worldwide.

The WJCI Report (2020 STM) determines the proportion of source journals in each country/region from four dimensions: R&D input, output of research papers, number of researchers, and the scale and level of journals. This report selects about 15,000 high-level journals representative of their region, discipline and industry as source journals from more than 63,000 active sci-tech academic journals worldwide. On the basis of thorough research on the journal classification systems of different citation databases, our research group has created a novel journal classification system that contains 5 Level-1 categories, 45 Level-2 categories and 279 Level-3 categories. This novel system comprehensively covers all sci-tech fields and reflects the development of emerging and cross-disciplinary fields, following the general outline of the Classification and Code Disciplines of the People’s Republic of China, with reference to the Chinese Library Classification and the Disciplinary Classification for Degree Granting and Talent Training, etc. The project has also established the World Citation Database 2019, with the support of CrossRef and Digital Science, for calculating indexes, and has obtained download counts from CNKI, Wanfang and Altmetric. Furthermore, a new journal impact evaluation index that integrates citation and web usage, the World Journal Clout Index (WJCI), has been formulated….”

Why Open Access: Economics and Business Researchers’ Perspectives

Abstract: Public research policies have been promoting open-access publication in recent years as an adequate model for the dissemination of scientific knowledge. However, its uptake varies widely across disciplines. This study explores the determinants of open-access publication among academic researchers of economics and business, as well as their assessment of different economic measures aimed at stimulating such publication. To do so, a survey of Spanish business and economics researchers was conducted. They reported an average of 19% of their publications in open-access journals, either hybrid or fully gold route open access. Almost 80% of the researchers foresee a future increase in the volume of open-access publications. When determining where to publish their research results, the main criterion for the selection of a scientific journal is the impact factor. Regarding open access, the most valued aspect is the visibility and dissemination it provides. Although the cost of publication is not the most relevant criterion in the choice of a journal, three out of four researchers consider that a reduction in fees and an increase in funding are measures that would boost the open-access model.