Unpacking The Altmetric Black Box

Altmetric Attention Scores for papers don’t seem to add up, calling into question whether Altmetric data are valid, reliable, and reproducible.
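
For readers who want to probe this themselves, here is a minimal sketch that fetches a paper’s public Altmetric record and prints the reported Attention Score alongside its visible component counts. The response field names are assumptions about the public v1 API, and the DOI is a placeholder; since the score’s weighting is undocumented, this surfaces only the inputs, not the formula.

```python
# Sketch: compare a paper's reported Altmetric Attention Score with the
# visible component counts in its public record. Field names below are
# assumptions about the v1 API response; the DOI is a placeholder.
import requests

def fetch_altmetric_record(doi: str) -> dict:
    """Fetch the public Altmetric record for a DOI (no API key required)."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    record = fetch_altmetric_record("10.1234/example-doi")  # placeholder DOI
    print("Reported Attention Score:", record.get("score"))
    # Component counts (assumed field names) that feed the score:
    for field in ("cited_by_tweeters_count", "cited_by_fbwalls_count",
                  "cited_by_feeds_count", "cited_by_msm_count"):
        print(field, "=", record.get(field, 0))
```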

The ResearchGate Score: a good example of a bad metric

According to ResearchGate, the academic social networking site, its RG Score is “a new way to measure your scientific reputation”. With such high aims, Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at this opaque metric. By reverse engineering the score, they find that significant weight is given to ‘impact points’ – a metric similar to the widely discredited journal impact factor. Transparency is the only way scholarly metrics can be put into context, and the only way their biases – which are inherent in all socially created metrics – can be uncovered.
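
As an illustration of how reverse engineering an opaque composite score might proceed, here is a minimal sketch that regresses observed scores on the visible per-profile inputs and inspects the fitted weights. The numbers are invented and the assumption of a roughly linear model is mine; this is not ResearchGate’s formula or the authors’ exact method.

```python
# Sketch: infer the implicit weights of an opaque composite score by
# ordinary least squares on visible per-profile inputs. All numbers are
# fabricated for illustration; a real attempt would gather many profiles
# and check residuals before trusting the fit.
import numpy as np

# Columns: impact points, publications, questions answered, followers.
X = np.array([
    [12.0, 10,  2,  30],
    [45.0, 25,  0, 120],
    [ 3.5,  4, 15,  10],
    [60.0, 40,  5, 200],
    [ 0.0,  2, 40,   5],
])
y = np.array([14.2, 38.9, 9.1, 55.3, 8.7])  # observed scores (made up)

# Solve X @ w ~= y for the implicit weights w.
w, _residuals, _rank, _sv = np.linalg.lstsq(X, y, rcond=None)
for name, weight in zip(
        ["impact points", "publications", "questions", "followers"], w):
    print(f"{name:>15}: weight ~ {weight:.3f}")
```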

Gross Domestic Clean Water

Alternative text: why not measure Gross Domestic Clean Water, since clean water is more essential than Gross Domestic Product? If you’re not convinced, try going a couple of days without clean water, in any form. This word picture is dedicated to the public domain.

This is an alternative metric.

Altmetrics – thoughts about the purpose

Should the altmetrics community take a step back and reconsider what its main purpose, or research question, is? I would suggest that what we need is an alternative to the current power of the impact factor in assessing the work of scholars. This may or may not involve metrics at all; my suggestion, for starters, is a system that is far less reliant on metrics of any kind.

Having said that, here are some metrics studies that might actually be useful:
– Does an emphasis on quantity of publication increase duplication of content and/or reduce quality? On the latter point, this is what I have heard from senior experts in scholarly publishing, and I think both Brown and Harley touch on it in their reports, at least with respect to books: pushing scholars to publish two books rather than one to get tenure means pressure to publish in less time than it takes to write a good book. So pushing for quantity seems likely to correlate with reduced quality – a hypothesis worth testing.

One advantage of studying the downsides of pushing for quantity: if the hypothesis (quantity correlates negatively with quality) is correct, that is evidence that could reduce the workload of scholars – something I expect scholars would be likely to support.
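
One sketch of how the hypothesis could be tested: a Spearman rank correlation between publication counts and a quality proxy. The data below are invented, and the choice of proxy (a mean expert rating per work) is an assumption a real study would have to defend.

```python
# Sketch: test whether publication quantity correlates negatively with
# quality, using a Spearman rank correlation. Data and the quality proxy
# (mean expert rating per work, 1-5) are invented for illustration.
from scipy.stats import spearmanr

pubs_per_year = [1, 2, 2, 3, 4, 5, 6, 8]                  # hypothetical scholars
mean_quality  = [4.6, 4.4, 4.7, 4.1, 3.9, 3.8, 3.5, 3.2]  # hypothetical ratings

rho, p_value = spearmanr(pubs_per_year, mean_quality)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A clearly negative rho with a small p-value would be consistent with
# the hypothesis that pushing for quantity reduces quality.
```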

Other possibilities – scholars might want to know, about journals:
– average and range of time from submission to decision (see the sketch after this list)
– level of “peer” doing the peer review (grad student? senior professor?)
– extent and quality of contents (this has to be qualitative analysis; sampling makes sense)
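
For the first item, a minimal sketch of the turnaround statistic, computed from hypothetical (submission, first decision) date pairs; real data would have to come from the journal itself or be crowdsourced from authors.

```python
# Sketch: mean and range of submission-to-first-decision time for one
# journal. The date pairs are invented; real records would have to come
# from the journal or from authors.
from datetime import date
from statistics import mean

records = [  # (submitted, first decision) pairs -- hypothetical
    (date(2013, 1, 10), date(2013, 4, 2)),
    (date(2013, 2, 1),  date(2013, 7, 15)),
    (date(2013, 3, 5),  date(2013, 5, 20)),
]

days = [(decided - submitted).days for submitted, decided in records]
print(f"mean: {mean(days):.0f} days, range: {min(days)}-{max(days)} days")
```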

Shifting from a print-based scholarly communication system to an open access knowledge commons, while retaining or increasing quality and reducing costs, is possible – but it’s not easy. It is worth taking the time to think things through and get at least some of it right.