“Academics strike back at spurious rankings”
D Butler, Nature 447, 514-515 (31 May 2007)
This news item in Nature lists some of the (very valid) objections to the many unvalidated university rankings — both subjective and objective — that are in wide use today.
These problems are all the more reason for extending Open Access (OA) and developing OA scientometrics. OA scientometrics will provide open, validatable and calibratable metrics for research, researchers and institutions in each field: a far richer, more sensitive and more equitable spectrum of metrics than the few weak, unvalidated measures in use today.
Some research groups doing relevant work on this are, in the UK: (1) our own OA scientometrics group (Les Carr, Tim Brody, Alma Swan, Stevan Harnad) at Southampton (and UQaM, Canada), and our collaborators Charles Oppenheim (Loughborough) and Arthur Sale (Tasmania); (2) Mike Thelwall (Wolverhampton); in the US: (3) Johan Bollen & Herbert van de Sompel at LANL; and in the Netherlands: (4) Henk Moed & Anthony van Raan (Leiden; cited in the Nature news item).
Below are excerpts from the Nature article, followed by some references.
Universities seek reform of ratings
[A] group of US colleges [called for a] boycott [of] the most influential university ranking in the United States… Experts argue that these are based on dubious methodology and spurious data, yet they have huge influence…
“All current university rankings are flawed to some extent; most, fundamentally,”
The rankings in the U.S. News & World Report and those published by the British Times Higher Education Supplement (THES) depend heavily on surveys of thousands of experts – a system that some contest. A third popular ranking, by Jiao Tong University in Shanghai, China, is based on more quantitative measures, such as citations, numbers of Nobel prizewinners and publications in Nature and Science. But even these measures are not straightforward.
Thomson Scientific’s ISI citation data are notoriously poor for use in rankings; names of institutions are spelled differently from one article to the next, and university affiliations are sometimes omitted altogether. After cleaning up ISI data on all UK papers for such effects… the true number of papers from the University of Oxford, for example, [was] 40% higher than listed by ISI…
Researchers at Leiden University in the Netherlands have similarly recompiled the ISI database for 400 universities: half a million papers per year. Their system produces various rankings based on different indicators. One, for example, weights citations on the basis of their scientific field, so that a university that does well in a heavily cited field doesn’t get an artificial extra boost.
The German Center for Higher Education Development (CHE) also offers rankings… for almost 300 German, Austrian and Swiss universities… the CHE is expanding the system to cover all Europe.
The US Commission on the Future of Higher Education is considering creating a similar public database, which would offer competition to the U.S. News & World Report.
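The field weighting the Leiden researchers use can be illustrated in a few lines. The sketch below is a simplified rendition of the idea (divide each paper's citations by the mean citation rate of its field); the fields and numbers are invented for illustration, not Leiden's actual data or full indicator.

```python
# Simplified field-normalized citation impact: each paper's citation
# count is divided by the mean citations of its field, so papers in
# heavily cited fields get no artificial extra boost.
# (field, citations) pairs below are hypothetical examples.
papers = [
    ("medicine", 40), ("medicine", 20),
    ("math", 4), ("math", 2),
]

# Collect citation counts per field, then compute each field's mean.
by_field = {}
for field, cites in papers:
    by_field.setdefault(field, []).append(cites)
field_mean = {f: sum(cs) / len(cs) for f, cs in by_field.items()}

# Normalize: 1.0 means "average for its field", regardless of field.
normalized = [cites / field_mean[field] for field, cites in papers]
print(normalized)
```

On this toy data the medicine paper with 40 citations and the math paper with 4 citations both come out at the same normalized score, even though their raw counts differ tenfold.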
Isidro Aguillo is the Scientific Director of the Laboratory of Quantitative Studies of the Internet at the Centre for Scientific Information and Documentation of the Spanish National Research Council (CSIC), and editor of Cybermetrics, the International Journal of Scientometrics, Informetrics and Bibliometrics.
In a posting to the American Scientist Open Access Forum, Dr. Aguillo makes the very valid point (in response to Declan Butler’s Nature news article about the use of unvalidated university rankings) that web metrics provide new and potentially useful information not available elsewhere. This is certainly true. Web metrics should accordingly be among the candidate metrics included in the multiple regression equation that needs to be tested and validated, in order to weight each component metric and to develop norms and benchmarks for reliable, widespread use in ranking and evaluation.
Among other potentially useful sources of candidate metrics are:
Harzing’s Google-Scholar-based metrics
and of course Google Scholar itself.
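The weighting-by-multiple-regression idea above can be sketched concretely: regress an already-validated criterion (say, peer-review panel rankings) on the candidate metrics, and read off each metric’s weight from the fitted coefficients. Everything below is a hypothetical illustration with synthetic data; the metric names and weights are assumptions, not validated values.

```python
# Sketch: estimating weights for candidate metrics by multiple
# regression against an external validation criterion.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of departments/researchers

# Columns: three candidate metrics (e.g. citations, downloads,
# web links) -- names are illustrative only.
X = rng.normal(size=(n, 3))

# Synthetic criterion standing in for a validated ranking (e.g.
# RAE panel scores): a weighted mix of the metrics plus noise.
true_w = np.array([0.6, 0.3, 0.1])
y = X @ true_w + rng.normal(scale=0.1, size=n)

# Ordinary least squares (with an intercept column) recovers the
# weight each candidate metric should carry in the equation.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(w[:3], 2))  # estimated weight per metric
```

In practice the criterion variable would be an existing validated evaluation, and the fitted weights (with their confidence intervals) would then serve as the calibrated coefficients, with norms and benchmarks derived per field.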
Bollen, J. & Van de Sompel, H. (2006) Mapping the structure of science through usage. Scientometrics 69(2).
Hardy, R., Oppenheim, C., Brody, T. & Hitchcock, S. (2005) Open Access Citation Information. ECS Technical Report.
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs linked to university eprint archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Shadbolt, N., Brody, T., Carr, L. & Harnad, S. (2006) The Open Research Web: A preview of the optimal and the inevitable. In Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. Invited Keynote, 11th Annual Meeting of the International Society for Scientometrics and Informetrics, Madrid, Spain, 25 June 2007.
Kousha, K. & Thelwall, M. (2006) Google Scholar citations and Google Web/URL citations: A multi-discipline exploratory analysis. In Proceedings of the International Workshop on Webometrics, Informetrics and Scientometrics & Seventh COLLNET Meeting, Nancy, France.
Moed, H.F. (2005) Citation Analysis in Research Evaluation. Dordrecht: Springer.
van Raan, A. (2007) Bibliometric statistical properties of the 100 largest European universities: Prevalent scaling rules in the science system. Journal of the American Society for Information Science and Technology (submitted).
American Scientist Open Access Forum