Ranking Web (Webometrics) of Universities
Loet Leydesdorff
loet at LEYDESDORFF.NET
Sat Sep 1 07:06:04 EDT 2012
> But asking for it is not enough; action is needed. For example, consider the huge impact of the publication of the Shanghai ranking (ARWU) in 2003. We can probably agree that it is merely high-school-level bibliometrics, but that is not the important question. In my humble opinion, the success of ARWU is probably an illustration of a collective failure of our discipline.
Dear Isidro,
We are making progress and reaching agreements in the field. For example, since Ahlgren et al. (2003) the field has increasingly used the cosine as a similarity measure. (Even I have given up on the superior Kullback-Leibler divergence, and the cosine is implemented in my software.) Similarly, for a year or so one has been able to witness a consensus on the top-10% most-cited papers as an excellence indicator: Granada and Leiden use it in their rankings; you use it; and Lutz and I use it in the overlays to Google Maps. We recently had a special issue of Scientometrics debating whether the impact factor is perhaps obsolete. Etc.
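(As a side note, the cosine measure mentioned above is simple enough to state in a few lines. The sketch below, with made-up co-citation vectors, is only an illustration; the vector values are hypothetical and not from any of the datasets discussed here.)

```python
import math

def cosine(u, v):
    """Cosine similarity between two citation/co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Two hypothetical co-citation profiles of journals A and B:
print(cosine([3, 0, 2, 1], [1, 1, 2, 0]))
```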
We also know much more about how to count and evaluate citation distributions over publications. In my opinion, averaging is not such a good idea, but adding citation numbers to publication numbers—as you seem to advocate (?)—is perhaps even worse.
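(To illustrate the point about averaging: citation distributions are heavily skewed, so a mean is dominated by a few highly cited papers. The toy distribution below is invented for illustration only.)

```python
import statistics

# Hypothetical skewed citation distribution: most papers little cited, one hit.
citations = [0, 1, 1, 2, 2, 3, 4, 5, 8, 120]

print(statistics.mean(citations))    # 14.6 -- dominated by the single highly cited paper
print(statistics.median(citations))  # 2.5  -- closer to the typical paper
```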
In my opinion, one should mistrust any indicator for which no uncertainty (error bar) can be specified.
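(One way to attach such an error bar, for instance to the top-10% excellence indicator, is a percentile bootstrap over a unit's papers. The sketch below assumes a hypothetical list of citation counts and a given global top-10% threshold; it is one possible approach, not a description of any particular ranking's method.)

```python
import random

def pp_top10(citations, threshold):
    """Share of a unit's papers at or above the global top-10% citation threshold."""
    return sum(c >= threshold for c in citations) / len(citations)

def bootstrap_ci(citations, threshold, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap interval for the top-10% share (hypothetical data)."""
    rng = random.Random(seed)
    n = len(citations)
    stats = sorted(
        pp_top10([rng.choice(citations) for _ in range(n)], threshold)
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```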
Best wishes,
Loet