"Academics strike back at spurious rankings" (Nature,31 May)

"Enrique Wulff (Cádiz. CSIC)" enrique.wulff at ICMAN.CSIC.ES
Wed Jun 6 07:37:03 EDT 2007


Good morning all

At the moment, the choice of the institution as
the evaluation unit seems compromised by this
RAE-associated pedagogical innovation action.
In the past Jan Vlachy made a sincere effort to
fit criteria to the individual, and preference
has been given to the department as an
unavoidable priority, e.g. by the Andalusian
administration (http://www.ucua.es). Did the
technical, or rather political, reasons behind
the institution-oriented decision provoke debate?
I do not know. Do you have any bibliography on this?

I look forward to your opinions.
Enrique.
http://www.ucm.es/BUCM/revistas/inf/02104210/articulos/DCIN9595110245A.PDF

At 12:34 05/06/2007, you wrote:
>Administrative info for SIGMETRICS (for example unsubscribe):
>http://web.utk.edu/~gwhitney/sigmetrics.html
>
>Dear all:
>
>As our ranking (Webometrics) is listed in the 
>Nature paper, we wish to add some points to the debate:
>
>- Rankings have been very successful in certain
>areas. The original purpose of the Chinese
>ranking was to help students choose foreign
>universities, and from our own data this is
>still its major use, so there is clearly a gap
>to be filled. In another area, the Shanghai data
>are probably behind the major reorganization of
>the French university system in 2006. As the Web
>data include a larger number of institutions in
>developing countries, we have noticed similar
>debates in the Middle East and South East Asia.
>
>- Our main aim in preparing the Web ranking was
>not to classify institutions but to encourage
>Web publication, going even further than the
>current OA initiatives, which focus on formal
>scholarly publication: we call for open archives
>of raw data, teaching material, multimedia
>resources, software and other academic and
>para-academic material. It was a great surprise
>to discover that there is already a big academic
>digital divide in Web content, one that affects
>not only developing regions but also many European countries.
>
>- Given the naming system of Web domains, the
>institution is a "natural" unit in webometric
>analysis. An additional advantage is that
>webpages reflect many more activities than
>scientific publications alone, but unfortunately
>in a way that makes it difficult to discriminate
>specific contributions. As the evaluation of
>universities should consider aspects other than
>research output, Web indicators could be
>combined with other indicators, as we intend to do in the near future.
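>
>As an illustration of the domain-as-unit idea, a minimal sketch in
>Python (toy URLs; the suffix heuristic is a simplification of ours,
>and real work would need the Public Suffix List):
>
>    from urllib.parse import urlparse
>    from collections import Counter
>
>    pages = [
>        "http://www.ox.ac.uk/research/report.pdf",
>        "http://physics.ox.ac.uk/preprints/p1.html",
>        "http://www.unizar.es/docs/tesis.pdf",
>    ]
>
>    def institution(url):
>        # Reduce a hostname to its registrable institutional domain.
>        # Keep three labels for second-level country domains like ac.uk.
>        host = urlparse(url).hostname.split(".")
>        n = 3 if host[-2] in {"ac", "edu", "co"} else 2
>        return ".".join(host[-n:])
>
>    # Pages aggregate naturally at the institutional domain:
>    print(Counter(institution(u) for u in pages))
>    # Counter({'ox.ac.uk': 2, 'unizar.es': 1})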
>
>A new edition of our ranking
>(http://www.webometrics.info) covering over 4000
>universities worldwide is scheduled for July.
>Any comments and suggestions are welcome.
>
>Best regards,
>
>
>
>Loet Leydesdorff wrote:
>>
>>OK. Let's assume that we need a structural equation model in which journals
>>are one of the predictive variables. Since one wishes (in the Nature
>>article) to compare Oxford and Cambridge with Lausanne and Leiden, nation
>>should be another independent variable. You also wish to take expert
>>judgement (peer review) as a predictor?
>>But what would be the dependent (predicted) variable?
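>>
>>For concreteness, a minimal sketch of how journal and nation could be
>>coded as predictors (synthetic data; the dependent variable below is a
>>mere placeholder for whatever one decides should be predicted):
>>
>>    import numpy as np
>>    import pandas as pd
>>
>>    df = pd.DataFrame({
>>        "journal":   ["Nature", "PLoS", "Nature", "JASIST"],
>>        "nation":    ["UK", "CH", "NL", "UK"],
>>        "citations": [120, 40, 95, 30],   # placeholder dependent variable
>>    })
>>    # Dummy-code the categorical predictors and fit by least squares.
>>    X = pd.get_dummies(df[["journal", "nation"]], drop_first=True).astype(float)
>>    X.insert(0, "intercept", 1.0)
>>    beta, *_ = np.linalg.lstsq(X.values, df["citations"].values, rcond=None)
>>    print(dict(zip(X.columns, beta.round(2))))
>>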
>>With best wishes,
>>
>>Loet
>>
>>________________________________
>>
>>Loet Leydesdorff
>>Amsterdam School of Communications Research (ASCoR)
>>Kloveniersburgwal 48, 1012 CX Amsterdam
>>Tel.: +31-20-525 6598; fax: +31-20-525 3681
>>loet at leydesdorff.net ; http://www.leydesdorff.net/
>>
>>
>>
>>>-----Original Message-----
>>>From: ASIS&T Special Interest Group on Metrics 
>>>[mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stevan Harnad
>>>Sent: Sunday, June 03, 2007 1:57 PM
>>>To: SIGMETRICS at listserv.utk.edu
>>>Subject: Re: [SIGMETRICS] "Academics strike 
>>>back at spurious rankings" (Nature, 31 May)
>>>
>>>
>>>On Sun, 3 Jun 2007, Loet Leydesdorff wrote:
>>>
>>>
>>>>>     "All current university rankings are flawed to some extent; most,
>>>>>     fundamentally,"
>>>>>
>>>>The problem is that institutions are not the right unit of analysis for
>>>>the bibliometric comparison because citation and publication practices
>>>>vary among disciplines and specialties. Universities are mixed bags.
>>>>
>>>Yes and no. It is correct that the right unit 
>>>of analysis is the field or even
>>>subfield of the research being compared. But 
>>>it is also true that in comparing
>>>universities one is also comparing their field and subfield coverage.
>>>The general way to approach this problem is with a rich and diverse set of
>>>predictor metrics, in a joint multiple 
>>>regression equation that can adjust the
>>>weightings of each depending on the field, and on the use to which the
>>>spectrum of metrics is being put: There can, for example, be "discipline
>>>coverage" metrics (from narrow to wide) as well as "field size" and
>>>"institutional size" metrics, whose regression weights can be adjusted
>>>depending on what it is that the equation is being used to predict,
>>>and hence to rank. The differential weightings can be validated against
>>>other means of ranking (including expert judgments).
>>>
>>>     Harnad, S. (2007) Open Access Scientometrics and the UK Research
>>>     Assessment Exercise. Invited Keynote, 11th Annual Meeting of the
>>>     International Society for Scientometrics and Informetrics. Madrid,
>>>     Spain, 25 June 2007 http://arxiv.org/abs/cs.IR/0703131
>>>
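>>>By way of a toy sketch (synthetic data; the metric names are only
>>>placeholders), the weights of such a joint equation can be fitted
>>>directly against an external criterion such as expert scores:
>>>
>>>    import numpy as np
>>>
>>>    rng = np.random.default_rng(0)
>>>    n = 50                                  # hypothetical research units
>>>    metrics = rng.random((n, 4))            # e.g. citations, downloads,
>>>                                            # discipline coverage, size
>>>    # Pretend expert judgment weights the metrics 0.5/0.3/0.1/0.1:
>>>    expert = metrics @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(0, 0.05, n)
>>>
>>>    X = np.column_stack([np.ones(n), metrics])
>>>    weights, *_ = np.linalg.lstsq(X, expert, rcond=None)
>>>    ranking = np.argsort(-(X @ weights))    # units ranked by predicted score
>>>    print(weights.round(2))                 # approximately recovers the weights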
>>>
>>>>Our Leiden colleagues try to correct for this by normalizing on the
>>>>journal set which the group uses itself, but one can also ask whether
>>>>the group is using the best possible set given its research profile.
>>>>Should one not first determine a journal set and then compare groups
>>>>within it?
>>>
>>>The three things that are needed are (1) a far richer and more diverse
>>>set of potential metrics, (2) assurance that like is being compared with
>>>like, and (3) validation of the ranking against face-valid external
>>>criteria, so that the metrics can eventually function as benchmarks and
>>>norms.
>>>
>>>None of this can be done a priori; the methodology is similar to that of
>>>validating batteries of psychometric or biometric tests:
>>>Correlate the joint set of metrics with 
>>>external, face-valid criteria, and adjust their respective weights accordingly.
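>>>
>>>A minimal sketch of that validation step (toy rankings, standing in for
>>>a metric-based and a peer-review ordering of the same units):
>>>
>>>    from scipy.stats import spearmanr
>>>
>>>    metric_rank = [1, 2, 3, 4, 5, 6]    # ranking from the metric equation
>>>    expert_rank = [2, 1, 3, 5, 4, 6]    # hypothetical peer-review ranking
>>>    rho, p = spearmanr(metric_rank, expert_rank)
>>>    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")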
>>>
>>>It is unlikely, however, that the relevant and predictive frame of
>>>reference and basis of comparison will be journal sets. Breadth/narrowness
>>>of journal coverage is just one among many, many potential parameters. The
>>>interest is in comparing researchers and research groups or institutions,
>>>within or across fields. The journal does carry some predictive and
>>>normative power in this, and it is one indirect way of equating for field,
>>>but it is one among many ways that one might wish to weight -- or equate
>>>-- metrics, particularly in an Open Access database in which all journals
>>>(and all individual articles and all individual researchers), with their
>>>respective download, citation, co-citation, hub/authority, consanguinity,
>>>chronometric, and many other metrics, are all available for weighting,
>>>equating, and validating.
>>>
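>>>To make just one of those metrics concrete: hub/authority scores can be
>>>computed with Kleinberg's HITS algorithm on a citation graph (toy
>>>adjacency matrix below; a real analysis would run on the OA corpus):
>>>
>>>    import numpy as np
>>>
>>>    # A[i, j] = 1 if paper i cites paper j (synthetic example)
>>>    A = np.array([[0, 1, 1, 0],
>>>                  [0, 0, 1, 0],
>>>                  [0, 0, 0, 1],
>>>                  [1, 0, 0, 0]], dtype=float)
>>>
>>>    hubs = np.ones(4)
>>>    for _ in range(50):                  # power iteration to convergence
>>>        auths = A.T @ hubs               # cited by good hubs -> authority
>>>        auths /= np.linalg.norm(auths)
>>>        hubs = A @ auths                 # cites good authorities -> hub
>>>        hubs /= np.linalg.norm(hubs)
>>>    print("authorities:", auths.round(2))
>>>    print("hubs:       ", hubs.round(2))
>>>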
>>>What we have to remember is that the imminent Open Access (OA) world
>>>is incomparably wider and richer -- and more open -- than the narrow,
>>>impoverished classical-ISI world to which we were constrained in the
>>>Closed Access paper-based era.
>>>
>>>
>>>>Furthermore, Brewer et al. (2001) made the point that one should also
>>>>distinguish between prestige and reputation. Reputation is field
>>>>specific; prestige is more historical. (Brewer, D. J., Gates, S. M., &
>>>>Goldman, C. A. (2001). In Pursuit of Prestige: Strategy and Competition
>>>>in U.S. Higher Education. Piscataway, NJ: Transaction Publishers,
>>>>Rutgers University.)
>>>
>>>This is still narrow journal- and journal-average-centred thinking. Yes,
>>>journals will still be the entities in which 
>>>papers are published, and journals
>>>will vary both in their field of coverage and 
>>>their quality, and this can and
>>>will be taken into account. But those 
>>>variables constitute only a small fraction
>>>of OA scientometric and semiometric space.
>>>
>>>     Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
>>>     Research Web: A Preview of the Optimal and the Inevitable, in Jacobs,
>>>     N., Eds. Open Access: Key Strategic, Technical and Economic Aspects,
>>>     Chandos. http://eprints.ecs.soton.ac.uk/12453/
>>>
>>>
>>>>Many of the evaluating teams are institutionally dependent on the
>>>>contracts for the evaluations. Quis custodiet custodes?
>>>
>>>OA itself is transparency's, diversity's and equitability's best defender.
>>>
>>>Stevan Harnad
>>>AMERICAN SCIENTIST OPEN ACCESS FORUM:
>>>http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
>>>     http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
>>>
>>>UNIVERSITIES and RESEARCH FUNDERS:
>>>If you have adopted or plan to adopt a policy of providing Open Access
>>>to your own research article output, please describe your policy at:
>>>     http://www.eprints.org/signup/sign.php
>>>     http://openaccess.eprints.org/index.php?/archives/71-guid.html
>>>     http://openaccess.eprints.org/index.php?/archives/136-guid.html
>>>
>>>OPEN-ACCESS-PROVISION POLICY:
>>>     BOAI-1 ("Green"): Publish your article in a suitable toll-access journal
>>>     http://romeo.eprints.org/
>>>OR
>>>     BOAI-2 ("Gold"): Publish your article in an open-access journal
>>>     if/when a suitable one exists.
>>>     http://www.doaj.org/
>>>AND
>>>     in BOTH cases self-archive a supplementary version of your article
>>>     in your own institutional repository.
>>>     http://www.eprints.org/self-faq/
>>>     http://archives.eprints.org/
>>>     http://openaccess.eprints.org/
>>>
>>>
>>
>>
>
>
>--
>***************************************
>Isidro F. Aguillo
>isidro @ cindoc.csic.es
>Ph:(+34) 91-5635482 ext. 313
>
>Cybermetrics Lab
>CINDOC-CSIC
>Joaquin Costa, 22
>28002 Madrid. SPAIN
>
>www.webometrics.info
>www.cindoc.csic.es/cybermetrics
>internetlab.cindoc.csic.es
>****************************************


