Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006
Stephen J Bensman
notsjb at LSU.EDU
Thu Mar 9 12:17:38 EST 2006
The literature is full of these screw-ups. Perhaps the funniest example is
what happened with music, which was evaluated on the same basis as physics.
However, musicians play music and do not write papers. Therefore, the
Juilliard School of Music was rated very low. It refused to undergo another
such humiliating experience.
Probably the most important example, the one causing the biggest foul-up,
involved the biosciences. This involved the error under discussion here.
Ratings were traditionally done on an organizational basis. However, the
biosciences are organized differently at different institutions. Some have
med schools, some do not; some have agricultural schools, some do not. The
organizational basis caused an inability to rate comparable sets at
different universities, and this severely affected LSU. In 1981 LSU put
two small departments in the College of Basic Sciences up for ratings, and
these were creamed. But in 1993 it was decided to base the ratings on
subject categories instead of on organizational units. As a result, LSU
put up for ratings all its bioscientists, not only those in the College of
Basic Sciences but also all those at Vet Med and in the College of
Agriculture on the Baton Rouge campus. Since scientific significance is a
function of size, LSU jumped in the rankings in a probabilistically
impossible fashion. LSU had its med schools at New Orleans and Shreveport
rated separately, and if these had been thrown into the mix, its rankings
would have been even higher. So it seems that from 1910 through 1993 all
bioscience ratings were incorrect due to comparing incomparable sets.
Hope you found this bit of personal experience interesting.
SB
"Boyack, Kevin W" <kboyack at SANDIA.GOV>@LISTSERV.UTK.EDU> on 03/09/2006
09:31:26 AM
Please respond to ASIS&T Special Interest Group on Metrics
<SIGMETRICS at LISTSERV.UTK.EDU>
Sent by: ASIS&T Special Interest Group on Metrics
<SIGMETRICS at LISTSERV.UTK.EDU>
To: SIGMETRICS at LISTSERV.UTK.EDU
cc: (bcc: Stephen J Bensman/notsjb/LSU)
Subject: Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and Herbert
Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006
Stephen,
Can you point me to any references that would quantify your statement:
"What the Americans have found is that no matter how carefully you do
it, you always crap it up somehow."
I would love to read about the lessons learned.
Thanks,
Kevin
-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
Sent: Thursday, March 09, 2006 7:28 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and Herbert
Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006
Loet,
That is pretty basic, but the problem is that, due to Bradford's Law and
Garfield's Law, the definition of such sets is impossible. You are always
going to have exogenous variables if you use citations. If you are going
to use citations in evaluations, they must be used together with other
variables--the best being expert ratings, if such are available. Then
you can check for extreme outliers indicating sources of distortion.
The evaluation must be specific to the scientists being evaluated.
It is not possible to define mathematically sets that are universally
applicable.
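As a minimal sketch of that outlier check -- with invented program data, a
plain least-squares fit, and an assumed |z| > 2 cutoff, not any actual ACE
or NRC procedure -- one could regress citation counts on expert ratings and
flag the programs whose citations deviate most from the trend:

    import numpy as np

    # Invented data: mean expert rating (1-5 scale) and citation count
    # for ten hypothetical programs; the last one is deliberately inflated.
    expert = np.array([4.2, 3.8, 2.5, 4.9, 3.1, 2.0, 3.5, 2.8, 4.0, 4.5])
    cites = np.array([300, 260, 130, 390, 200, 80, 240, 160, 290, 900])

    # Least-squares line: cites ~ intercept + slope * expert
    slope, intercept = np.polyfit(expert, cites, 1)
    resid = cites - (intercept + slope * expert)

    # Standardized residuals; |z| > 2 marks a potential source of distortion
    z = resid / resid.std(ddof=2)
    for i, zi in enumerate(z):
        if abs(zi) > 2:
            print(f"program {i}: citations diverge from expert rating (z = {zi:+.1f})")

The flag does not say which measure is right; it only marks where citations
and expert judgment diverge, so the source of the distortion can be
investigated.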
The one thing that really bothers me about European research is that it
seems to assume that citations are valid measures of quality. It then
concentrates on finding some mathematical technique supposedly capable
of measuring quality. This research seems woefully short of studies of
the opinions of actual scientists as well as of the institutional and
social bases of citations. It seems to boil down to a fascination with
new gimmickry--the latest being the present fad for the Hirsch index.
Compared with the work done by the American Council on Education and the
US National Research Council, it is quite crude--even the vaunted British
RAE. What the Americans have found is that no matter how carefully you
do it, you always crap it up somehow.
SB
Loet Leydesdorff <loet at LEYDESDORFF.NET> on 03/09/2006
12:25:45 AM
Please respond to ASIS&T Special Interest Group on Metrics
<SIGMETRICS at LISTSERV.UTK.EDU>
Sent by: ASIS&T Special Interest Group on Metrics
<SIGMETRICS at LISTSERV.UTK.EDU>
To: SIGMETRICS at LISTSERV.UTK.EDU
cc: (bcc: Stephen J Bensman/notsjb/LSU)
Subject: Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and Herbert
Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006
> What's not to laugh about it? There probably is no better way to do
> it.
>
> SB
Let me then repeat the problem: comparisons in terms of impact factors,
etc., are valid only within cognitive domains with common citation and
publication practices. In other words, citation graphs among journals
have different densities, and this affects the impact factors in the
corresponding domains. For example, impact factors of immunology
journals are much higher than impact factors of toxicology journals.
The delineation of the sets within which one can compare thus matters. We
know that this delineation cannot be perfect, but it matters how good
it is.
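As a small numerical illustration of this point -- with invented journals,
fields, and impact factors, not figures from the Bollen et al. paper --
normalizing each journal's impact factor by the mean of its own field can
reverse a raw comparison:

    from collections import defaultdict

    # Invented impact factors: (field, IF) per journal
    journals = {
        "Immunol A": ("immunology", 8.0),
        "Immunol B": ("immunology", 6.0),
        "Toxicol A": ("toxicology", 3.5),
        "Toxicol B": ("toxicology", 2.5),
    }

    # Mean impact factor per field
    by_field = defaultdict(list)
    for field, impact in journals.values():
        by_field[field].append(impact)
    field_mean = {f: sum(v) / len(v) for f, v in by_field.items()}

    # Field-normalized impact: 1.0 = average for the journal's own field
    for name, (field, impact) in journals.items():
        print(f"{name}: raw IF {impact}, normalized {impact / field_mean[field]:.2f}")

    # Toxicol A (raw IF 3.5) now outranks Immunol B (raw IF 6.0):
    # 3.5/3.0 = 1.17 versus 6.0/7.0 = 0.86.

Assigning a journal to the wrong field set would flip its normalized score,
which is why the quality of the delineation matters even though it cannot
be perfect.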
Increasingly, evaluation commissions and scientometric researchers seem to
assume that the ISI subject categories are a valid delineation of the
domains within which one can make comparisons. The article by Bollen et
al. was a case in point.
With best wishes,
Loet
________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal
48, 1012 CX Amsterdam.
Tel.: +31-20-525 6598; fax: +31-20-525 3681; loet at leydesdorff.net;
http://www.leydesdorff.net/