Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006

Boyack, Kevin W kboyack at SANDIA.GOV
Thu Mar 9 10:31:26 EST 2006


Stephen,

Can you point me to any references that would quantify your statement:

"What the Americans have found is that no matter how carefully you do
it, you always crap it up somehow."

I would love to read about the lessons learned.

Thanks,
Kevin



-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
Sent: Thursday, March 09, 2006 7:28 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and Herbert
Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006


Loet,
That is pretty basic, but the problem is that, due to Bradford's Law and
Garfield's Law, the definition of such sets is impossible.  You are
always going to have exogenous variables if you use citations.  If you
are going to use citations in evaluations, they must be used together
with other variables--the best being expert ratings, if such are
available.  Then you can check for extreme outliers indicating sources
of distortion.  The evaluation must be specific to those scientists
being evaluated.  It is not possible to define mathematically sets that
are universally applicable.

The one thing that really bothers me about European research is that it
seems to assume that citations are valid measures of quality.  It then
concentrates on finding some mathematical technique supposedly capable
of measuring quality.  This research seems woefully short of studies of
the opinions of actual scientists, as well as of the institutional and
social bases of citations.  It seems to boil down to a fascination with
new gimmickry--the latest being the present fad with the Hirsch index.
Compared with the work done by the American Council on Education and the
US National Research Council, it is quite crude--even the vaunted
British RAE.  What the Americans have found is that no matter how
carefully you do it, you always crap it up somehow.

SB




Loet Leydesdorff <loet at LEYDESDORFF.NET> on 03/09/2006 12:25:45 AM

Please respond to ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at LISTSERV.UTK.EDU>

Sent by:    ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at LISTSERV.UTK.EDU>


To:    SIGMETRICS at LISTSERV.UTK.EDU
cc:     (bcc: Stephen J Bensman/notsjb/LSU)

Subject:    Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and
Herbert
       Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006


> What's not to laugh about it?  There probably is no better way to do
> it.
>
> SB

Let me then repeat the problem: comparisons in terms of impact factors,
etc., are valid only within cognitive domains with common citation and
publication practices. In other words, citation graphs among journals
have different densities and this affects the impact factors in the
corresponding domains. For example, impact factors of immunology
journals are much higher than impact factors of toxicology journals.
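
To make the density point concrete, here is a minimal sketch of the
standard two-year impact factor: citations received in a given year to
a journal's items from the previous two years, divided by the number of
those citable items.  The counts below are invented purely for
illustration, not taken from any dataset:

    # Hypothetical sketch of the two-year impact factor (invented
    # counts, not real data).
    def impact_factor(citations_to_prev_two_years: int,
                      citable_items_prev_two_years: int) -> float:
        """Citations received this year to items published in the
        previous two years, divided by the number of those items."""
        return citations_to_prev_two_years / citable_items_prev_two_years

    # Two journals publishing the same number of items; the one in the
    # denser citation graph collects far more citations per item, even
    # if the underlying quality is comparable.
    print(impact_factor(4500, 600))  # 7.5  (high-density field)
    print(impact_factor(900, 600))   # 1.5  (low-density field)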

The delineation of the sets within which one can compare thus matters.
We know that this delineation cannot be perfect, but it matters how good
it is.  Increasingly, evaluation commissions and scientometric
researchers seem to assume that the ISI subject categories are a valid
delineation of the domains within which one can make comparisons. The
article by Bollen et al. was a case in point.

With best wishes,


Loet
________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal
48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ;
http://www.leydesdorff.net/


