Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006
Stephen J Bensman
notsjb at LSU.EDU
Mon Mar 6 17:35:10 EST 2006
Loet,
You and I are perhaps looking at two different aspects of the problem. I
am looking at it from the perspective of a librarian trying to decide which
journals should be provided with permanent access on a subscription basis
and which should be accessed through some form of intermittent document
delivery. Therefore, I am interested not only in prestige but also in
functionality, i.e., what function does the journal serve--reporting of
research, reviewing of literature, assistance in teaching, provision of
current news, etc. Citation measures either capture one facet of
functionality--total citations seem to capture the reporting of
research, and impact factor the review literature--or fail to capture
functionality at all, i.e., teaching or the reporting of current news.
Total citations cannot capture the review literature, because review
journals are usually very small even though highly rated by scientists.
However, since impact factor captures both the review literature and
current research significance--which is usually the same as historical
research significance due to the stability of patterns--and since its
correlation with total citations is high enough that journals high on
both can be captured in a broad category robust against random error,
it seems to me that impact factor can capture two facets of
functionality, unlike total citations, which can capture only one. The
hypothesis remains to be tested.
Prestige appears to operate separately from functionality. The greatest
cause of variance in all four measures is their belonging to the category
of US association journals. The journals of the American Chemical Society
are dominant on all four measures. Through various evaluations of US
research-doctorate programs by peer ratings and citations, I can trace this
dominance to scientists employed by the traditionally elite US research
institutions. Thus, variance and prestige in all four measures are a
function of the social stratification system of US scientific institutions.
There remains the question of how foreign scientists relate to the US
social stratification system. If they form a part of it, the foreigners
can use ISI citations for evaluation and other purposes. If they do not
form part of it, then foreigners using ISI citations may only be rating
themselves by how much their work is being accepted by the scientists
within this system. I have no answer to this question.
I suppose that now you are more confused than ever. I did my best, but it
is complicated as all hell.
SB
Loet Leydesdorff <loet at LEYDESDORFF.NET> on 03/06/2006 03:12:54 PM
Please respond to ASIS&T Special Interest Group on Metrics
<SIGMETRICS at listserv.utk.edu>
Sent by: ASIS&T Special Interest Group on Metrics
<SIGMETRICS at listserv.utk.edu>
To: SIGMETRICS at listserv.utk.edu
cc: (bcc: Stephen J Bensman/notsjb/LSU)
Subject: Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and Herbert
Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1 9 Jan 2006
Dear Stephen,
I apologize if I dragged you into the discussion with an improper argument,
but I did not want to mention your idea of using total cites as an
indicator without providing a proper reference.
The reasoning in your posting is difficult for me to follow, but I look
forward to reading the full paper. My experience is that, on reading the
full paper, one begins to understand. I found your previous argument about using
total cites very convincing because of its high correlation with faculty
ratings and its orthogonality to the impact factor. It seemed to me that
the impact factor measures something very different from the prestige of a
journal.
(Embedded image moved to file: pic17086.gif)
Figure 1: Component plot in rotated space (sources: JCR, 1993; Bensman,
2001; forthcoming; Bensman & Wilder, 1998).
From: Visualization of the Citation Impact Environments of Scientific
Journals: An online mapping exercise, Journal of the American Society for
Information Science and Technology (forthcoming). <pdf-version>
With best wishes,
Loet
________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681;
loet at leydesdorff.net ; http://www.leydesdorff.net/
> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
> Sent: Monday, March 06, 2006 6:41 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez,
> and Herbert Van de Sompel "Journal Status"
> arXiv:cs.GL/0601030 v1 9 Jan 2006
>
> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
>
>
>
>
> Loet,
> I see that you have once again taken my name in vain and
> again given me the opportunity to spout my ideas on
> SIGMETRICS. I must admit that I have not read the paper you
> discuss, because my doctor warned me against reading too many
> such papers, since I am fairly close to OD-ing on them.
> However, the conclusions you mention do seem a little peculiar.
>
> Due to a detailed study of Gene Garfield's development and
> utilization of the impact factor, I am coming to change my mind
> on this measure somewhat. It is for rather complicated
> reasons, which I shall try to explain below.
>
> In general I think that there is too much random error in
> citation data for the use of such precise techniques as
> correlation--Pearson, Spearman, whatever. Much of this error
> results from exogenous citations due to an inability to define
> precise sets--a logical consequence of Bradford's Law of
> Scattering and Garfield's Law of Concentration. The impact
> factor suffers from a further source of error due to an
> inability to classify sources precisely as citable or
> non-citable--something on which honest persons can disagree.
> This inability severely affects the denominator of the impact
> factor equation. What is therefore needed is a technique
> that is crude and robust against such error. I have
> personally found it in the chi-square test of independence,
> which allows the conversion of citation measures into ordinal
> variables defined by broad categories. It also allows one to
> define the amount of error one is willing to accept, i.e.,
> upper 10% vs.
> upper 25%.
>
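> As a minimal sketch of the kind of test I mean (in Python with scipy;
> the counts are invented purely to illustrate the broad-category
> design, e.g. upper 10% on a citation measure crossed with whether the
> faculty recommended the journal):
>
> # Chi-square test of independence on a hypothetical 2x2 table.
> # Rows: journal in the upper 10% by total citations (yes/no).
> # Columns: journal recommended by the faculty (yes/no).
> from scipy.stats import chi2_contingency
>
> table = [[45,  15],    # upper 10% by citations: recommended / not
>          [55, 385]]    # lower 90% by citations: recommended / not
> chi2, p, dof, expected = chi2_contingency(table)
> print(chi2, p, dof)
>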
> Use of this chi-square test may vindicate impact factor by
> demonstrating that it has the same strong relationship to
> expert ratings as do total citations. As a matter of fact,
> it may be a superior measure in that it will not only capture
> the importance of research journals but also that of review
> journals. Close inspection of the top 10% of the journals
> recommended by the LSU chemistry faculty reveals it to be a
> balanced mix of research journals, review journals, and the
> main teaching journal of chemistry.
> In other words, most facets of journal importance are
> captured by this measure, whereas total citations capture
> mainly research, and impact factor captures chiefly the
> review journals. However, broadening the categories may
> cause impact factor to capture both research and review
> though not the teaching facet. In any case I am going to
> test this in the revision of the JASIST paper I am now engaged in.
>
> Impact factor has the ability to do this for the very reasons
> Seglen denounces it. His main case against it is based on the
> reasoning of the law of error and the role of the arithmetic
> mean in this law. This requires the normal distribution for
> the arithmetic mean to be an accurate estimate of central
> tendency. However, due to the highly skewed distributions
> with which we deal, the arithmetic mean is always way above
> the other estimates of central tendency such as the median or
> the geometric mean due to the high degree of variance caused by
> the dominant observations. Seglen's reasoning collapses once
> one realizes that a journal's or scientist's importance is
> not measured by central tendency but by the variance caused
> by the few important articles published by the journal or scientist.
> Therefore, scientific importance is the result of variance
> and not central tendency. The arithmetic mean, which impact
> factor attempts to estimate, better captures the variance.
>
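> As a small illustration, here is a sketch with an invented, highly
> skewed set of article citation counts, showing how far the arithmetic
> mean sits above the median and the geometric mean once a few dominant
> articles enter the picture:
>
> import numpy as np
>
> # Invented citation counts for one journal's articles: many barely
> # cited papers plus a handful of heavily cited ones.
> cites = np.array([0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 60, 85, 120])
>
> mean = cites.mean()                         # pulled up by the top articles
> median = np.median(cites)                   # ignores the dominant observations
> geo = np.exp(np.log(cites + 1).mean()) - 1  # geometric mean, shifted to allow zeros
> print(mean, median, geo)                    # about 19.1 vs. 2.0 vs. 3.9
>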
> To demonstrate, I have converted Garfield's constant for the
> year 1993 into binomial p and the Poisson lambda. The way I
> did this is in the attached Excel file. You will see the
> binomial p is a lousy 0.0003, which converts into a Poisson
> lambda or Garfield's constant of 2.15 for the year. This is
> the probability or the rate articles were cited in 1993 on
> the assumption of probabilistic homogeneity. However, since
> there is probabilistic heterogeneity, most articles have to
> have a citation rate below Garfield's constant. True to
> form, of the 5000 journals covered that year, 4500 journals
> were below Garfield's constant. 2.15 is an awfully small
> range to squeeze 4500 journals into and expect meaningful
> quantitative distinctions. Utilization of a central tendency
> measure puts one right smack in the middle of that tight
> range. Small as this may be, the probabilities and lambda
> were actually much smaller, for Garfield's constant is based
> on the set of articles actually cited that year, i.e., it is
> truncated on the left and does not take into account the
> articles that could have been cited but were not. I do not
> have the technical or intellectual ability to estimate this
> zero class. I do know that Sir Maurice Kendall backed off
> from the problem when he confronted it in Bradford's Law, and
> who the hell am I compared to Maurice Kendall. I wish that
> somebody would write an article understandable to simpletons
> on how to make such estimates. From my perspective, this
> would be one of the most important articles ever written.
>
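> For anyone who wants to retrace the arithmetic without the Excel file,
> here is a rough sketch of the conversion. The number of trials n below
> is an assumption chosen only so that n * p reproduces the figures
> quoted above (p of roughly 0.0003, lambda of roughly 2.15); the last
> line shows how large the uncited zero class would be if probabilistic
> homogeneity actually held, which it does not:
>
> import math
>
> # Poisson approximation to the binomial: lambda = n * p.
> p = 0.0003           # binomial p quoted above
> n = 7167             # assumed number of trials, so that n * p is about 2.15
> lam = n * p          # roughly Garfield's constant for 1993
>
> # Under homogeneity, the share of never-cited articles (the zero class
> # truncated away on the left) would simply be exp(-lambda).
> p_zero = math.exp(-lam)
> print(lam, p_zero)   # about 2.15 and 0.12
>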
> Sorry for the tirade, but I thought I'd float a few trial
> balloons to be shot down.
>
> SB
>
> (See attached file: GarConst.xls)
>
>
>
>
>
>
>
>
> Loet Leydesdorff <loet at LEYDESDORFF.NET> on 03/04/2006 07:14:57 AM
>
> Please respond to ASIS&T Special Interest Group on Metrics
> <SIGMETRICS at LISTSERV.UTK.EDU>
>
> Sent by: ASIS&T Special Interest Group on Metrics
> <SIGMETRICS at LISTSERV.UTK.EDU>
>
>
> To: SIGMETRICS at LISTSERV.UTK.EDU
> cc: (bcc: Stephen J Bensman/notsjb/LSU)
>
> Subject: Re: [SIGMETRICS] Johan Bollen, Marko A.
> Rodriguez, and Herbert
> Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1
> 9 Jan 2006
>
> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Dear colleagues,
>
> The idea is interesting. However, there are a few problems with
> this paper.
> First, the authors should not have used Pearson correlation
> coefficients to compare the rankings, but rank correlations
> (Spearman's rho or Kendall's tau). Second, it would have been
> interesting to have a rank correlation with "total cites"
> given recent discussions (Bensman). Third, the delineation of
> fields in terms of the ISI subject categories is very questionable.
>
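> As a minimal sketch of the first point (with two invented score
> vectors for the same set of journals; with scipy each coefficient is
> a one-liner):
>
> from scipy.stats import pearsonr, spearmanr, kendalltau
>
> # Invented scores for the same six journals under two measures,
> # e.g. ISI IF versus a weighted PageRank.
> if_scores = [24.0, 6.1, 3.2, 2.9, 1.5, 0.8]
> pr_scores = [ 0.9, 3.5, 2.8, 0.7, 0.5, 0.4]
>
> print(pearsonr(if_scores, pr_scores))    # sensitive to the skewed magnitudes
> print(spearmanr(if_scores, pr_scores))   # Spearman's rho: compares rankings only
> print(kendalltau(if_scores, pr_scores))  # Kendall's tau: pairwise rank concordance
>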
> However, the authors are very clear about their results: "We
> identified ..., but were unable to recognize a meaningful
> pattern in the results." (p. 9).
> I don't understand why one should then multiply the one
> measure by the other. What does multiplication do to the error?
>
> Does one of you know a place where the ISI subject categories
> are justified?
> How are they produced? People seem to use them increasingly
> both in evaluation and research practices, but I have never
> been able to reproduce them using journal citation measures.
>
> With best wishes,
>
>
> Loet
>
>
> ________________________________
> Loet Leydesdorff
> Amsterdam School of Communications Research (ASCoR),
> Kloveniersburgwal 48, 1012 CX Amsterdam.
> Tel.: +31-20- 525 6598; fax: +31-20- 525 3681;
> loet at leydesdorff.net ; http://www.leydesdorff.net/
>
>
>
> > -----Original Message-----
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Eugene Garfield
> > Sent: Friday, March 03, 2006 6:37 PM
> > To: SIGMETRICS at LISTSERV.UTK.EDU
> > Subject: [SIGMETRICS] Johan Bollen, Marko A. Rodriguez, and Herbert
> > Van de Sompel "Journal Status" arXiv:cs.GL/0601030 v1
> > 9 Jan 2006
> >
> > Administrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > Further to yesterday's posting, "Prestige is factored into journal
> > ratings", here is another interesting and informative article
> >
> > FULL TEXT AVAILABLE AT :
> > http://www.arxiv.org/PS_cache/cs/pdf/0601/0601030.pdf
> >
> > email: {jbollen, marko, herbertv}@lanl.gov
> >
> > TITLE : Journal Status
> >
> > AUTHORS : Johan Bollen, Marko A. Rodriguez, and Herbert
> Van de Sompel
> >
> > SOURCE : arXiv:cs.GL/0601030 v1 9 Jan 2006
> >
> > Abstract
> > The status of an actor in a social context is commonly defined in
> > terms of two factors: the total number of endorsements the actor
> > receives from other actors and the prestige of the
> endorsing actors.
> > These two factors indicate the distinction between popularity and
> > expert appreciation of the actor, respectively. We refer to
> the former
> > as popularity and to the latter as prestige. These notions of
> > popularity and prestige also apply to the domain of scholarly
> > assessment. The ISI Impact Factor (ISI IF) is defined as the mean
> > number of citations a journal receives over a 2 year
> period. By merely
> > counting the amount of citations and disregarding the
> prestige of the
> > citing journals, the ISI IF is a metric of popularity, not of
> > prestige. We demonstrate how a weighted version of the popular
> > PageRank algorithm can be used to obtain a metric that reflects
> > prestige. We contrast the rankings of journals according to
> their ISI
> > IF and their weighted PageRank, and we provide an analysis that
> > reveals both significant overlaps and differences.
> > Furthermore, we introduce the Y-factor which is a simple
> combination
> > of both the ISI IF and the weighted PageRank, and find that the
> > resulting journal rankings correspond well to a general
> understanding
> > of journal status.
> >
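> > As a rough sketch of the idea described above (with a tiny invented
> > journal-to-journal citation matrix; the damping factor and the
> > product used as the "simple combination" at the end are assumptions
> > for illustration only):
> >
> > import numpy as np
> >
> > # Invented citation counts: C[i, j] = citations from journal i to journal j.
> > C = np.array([[0., 10., 2.],
> >               [3.,  0., 8.],
> >               [1.,  4., 0.]])
> >
> > # Weighted PageRank: transition probabilities proportional to citation counts.
> > d = 0.85                                  # damping factor (assumed)
> > P = C / C.sum(axis=1, keepdims=True)      # row-normalised transition matrix
> > n = len(C)
> > pr = np.full(n, 1.0 / n)
> > for _ in range(100):                      # power iteration
> >     pr = (1 - d) / n + d * (P.T @ pr)
> >
> > # Popularity scores in the spirit of the ISI IF (invented), and one
> > # simple combination of popularity and prestige:
> > isi_if = np.array([4.0, 1.2, 0.6])
> > y_factor = isi_if * pr
> > print(pr, y_factor)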
> >
> > ______________________________________________
> >
> >
> > Administrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > FULL TEXT AVAILABLE AT :
> >
> http://www.nature.com/nature/journal/v439/n7078/pdf/439770a.pdf OR
> > http://guide.labanimal.com/news/2006/060213/full/439770a.html
> >
> >
> > Philip Ball : p.ball at nature.com
> > www.philipball.com
> >
> > Title: Prestige is factored into journal ratings
> >
> > Author(s): Ball P
> >
> > Source: NATURE 439 (7078): 770-771 FEB 16 2006
> >
> > Document Type: News Item Language: English
> > Cited References: 0 Times Cited: 0
> >
> > Publisher: NATURE PUBLISHING GROUP, MACMILLAN BUILDING, 4
> CRINAN ST,
> > LONDON
> > N1 9XW, ENGLAND
> > Subject Category: MULTIDISCIPLINARY SCIENCES IDS Number: 012JA
> >
> > ISSN: 0028-0836
> >
>
>
[Attachment: pic17086.gif (image/gif, 3683 bytes) -- <http://mail.asis.org/pipermail/sigmetrics/attachments/20060306/f04ef9a0/attachment.gif>]