exemplar references and significance of scholarly reviews

Stephen J Bensman notsjb at LSU.EDU
Tue Jan 30 11:48:21 EST 2007


Sir,
For the hell of it, I am posting below a screed I wrote during a debate
over the nature of the scientific journal market in the SERIALS PRICING
NEWSLETTER.  As is usual with my stuff, it is highly ideological and
sarcastic, but I do lay out a lot of the issues involved in journal
evaluation and pricing.  In this screed, I take up the issue of a journal's
subject scope to its position in the scientific social stratification
system.  You will note that the screed opens with a denial that I wrote in
the name of LSU Libraries.  I still have the marks on my knuckles from
where my dean rapped them.

SB

225.2 BENSMAN'S RIPOSTE TO CAMERON
Stephen Bensman, Louisiana State University, notjsb at lsu.edu


In his response to the screeds by Johnson and me (220), Cameron has made
several points which themselves call for responses. In making these
responses I want to emphasize that I am only expressing my personal
opinions, and these opinions in no way reflect official policy at LSU
Libraries, particularly in respect to SPARC.


First among the points made by Cameron is his statement, "I don't believe
there are any generalisations that can usefully be made about journal
titles and quality of content -- even if one can agree on an objective
definition of quality." One of the main themes in the writings of Robert K.
Merton, a founder of the sociology of science, is that science operates on
universalistic principles. Thus, he wrote (On Social Structure and Science,
1996, p. 269), "The imperative of universalism is rooted deep in the
impersonal character of science." If Merton were correct, then there should
be measures of scientific quality that manifest high degrees of
consensus and consistency. And, indeed, this proved to be the case in the
research done here at Louisiana State University (Library Resources &
Technical Services 40, 1996: 145-183; Library Resources & Technical
Services 42, 1998: 147-242). For example, high intercorrelations ranging
from 0.72 to 0.86 were found in the field of chemistry between LSU faculty
ratings, total Institute for Scientific Information citations, and library
use at the University of Illinois at Urbana-Champaign, revealing these to
be virtually equivalent measures of universal scientific value. As a
further indication of the universalism -- and stability -- of the
scientific information system, the journals supplying the most documents
from the British Library Lending Division in 1975 were also among the ones
most highly rated by the LSU faculty in 1995. The fact that the dominance
of US association journals over commercial ones manifested itself in LSU
faculty ratings in every one of 33 subject areas is surely proof that there
is something fundamental taking place.


I should also like to comment on Cameron's statement, "There are many
examples of new journals ... in niche areas ... which have rapidly become
highly respected and contain extremely high value material." In my approach
to scientific value I based myself on philosophic idealism, particularly
Bishop Berkeley's dictum that the essence of an object is in its
perception. Therefore, LSU faculty ratings became my main criterion, and
other measures had to correlate with them. In general, I found LSU faculty
ratings of quality to be a confounding of the following factors: 1)
something subjective the raters perceived to be "quality" or "utility"; 2)
personal advantage or whether the raters could publish in the journal; 3)
the social status of the scientists publishing in the journal; 4) the size
of the journal in both its physical and time aspects; and 5) the subject
comprehensiveness of the journal. It is the last point I want to focus on,
because it is this point which Noll and Steinmueller (Serials Review 18,
No. 2, 1992: 32-37) emphasize in their monopoly competition model.


In general, I found that the broader the subject scope of the journal, the
more highly the LSU faculty rated it, because the broader subject scope
made it pertinent to a wider spectrum of raters. Therefore, the two most
highly rated journals were Science and Nature. Because LSU faculty ratings
were so highly correlated with total citations and library use, it can be
deduced that the same processes are also operative in these measures. This
brings us to the problem of niche journals. An inspection of the articles
in Science and Nature should reveal that although the subject scope of
these journals is broad, the subject scope of the articles is not, giving
credence to the Noll and Steinmueller contention that the constant
narrowing of the subject scope of new journals is a device for creating
smaller social hierarchies to open publication space for research of lesser
quality. In their opinion -- and my research appears to bear them out --
these smaller journals are leading to the dysfunctions of monopoly
competition. The smaller journals may have played an important role when
the scientific information system was based upon a seventeenth-century
paper technology. For example, the noted historian of science, Derek J. de
Solla Price (Science since Babylon, 1961, p. 70; Little Science, Big
Science, 1963, p. 73) regarded the founding of a new journal as one of the
traditional ways his "invisible colleges" of scientists communicated with
each other and the rise of specialized journals as marking the attainment
of near autonomy by each of the separate disciplines. However, the niche
journals have clearly become dysfunctional, and in the era of the Internet
it seems that their purposes could be fulfilled in more cost-effective
ways.


The last point made by Cameron with which I want to deal pertains to his
critique of Johnson's defense of SPARC. Here Cameron makes the astute
observation, "On the whole journals are not competitive with one another
and it is not my understanding that the SPARC journals set out in head to
head competition with other journals but maybe I am wrong here." With this
statement Cameron puts his finger on the entire basis of monopoly
competition and one of the major fallacies underlying the thinking behind
SPARC. Scientific articles are not fungible, and therefore one journal
cannot be substituted for another. Each journal comprises a little
monopoly, including those that will be promoted by SPARC. The 1999 US
Periodical Price Index just published in American Libraries (30, May 1999:
84-92) shows that the inflationary spiral of serials prices is continuing,
and the only way many libraries will be able to subscribe to the new SPARC
titles with the given level of funding will be to cancel other titles.
Since the careers of many faculty members are dependent on these other
titles, librarians will find themselves in the midst of a class war among
scientists. A look at the line-up of the social forces involved in the
infamous Heinz Barschall affair should tell them that. From this
perspective the SPARC project appears to be an act of mass political
suicide on the part of the ARL directors. The only way to break up these
little monopolies from a politically neutral position is the adoption of a
free market through document delivery.


It strikes me as highly ironical that scientists -- supposedly the most
intelligent and rational creatures on earth -- have spawned an information
system that is economically inefficient by every definition of this term.
Until now there has been precious little science applied in the analysis of
the scientific information system. However, we are now entering the age of
computer information, and the one thing computers do best is count.
Therefore, the first step in applying science to the scientific information
system is to understand the operation and effect of the counting
distributions that underlie this system. Once you attempt to do this, you
are staring down the barrel of Karl Pearson and the other British
biometricians who launched a revolution in probability theory in the late
nineteenth century. As has been the case so often in the advance of human
knowledge, it is back to the future.










Eugene Garfield <eugene.garfield at THOMSON.COM> on
01/30/2007 09:31:54 AM

Please respond to ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at listserv.utk.edu>

Sent by:    ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at listserv.utk.edu>


To:    SIGMETRICS at listserv.utk.edu
cc:     (bcc: Stephen J Bensman/notsjb/LSU)

Subject:    Re: [SIGMETRICS] exemplar references and significance of
       scholarly reviews


This is a topic suitable for a series of doctoral dissertations. Teasing
out all the relevant factors will not be easy. And it will probably be
different in each field. It would be quite a challenge just to define what
is meant by "subject comprehensiveness". One might argue that the number of
cited references in the review is one such measure. EG

-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
Sent: Monday, January 29, 2007 8:48 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] exemplar references and significance of
scholarly reviews



Sir,
The recent posting of the finding of a high correlation between the number
of references in articles and the citations received by those articles has
caused me to rethink my
position somewhat.  Up to now, I considered the higher citation rate of
review articles to be purely due to the important function review articles
play in scientific literature.  However, the person stated that he had
excluded review articles.  If this finding is true, there may be another
factor at work.  This is subject comprehensiveness.  It is well known that
general journals like the multidisciplinary Science and Nature as well as
such general field journals like the Journal of the American Chemical
Society tend to be much larger and attract citations at a higher rate.  In
faculty surveys I have noted that these general journals have higher
ratings, because their subject comprehensiveness makes them pertinent to
more faculty than narrowly specialized journals.  The same factors may be
at play with larger articles with more references.  These articles may be
more subject comprehensive and, due to this
fact, may attract a larger readership and more citations.  The same may
hold true for review articles, which may be more subject comprehensive.
Therefore, the higher citation rate of review articles may not only be due
to their function but also due to their subject scope.  In any case, it
does seem to be an aspect that somebody should investigate.

SB




Eugene Garfield <eugene.garfield at THOMSON.COM> on
01/29/2007 06:04:13 PM

Please respond to ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at listserv.utk.edu>

Sent by:    ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at listserv.utk.edu>


To:    SIGMETRICS at listserv.utk.edu
cc:     (bcc: Stephen J Bensman/notsjb/LSU)

Subject:    Re: [SIGMETRICS] exemplar references and significance of
       scholarly reviews


Stephen: I fully agree with your last statement. And upon reflection I am
surprised that those of us who have been involved in publishing review
articles have not stimulated a better understanding of why the review
literature is so important. Since I have served on the Board of Annual
Reviews for over 20 years I will take up this question perhaps at our next
annual meeting in May. In the meantime I am forwarding these comments to my
colleague there and hope they will have some input.

A major problem with the scientometrics literature is the heavy focus on
the literature of information and library science rather than the kind of
reviewing that goes on in the natural and physical sciences. Readers of
this listserv interested in this topic would do well to look at the
characteristics of the several dozen winners of the National Academy of
Sciences annual award for excellence in scientific reviewing. They should
keep in mind that several hundred leading scientists and scholars devote an
enormous amount of time and energy to writing reviews. They are not
universally applauded for this effort, but in my personal experience most
of them consider it an activity that is crucial to their success as
creative scientists and teachers. Gene Garfield

-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
Sent: Sunday, January 28, 2007 9:01 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] exemplar references and significance of
scholarly reviews



From my readings I have been able to distill three basic reasons advanced
for why review articles are cited more than other articles.  First, there
is the theory that review articles are longer than other articles,
and long articles with many citations are more likely to be cited than
short articles with few citations.  I find this doubtful.  Second, there is
the view that scientists are lazy, and it is easier and quicker to read a
review article than to plow through the literature yourself.  Some persons
of this opinion dismiss review articles as mindless compendiums of
abstracts and feel that citations to review articles are less worthy than
citations to research articles.  And, third, review articles are
authoritative summaries of research that distinguish between the good and
the bad, providing guidance for further research.  The last two are
functional explanations, and I would tend to believe that it is the
functional role of review articles that causes them to be more highly
cited than others.

However, there certainly needs to be a lot more research on this question.

SB




Eugene Garfield <eugene.garfield at THOMSON.COM> on
01/27/2007 04:59:50 PM

Please respond to ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at LISTSERV.UTK.EDU>

Sent by:    ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at LISTSERV.UTK.EDU>


To:    SIGMETRICS at LISTSERV.UTK.EDU
cc:     (bcc: Stephen J Bensman/notsjb/LSU)

Subject:    Re: [SIGMETRICS] exemplar references and significance of
       scholarly reviews


If you export the results of a search in Web of Knowledge into the HistCite
software (e.g., your 1000 hits), then the default result after you request
an historiograph will be the "exemplar references".

I would be cautious in describing the significance of review articles in
such simplistic terms. I don't recall any studies in which there is an
analysis of why people cite reviews. Really good scholarly reviews are a
lot more than mere bibliographic surrogates, though they may be useful in
that respect as well. Interpretative reviews often play a key role in the
historical development of topics. Gene Garfield

-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Morris, Steven (BA)
Sent: Saturday, January 27, 2007 11:55 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] question



Ronald,


I agree that you'd probably only find a weak correlation between number of
references cited and citations received if you don't distinguish between
the type of paper (review or not) and the way it is used as a reference
(well-cited exemplar reference or not).

In my mind the relation is very much tied to the dynamics of specialty
growth.  In a recent paper [1] I asserted that after a discovery that
prompts the birth of a specialty, there is a period of rapid growth in the
specialty where scientists extend the discovery, and present evidence to
support those extensions. The discovery paper and other early important
papers become heavily cited 'exemplar references' during this growth
period. At the end of the growth period, 'consolidation' review papers
appear that codify and summarize the newly generated base knowledge in the
new specialty. These consolidation papers can become highly cited exemplar
references in the sense that they are cited as summaries of collected base
knowledge. Some of these reviews become highly cited and some don't; I
suspect it has to do with timing (written at a point when the newly
generated knowledge was ready to be codified), quality and
comprehensiveness, and the perceived authority of the review author.

Given the growth and exemplar process described above, you'd expect the
following:

1) Discovery papers, written before all the base knowledge in the specialty
is generated, wouldn't cite many references, but would be cited heavily. I
think there is evidence out there that discovery papers tend to have few
references. I heard Kate McCain mention this once at a conference ;-), but
I don't have a reference to support that.

2) Consolidation papers, written to summarize base knowledge immediately
after initial growth, would cite many references and be cited heavily.
Here, the problem is that only some of the consolidation papers become
exceptionally heavily cited exemplar references (the winning reviews that
provide the first good consolidation of the new knowledge), while others
may just be cited at a 'normal' rate for reviews, which is probably a
greater rate than non-review papers.

Some notes:

1) There is certainly evidence that the mean number of references per paper
increases over time. I've read this in the literature (though I can't
recall where) and I've seen this in all specialty specific data sets where
I've bothered to check it. I think this is a function of specialty growth:
The network of base knowledge in the specialty gets more intricate as the
specialty grows and 'fills in the blanks', so authors of later papers have
to cite more 'marker references' (Hargens' term [3]) to describe the
position of the contribution of their papers in the network of base
knowledge in the specialty...
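The claim that mean references per paper rises over time is easy to check on a specialty data set once papers are grouped by year. A minimal sketch, with invented per-paper reference counts for a hypothetical specialty:

```python
from statistics import mean

# Invented reference counts per paper, keyed by publication year,
# for one hypothetical specialty (illustrative only).
refs_by_year = {
    1995: [8, 10, 7, 12],
    2000: [15, 18, 14, 20],
    2005: [25, 22, 30, 27],
}

# Mean references per paper for each year; on real data one would
# expect the means to increase as the specialty's knowledge network
# fills in and later papers must cite more "marker references".
means = {year: mean(counts) for year, counts in refs_by_year.items()}
print(means)
```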

2) There is a correlation between the mean number of references per paper
and the length of the papers. Evidence for this is given by Abt[2]. So any
correlations you find between number of references in the paper and the
number of citations it receives may be related to length of papers.

3) In my experience, I find that the distribution of the number of
references per paper is log normally distributed and that the mode of that
distribution varies from one specialty to another.  Now, this fact totally
baffles me.  What social or cognitive process would cause this
distribution to appear?   Is it tied to the same process that governs
the distribution of length of papers? Some sort of proportional growth
process? It's a mystery wrapped in an enigma!  If you figure out what
generates that log-normal distribution, I'll send you a one pound bottle of
Tupelo honey as a prize....
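The log-normal observation can be illustrated with a toy simulation. The parameters below are invented, and `lognormvariate` is used simply to mimic a multiplicative (proportional-growth-style) process of the kind speculated about above; this is a sketch, not an explanation of the mechanism:

```python
import math
import random

random.seed(42)

# Simulate "references per paper" for one hypothetical specialty as a
# log-normal draw, rounded to whole references (parameters invented).
counts = [max(1, round(random.lognormvariate(3.2, 0.5)))
          for _ in range(1000)]

# Estimate the log-scale mean and variance from the sample.
logs = [math.log(c) for c in counts]
mean_log = sum(logs) / len(logs)
var_log = sum((x - mean_log) ** 2 for x in logs) / len(logs)

# The mode of a log-normal is exp(mu - sigma^2); with mu = 3.2 and
# sigma = 0.5 that is about exp(2.95), i.e. roughly 19 references.
mode_est = math.exp(mean_log - var_log)
print(round(mean_log, 2), round(mode_est, 1))
```

Fitting real per-specialty reference-count distributions this way (estimate mu and sigma on the log scale, then read off the mode) would let the mode be compared across specialties.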

Some other notes:

If you want to study the correlation of references per paper to citations
received, I suggest the following:

1) Gather specialty-specific collections of papers for your studies. The
heterogeneity in a large multiple-specialty study will totally screw up
the statistics...  You should get about 1000 papers citing about
20,000 references for each specialty study...
2) Separate your references in the collection into 'exemplar' and
'non-exemplar', you can do this by applying a citation threshold, see [1].
3) Arrange the exemplar references serially by the order of their
appearance in the specialty.  I have some SQL queries I can send you for
doing this.
4) Look for 'discovery' references at the beginning of this sequence, and
'consolidation' references at the end of the sequence.
5) Study the correlation for 6 classes of reference: 1- general references,
2- general references less exemplar references, 3- discovery exemplar
references, 4- consolidation exemplar references, 5- general review
references, 6- general review references less exemplar references.

Thanks,

Steve

[1] Morris, S. A., 2005, "Manifestation of emerging specialties in journal
literature: a growth model of papers, references, exemplars, bibliographic
coupling, cocitation, and clustering coefficient distribution", JASIST,
56(2), 1250-1273.
[2] Abt, H. A., 2000, "The reference-frequency relation in the physical
sciences", Scientometrics, 49(3), 443-451.
[3] Hargens, L. L., 2000, "Using the literature: Reference networks,
reference contexts, and the social structure of scholarship", American
Sociological Review, 65(6), 846-865.



=================================================
Steven A. Morris, Ph.D
Electrical Engineer V, Technology Development Group Baker-Atlas/INTEQ
Houston Technology Center 2001 Rankin Road, Houston, Texas 77073
Office: 713-625-5055, Cell: 405-269-6576


-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
Sent: Saturday, January 27, 2007 8:30 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] question


It is well known that review articles summarizing research receive on the
average more citations than other types of articles.  Your question is
considered in the book below:

Narin, F.  (1976).  Evaluative bibliometrics: The use of publication and
citation analysis in the evaluation of scientific activity.  Cherry Hill,
NJ: Computer Horizons, Inc.

Here Narin writes:

CHI (Narin, 1976, pp. 183-219) developed its "influence" method in a report
prepared for the National Science Foundation.  In this report it criticized
Garfield's impact factor as suffering from three basic faults (p. 184).
First, although the impact factor corrects for journal size, it does not
correct for average length of articles, and this caused journals which
published longer articles, such as review journals, to have higher impact
factors.



My guess is that you would find no or low correlation between length of
reference lists and number of citations, but, if you used a chi-squared
test of independence, you would find a strong positive association with
review articles dominant in the high-reference/high-citation cell.  As
usual, it would be best to do this test with well-defined subject sets
rather than globally to avoid
the influence of exogenous subject variables.  However, Narin seems to have
been of a different opinion in respect to correlation, so you might look at
what he did.
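Such a chi-squared test of independence can be sketched by hand on an invented 2x2 table, cross-classifying papers as high/low reference count versus high/low citations received (the cell counts below are illustrative, not real data):

```python
# Invented 2x2 contingency table:
#   rows:    high / low reference count
#   columns: high / low citations received
observed = [[80, 20],
            [30, 70]]

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
grand = sum(row_tot)

# Chi-squared statistic: sum of (O - E)^2 / E over all cells, where
# E is the expected count under independence.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_tot[i] * col_tot[j] / grand
        chi2 += (observed[i][j] - expected) ** 2 / expected

# With 1 degree of freedom the 5% critical value is about 3.84;
# this table gives chi2 of roughly 50.5, so independence is rejected.
print(round(chi2, 1), chi2 > 3.84)
```

Review articles piling up in the high-reference/high-citation cell would produce exactly this kind of strong association even when the overall linear correlation is weak.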

SB




Ronald Rousseau <ronald.rousseau at KHBO.BE> on 01/27/2007
07:33:34 AM

Please respond to ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at listserv.utk.edu>

Sent by:    ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at listserv.utk.edu>


To:    SIGMETRICS at listserv.utk.edu
cc:     (bcc: Stephen J Bensman/notsjb/LSU)

Subject:    [SIGMETRICS] question


Dear colleagues,

Is there a positive correlation between the length of a reference list of a
publication and the number of citations received? Is this true (or not) in
general, i.e. considering all types of publication? And what if one only
considers 'normal articles', that is, when reviews and letters (and other
short communications) are not taken into account?

Can someone point me to a reference?

Thanks!

Ronald


--
Ronald Rousseau
KHBO (Association K.U.Leuven)- Industrial Sciences and Technology
Zeedijk 101    B-8400  Oostende   Belgium
Guest Professor at the Antwerp University School for Library and
Information
   Science (UA - IBW)
E-mail: ronald.rousseau at khbo.be
web page:  http://users.telenet.be/ronald.rousseau






