Rangachari PK "Cite and oversight" Drug Discovery Today 9(22):954-956, November 15, 2004
Eugene Garfield
garfield at CODEX.CIS.UPENN.EDU
Fri Apr 28 16:52:23 EDT 2006
e-mail: patangi.k.rangachari at learnlink.mcmaster.ca
Title: Cite and oversight
Author(s): Rangachari PK
Source: DRUG DISCOVERY TODAY 9 (22): 954-956 NOV 15 2004
Document Type: Editorial Material
Language: English
Cited References: 12 Times Cited: 0
Addresses: Rangachari PK (reprint author), Univ Calgary, Fac Med, B HSc
Program, Calgary, AB Canada
Univ Calgary, Fac Med, B HSc Program, Calgary, AB Canada
Publisher: ELSEVIER SCI LTD, THE BOULEVARD, LANGFORD LANE, KIDLINGTON,
OXFORD OX5 1GB, OXON, ENGLAND
Subject Category: PHARMACOLOGY & PHARMACY
IDS Number: 870EA
ISSN: 1359-6446
THE AUTHOR HAS KINDLY PERMITTED US TO POST THE FULL TEXT OF THIS ARTICLE
Cite and oversight
P.K. Rangachari
Professor of Pharmacology and Therapeutics
Director of Inquiry, B.HSc Program
Faculty of Medicine
University of Calgary
Calgary, Canada
The research publication is the currency of our scientific lives, and the
value of our careers is measured by what we publish. It is not enough to
document what we have done; it is also important to know whether we have
made a meaningful difference, hence the desire to measure our worth and
value. Enter citation analysis and impact factors. I have been interested
in this issue for a long time, but the immediate stimulus came from reading
the recent article in Drug Discovery Today by Raymond C. Rowe entitled
'Publish or perish' [1]. This response to Rowe's article represents one
individual's opinion on a controversial issue. It is not a meta-analysis or
a review; therefore, there will not be a surfeit of references, and the
citations will be idiosyncratic to suit my purpose. I am following in the
footsteps of many, only honest enough to admit it. If, at the end of it all,
I have provoked a response, I will rest content. The research publication
that plays such an inordinate part in our professional lives was largely
the creation of a single individual, Henry Oldenburg. His tale has often
been told [2-4]; therefore, a simple summary will suffice here. Born in
Bremen between 1617 and 1620, he settled in England after 1653, becoming
friendly with those who were trying to propagate a new way of
looking at knowledge and learning.
The proponents of this new learning created the Royal Society and
Oldenburg became its secretary. One of the tenets of the new way was that
knowledge was a public good to be shared by those who contributed to its
production. Medieval secrecy gave way to open discussion and dissemination.
Individuals were not working in private and in secrecy but were
contributing as a community to the creation of an edifice of knowledge.
Thus, information that was gathered by a given individual needed to be
disseminated and shared. This new approach demanded that information be
widely disseminated and the form in which this occurred was a letter.
Oldenburg, as the Secretary of the Royal Society, took the new form of
communication, the letter, codified it and transformed it into the
research article as we know it today. Clearly, this laid the beginnings of
the assessment of contributions to knowledge through publication of letters
and their acceptance and recognition by a community of scholars. As time
went on, much of the paraphernalia of the modern research journal came into
being. Oldenburg played a crucial role in all of this. What he set in place
has continued to this day, with minor changes.
The new learning was a creation of dead white European males of the 17th
Century [5], and perhaps few at the time recognized that within a few
centuries that approach would become truly international and legitimized as
the way of contributing to knowledge. But the instruments of expansion of
one generation become the vested interest of the next. Whereas failures can
be written off or ignored, success demands accountability; so began the
publication game. The early versions merely counted; one who published more
was seen as being better than one who did not. That was not enough. After
all, did it matter that one published? What if, like the sound of falling
trees in a forest, no one heard? Enter citation analysis and impact
factors.
The intentions were quite honourable [7]. In all the arguments and
controversies about this issue, it is important to note that Garfield and
others have been scrupulous in pointing out the caveats and pleading for
proper usage. If the scientific community has erred it is entirely their
responsibility. The impact factor is the ratio of citations to papers
published in a given journal to the total number of publications in that
journal over a given time. At a simplistic level, the notion makes sense.
If a given journal scrupulously publishes only those articles that are of
superb quality and those articles are avidly read and properly referred to
by the community of scholars, then that journal would have a tremendous
impact. Conversely, a journal that accepts everything that is submitted to
it might find that most articles remain unread and uncited and would have
no impact at all. Scientists would like their observations to count;
therefore, they would assiduously seek out journals with better editorial
policies and higher impact factors. This seems intuitive and obvious;
however, much of it founders on the bedrock of reality. The arguments
that have raged on this issue are well documented and I am not going to
cite them all. An easy way to enter the controversy is for readers to refer
to a series of articles that appeared in Trends in Biochemical Sciences in
1989 [8,9].
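For concreteness, the numerator and denominator at issue can be written
down explicitly. The editorial does not specify a citation window; the
two-year window below is the standard Journal Citation Reports
convention, used here only as a sketch:

\[
\mathrm{IF}(Y) = \frac{C_Y}{N_{Y-1,\,Y-2}},
\]

where \(C_Y\) is the number of citations received in year \(Y\) by items
the journal published in years \(Y-1\) and \(Y-2\), and \(N_{Y-1,\,Y-2}\)
is the number of citable items it published in those two years. For
example, a journal that published 200 citable items in 2002-2003 and
whose items drew 500 citations in 2004 would score 500/200 = 2.5 for
2004 (illustrative numbers only).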
One of the problems with citation analysis and impact factors relates to
the numerator: the total number of citations to a given paper or papers in
a given journal. This assumes that scientists would be scrupulous in giving
credit where credit is due and only cite those papers that are genuinely
relevant to their own work. How true is this? The evidence is mixed [10-12].
If the critics are right and the problem is real, are there any solutions?
I am going to propose several solutions, in descending order of
outrageousness. (1) Return to the ideology of the New Learning, which, in a
sense, began it all. If scholars are building the edifice of science brick
by brick, why should they not remain anonymous? It is the product alone
that matters. There will be no takers for that option. (2) Argue that
modern science can rarely be done without institutional support. Therefore,
the names of the institutions alone should appear on the paper, with the
individual contributors in the acknowledgements. I seriously doubt that this
option would find much favour either. (3) Take citations seriously and
annotate them. Certain journals do, but the practice is not widespread.
That way, referees can cross-check whether the citations are real, spurious
or unwarranted. However, before one contemplates any such solutions, I
would suggest that more experiments be done.
Rather than rely on the testimony of experts and metricians of one kind or
another, I propose that we decide on our own, based on simple experiments.
In a Baconian spirit, we can call these Experimenta Reflecta or Illuminata:
Experiment 1: With each of your publications, turn to the reference list,
go through each of the references that you have cited and ask the following
questions: Why did I cite this paper? Is the author a potential referee? Or
even on the editorial board? Did I just happen to have it in my files? Did
I find it from the reference list of some other paper? Have I actually read
this? Am I citing it to show how current I am in my literature search? Did
I just pull it off the web? Is it because it is in English and I cannot
read other languages? Is it from a reputed journal, and does it therefore
add credibility to my claims? You can repeat this experiment several times to
get more quantitative data.
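As a purely hypothetical aid (nothing of the sort is suggested in the
article), the tallying could be done with a few lines of Python; the
reference names and reason categories below are invented examples that
merely paraphrase the questions above:

    from collections import Counter

    # Map each cited reference to the honest answer from the self-audit.
    # All reference names and reasons here are hypothetical.
    audit = {
        "Ref 1": "actually read and genuinely relevant",
        "Ref 2": "author is a potential referee",
        "Ref 3": "copied from another paper's reference list",
        "Ref 4": "happened to have it in my files",
        "Ref 5": "actually read and genuinely relevant",
    }

    # Tally how often each reason occurs and print the counts.
    for reason, count in Counter(audit.values()).most_common():
        print(f"{count:2d}  {reason}")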
Experiment 2: Select one to five papers that have cited you. Now take a deep
breath and read each one carefully. Ask yourself: why was your paper cited?
Was it for the sorts of reasons that you found for yourself? Would it have
made any material difference to the publication at hand if the author had
failed to cite your paper? Be brutally honest with yourself. Remember that
you do not have to publish the results or even tell anybody about them.
These experiments should help you decide for yourself the meaning of it
all.
References
1 Rowe, R.C. (2004) Publish or perish. Drug
Discov. Today 9, 590-591
2 Porter, J.R. (1964) The Scientific Journal -
300th Anniversary. Bacteriol. Rev. 28, 211-230
3 Rangachari, P.K. (1994) The Word is the
Deed: The ideology of the research paper in
experimental science. Am. J. Physiol. (Adv.
Physiol. Educ.) 12, S120-S136
4 Hall, A.R. and Hall, M. (1965) The
Correspondence of Henry Oldenburg. The
University of Wisconsin Press, Madison,
WI, USA
5 Shapin, S. (1994) A Social History of Truth:
Civility and Science in Seventeenth-Century
England. p. xxii, University of Chicago Press,
Chicago, IL, USA
6 Godlee, F. and Jefferson, T. (2003) Peer Review
in the Health Sciences. (2nd edn), BMJ Books,
London, UK
7 Garfield, E. (1970) Citation indexing for
studying science. Nature 227, 669-671
8 MacRoberts, M.H. and MacRoberts, B.R.
(1989) Citation analysis and the science
policy arena. Trends Biochem. Sci. 14, 8
9 Cole, S. (1989) Citations and the evaluation
of individual scientists. Trends Biochem. Sci.
14, 9-13
10 Siekevitz, P. (1991) Citations and the tenor of
the times. FASEB J. 5, 139
11 Moravcsik, M.J. and Murugesan, P. (1975)
Some results on the function and quality of
citations. Soc. Studies Sci. 5, 86-92
12 Dumont, J.E. (1989) From bad to worse:
evaluation by journal impact. Trends Biochem.
Sci. 14, 327-328