Eston R. "The Impact Factor: a misleading and flawed measure of research quality" Journal of Sports Sciences 23(1):1-3 January 2005

Eugene Garfield eugene.garfield at THOMSON.COM
Fri Jul 8 15:18:08 EDT 2005


Roger Eston :  R.G.Eston at exeter.ac.uk


Title:     The Impact Factor: a misleading and flawed measure
           of research quality

Author(s): Eston R

Source: JOURNAL OF SPORTS SCIENCES 23 (1): 1-3 JAN 2005

Document Type: Editorial Material
Language: English
Cited References: 10

Addresses: Eston R (reprint author), Univ Exeter, Exeter, Devon EX4 4QJ
England
Univ Exeter, Exeter, Devon EX4 4QJ England
Publisher: TAYLOR & FRANCIS LTD, 4 PARK SQUARE, MILTON PARK, ABINGDON OX14
4RN, OXON, ENGLAND
IDS Number: 890OJ

ISSN: 0264-0414

Cited References:
BLOCH S, 2001, AUSTR NZ J PSYCHIAT, V257, P54.
DAVIES J, 2003, NATURE, V421, P210.
ERILL S, 2003, LANCET, V362, P1864.
FATOVICFERENCIC S, 2004, CROAT MED J, V45, P344.
FRANK M, 2003, J MED LIBR ASSOC, V91, P4.
HANSSON S, 1995, LANCET, V346, P906.
NEVILL AM, 2004, SPORT EXERCISE SCI.
OPTHOF T, 1997, CARDIOVASC RES, V33, P1.
SEGLEN PO, 1997, BRIT MED J, V314, P498.
WALTER G, 2003, MED J AUSTRALIA, V178, P280.

_________________________________________________


The editor of _Journal of Sports Sciences_ and the author, Roger
Eston, have kindly permitted us to reproduce the article in full text for
readers of the SIG-Metrics list.

Journal of Sports Sciences, Jan 2005 v23(1) p.1-3

The Impact Factor: a misleading and flawed measure of research
quality.(Editorial) Roger Eston.

Full Text: COPYRIGHT 2005 E & F N Spon

The rise in awareness and apparent importance associated with the
Impact Factor (IF) has caused a considerable change in the way researchers
consider outlets for their research. In the light of impending personal
audits or audits of institutional research, many of us feel obliged to
deliberately and strategically select journals for the sake of maximal
impact. In August 2004, the Editor-in-Chief, Alan Nevill, issued an urgent
statement to address the erroneous IF of the Journal of Sports Sciences as
originally published by the Institute for Scientific Information (ISI)
(Nevill, 2004). The initial IF of 0.741 ranked the  Journal in 38th place
in the 'Sports Science' subject category. Thanks to meticulous detective
work by Bill Baltzopoulos, it was revealed that this was due to the
inclusion of abstracts. The ISI acknowledged this error in July. The
subsequently corrected IF of 1.255 raises the Journal six places from 22 to
16 out of 71 in the 2004 Journal Citation Report (JCR). For those of us who
publish in the Journal and/or place a lot of faith in the numerology
associated with the IF, this is good news. The editorial board and the
publishers are, of course, delighted that the Journal has climbed higher in
the JCR charts. But what does all this really mean? The IF is seriously
fallible when it comes to judging the quality of research, a shortcoming
that has been highlighted in many critical reviews and articles. In this
editorial, I summarize some of the major limitations and flaws of the IF.

How is the IF calculated?

The IF is a simple ratio of citations and articles. The numerator is
the number of current year citations (e.g. citations made in 2003) to
published items in that journal in the previous two years (i.e. 2001 and
2002). The denominator is the total number of articles published in the
journal in the previous two years (i.e. 2001 and 2002). The IF for the
Journal of Sports Sciences was therefore calculated by:

192 citations in 2003 to items published in 2001 and 2002 / (68 articles
in 2001 + 85 articles in 2002) = 192/153 = 1.255
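As a minimal sketch, the two-year ratio can be written out in Python. The article counts below are an assumption: they are taken to sum to 153, which is the only denominator consistent with 192 citations and the corrected IF of 1.255 quoted in this editorial.

```python
def impact_factor(cites_current_year: int, items_prev_two_years: int) -> float:
    """Journal impact factor: citations made in the current JCR year to
    items the journal published in the previous two years, divided by the
    number of citable items it published in those two years."""
    return cites_current_year / items_prev_two_years

# Journal of Sports Sciences, 2003 JCR year: 192 citations to 2001-2002
# items; the 2001 and 2002 article counts (assumed here) sum to 153.
jss_if = impact_factor(192, 68 + 85)
print(round(jss_if, 3))  # → 1.255
```

Note that the ratio is entirely journal-level: nothing in the calculation looks at how citations are distributed across individual articles.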

Limitations of the IF

* The ISI can make mistakes and report the wrong impact factor! The
fallibility of the IF is exacerbated when the ISI gets it wrong. A large and
unexpected reduction in the IF can be damaging to the profile of the
Journal.
At the time of this editorial going to press (12 October 2004),
the IF for the Journal of Sports Sciences continues to read 0.741!

* The journal IF is not wholly representative of the quality of all
articles. Seglen (1997) reported that the most cited 15% of papers account
for 50% of the citations, and the most cited 50% account for 90% of the
citations. Thus, articles that may never be cited are given full credit for
the impact of the few highly cited papers which determine the value of
the IF.

* The quality of scientific research should not be constrained by time.
The selection of a two-year period set by the ISI is entirely arbitrary
(Hansson, 1995; Walter et al., 2003). In relation to this, since articles
from a given journal tend to cite articles in the same journal, rapid
publication is self-serving with respect to the IF (Seglen, 1997). The
danger of constraining the impact factor by time is illustrated by
discoveries, findings or conceptual reasoning that, although not considered
important at the time of publication, later form the basis of treatment or
define conceptual frameworks for approaching the subject. This is
exemplified by the isolation of trypsinogen in 1971, which provides the
basic element in the treatment of HIV-1 infection (Erill, 2003;
Fatovic-Ferencic, 2004).

* The ISI database includes only a proportion of subject-relevant
journals (Bloch and Walter, 2001; Walter et al., 2003). The number of
journals in the ISI database is a minute proportion of those published. The
great majority are English language based.

* High-quality research in non-English language journals is rarely cited.
This explains the low rank of the few non-English language journals in the
list--not the research quality of their articles. Seglen (1997) was
particularly critical of the extent to which American scientists cited
each other, which inflates the mean journal impact factor of American
science to 30% above the world average.

* It includes only normal articles and reviews as citable items in the
denominator. However, the numerator includes citations to reports,
editorials, letters, abstracts and short notes.

* It does not exclude 'self-citations'. These may account for
up to a third of citations (Seglen, 1997; Walter et al., 2003). For
example, an editor may increase the numerator in the IF by making
references to previous editorials published in the last two years.

* Reviews are cited more frequently than original research. Researchers
often cite reviews for convenience, particularly in the introduction to a
research paper. The inclusion of review papers may therefore inflate the
impact factor and be a factor for strategic editorial consideration. It is
notable that two of the top four journals in the JCR for Sports Science are
review journals.

* The assumption of a positive link between citations and quality is
ill-founded. Articles may be cited for a number of reasons, including
reference to research that is considered to be poor or lacking in some
aspect (Opthof, 1997; Walter et al., 2003).

* Dynamic research fields with high activity and short publication lags
(e.g. biochemistry and molecular biology) have higher IFs. Journal impact
factors are largely dependent on the research field. Seglen (1997) reported
that high impact factors are likely for journals that cover broad areas of
basic research with a rapidly expanding but 'short-lived' literature which
uses many citations per article. He also reported that the IF of a research
field is directly proportional to the mean number of references per
article, and indicated, for example, that it was twice as high in
biochemistry as in mathematics. He also noted that arts and humanities
articles use far fewer references.

* Small research fields tend to lack journals with high impact (Seglen,
1997).

* It does not include research published in books.

* The frequency with which an individual researcher's work is cited in
the literature is more informative about the quality and impact of his or
her research than the Journal IF. Researchers do not always publish their
most citable work in journals of the highest impact.
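Seglen's skew argument above can be illustrated with a small, purely hypothetical example (the citation counts below are invented for illustration, not taken from any journal): a few highly cited papers dominate the journal-level average while most articles are cited rarely or never.

```python
# Hypothetical citation counts for 20 articles in one journal
# (invented numbers, for illustration only).
cites = sorted([40, 25, 12, 5, 3, 2, 2, 1, 1, 1] + [0] * 10, reverse=True)

journal_average = sum(cites) / len(cites)      # what an IF-style ratio reports
top_three_share = sum(cites[:3]) / sum(cites)  # share held by the top 15% of papers
uncited = cites.count(0)                       # articles never cited at all

print(f"journal average: {journal_average:.2f}")     # → 4.60
print(f"top-15% share: {top_three_share:.0%}")       # → 84%
print(f"uncited papers: {uncited} of {len(cites)}")  # → 10 of 20
```

Under these invented numbers, a journal-level average of 4.6 says nothing about the ten articles that were never cited, which is exactly the distortion Seglen describes.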

Like it or not, the IF--initially intended to assist librarians to
select journals for their libraries over 40 years ago--has now become a
means of judging the scientific quality of a journal and the quality of
everything that is published in it. The extent of blind faith in the
objectivity and validity of the IF to evaluate the quality of journals
is actually quite astounding. Some institutions and committees of
non-experts in the research area have used the IF to evaluate an
individual's scientific achievement and to allocate resources (Seglen,
1997; Frank, 2003). Consequently, it has become a
tool to rationalize the funding of projects, an individual's research,
allocation of PhD students to individuals, and as a guide for academic
promotion. Considering the flaws in the IF, anyone who has been judged
by, or suffered because of, the application of this criterion alone has
the right to feel aggrieved. Making snap judgements about the quality of
scientific research obtained by perusing the names of journals in which an
individual has published, without actually reading the papers, is dangerous
(Davies, 2003). Thus, although a perusal of the annual IF
listings makes intriguing reading, it is important to be aware that the IF
lists are merely indicative of the citation behaviour of researchers.

Multiple factors influence the choice of journal where researchers
submit their papers. Equally or perhaps more importantly than the IF, their
choice should be influenced by the subject area of the journal, its
relevance to their speciality, the chances of publication, the speed of the
editorial and refereeing process, the length of time it takes to be
published once accepted, publication cost, indexing of the journal and,
finally and very importantly, the quality of the editorial and advisory
board of the journal. In this respect, one cannot ignore the quality of the
editorial and editorial advisory board of the Journal of Sports Sciences,
and the other international experts who are frequently called upon to
review papers for the journal. Critics of journals listed in the Sports
Sciences JCR (let's face it--the mean IF of Sports Science in general is a
fraction of other ISI subject areas) should reflect on whether such expert
referees would compromise their standards of quality assessment to suit a
given journal. I suspect not.

Finally, for those of you who are making preparations for the UK's
Research Assessment Exercise (RAE) of 2008, and are a little 'gripped up'
by this IF nonsense, it is reassuring to know that it is highly unlikely
that impact factors will be used to assess the quality of your
research. I am informed by previous and prospective members of the RAE
panel that impact factors of journals were not considered in the Panel's
evaluation of research in 1992, 1996 or 2001. Articles must be read by
qualified experts to truly evaluate their scientific quality. This indeed
was the case in 2001 and, from what I understand, will also be  the
procedure for the 2008 RAE. So, irrespective of the Journal's IF, I
encourage you to continue to submit your research to the Journal of Sports
Sciences. In terms of assessing research quality, the IF is
meaningless.

References

Bloch, S. and Walter, G. (2001). The impact factor: time for change.
Australian and New Zealand Journal of Psychiatry, 257, 54-57.

Davies, J. (2003). Journals' impact factors are too highly valued. Nature,
421, 210.

Erill, S. (2003). Let's waste our time! Lancet, 362, 1864.

Fatovic-Ferencic, S. (2004). On judgement, impact factor and feelings: what
can we learn from the impact factor. Croatian Medical Journal, 45, 344-345.

Frank, M. (2003). Impact factors: arbiters of excellence? Journal of the
Medical Library Association, 91, 4-6.

Hansson, S. (1995). Impact factor as a misleading tool in evaluation of
medical journals. Lancet, 346, 906.

Nevill, A.M. (2004). ISI Web of Knowledge acknowledge error. The Sport and
Exercise Scientist, Issue 1, September, 7 pp.

Opthof, T. (1997). Sense and nonsense about the impact factor.
Cardiovascular Research, 33, 1-7.

Seglen, P.O. (1997). Why the impact factor of journals should not be used
for evaluating research. British Medical Journal, 314, 498-502.

Walter, G., Bloch, S., Hunt, G. and Fisher, K. (2003). Counting on
citations: a flawed way to measure quality. Medical Journal of Australia,
178, 280-281.

ROGER ESTON

University of Exeter

Professor Roger G Eston
Children's Health and Exercise Research Centre
School of Sport and Health Sciences
St Luke's Campus
University of Exeter
Exeter EX1 2LU

R.G.Eston at exeter.ac.uk
http://www.ex.ac.uk/sshs/


