Editorial "Research Assessment: Declaring War on the Impact Factor"

anup kumar das anupdas2072 at GMAIL.COM
Tue May 28 04:46:09 EDT 2013


Research Assessment: Declaring War on the Impact Factor

by P. Balaram

Editorial, *Current Science*, 104(10): 1267-1268, 25 May 2013

Nearly forty years ago, when I began my research career in India, science
proceeded at a leisurely pace. There was little by way of funding or major
facilities even at the best of institutions. Enthusiasm and interest were
the key ingredients in maintaining a focus on research. The environment
still contained many role models, who had made significant contributions to
their chosen fields, under undoubtedly difficult circumstances. The
mid-1970s was a time when political and economic uncertainties precluded a
great deal of government interest in promoting science. There was
relatively little pressure on researchers to publish papers. The age of
awards and financial incentives lay in the distant future. In those more
sedate times, the results of research were written up when the findings
appeared interesting enough to be communicated. The choice of journals was
limited and most scientists seemed to be content with submitting
manuscripts to journals where their peers might indeed read the papers.
Journals were still read in libraries. Note taking was common, photocopies
were rare and the ‘on-line journal’ had not yet been conceived. The
academic environment was not overtly competitive. I never heard the word
‘scooped’ in the context of science, until well into middle age. Eugene
Garfield’s ‘journal impact factor’ (JIF) had not penetrated into the
discourse of scientists, although the parameters for ranking journals had
been introduced into the literature much earlier. The word ‘citation’ was
rarely heard. In the library of the Indian Institute of Science (IISc)
there was a lone volume of the 1975 Science Citation Index (a hardbound,
printed version, extinct today), presumably obtained gratis, which sat
forlorn and unused on rarely visited shelves. Only a few hardy and curious
readers would even venture near this sample of the Citation Index, which
seemed of little use. It required both effort and energy to search the
literature in the 1970s. Few could have foreseen a time when administrators
of science in distant Delhi would be obsessed with the many metrics of
measuring science, of which the JIF was a forerunner. Indeed, the unchecked
and virulent growth of the use of scientometric indices in assessing
science has at last begun to attract a backlash: an ‘insurgency’ that has
resulted in the San Francisco Declaration on Research Assessment (DORA),
whose stated intention is to begin ‘putting science into the assessment of
research’. The declaration is signed by ‘an ad hoc coalition of unlikely
insurgents – scientists, journal editors and publishers, scholarly
societies, and research funders across many scientific disciplines’, who
gathered at the annual meeting of the American Society for Cell Biology
(am.ascb.org/dora/; 16 May 2013). An editorial by Bruce Alberts in the May 17
issue of Science (2013, 340, 787) notes that ‘DORA aims to stop the use of
the “journal impact factor” in judging an individual scientist’s work in
order “to correct distortions in the evaluation of scientific research”’.

The origins of the ‘impact factor’ may be traced to a largely forgotten
paper that appeared in Science in 1927, describing a study carried out
at Pomona College in California, which begins on an intriguing note:
‘Whether we would have it or not, the purpose of a small college is
changing’. The authors describe an attempt to draw up a priority list of
chemistry journals to be obtained for the library. Budgetary constraints
were undoubtedly a major matter of concern in the late 1920s. I cannot
resist reproducing here the authors’ stated purpose in carrying out this
exercise over eighty-five years ago, as their words may strike a chord in
readers interested in the problem of uplifting the science departments of
colleges in India today: ‘What files of scientific periodicals are needed
in a college library successfully to prepare the student for advanced work,
taking into consideration also those materials necessary for the
stimulation and intellectual development of the faculty? This latter need
is quite as important as the first because of the increasing demand of the
colleges for instructors with the doctorate degree. Such men are reluctant
to accept positions in colleges where facilities for continuing the
research which they have learned to love are lacking’ (Gross, P. L. K. and
Gross, E. M., Science, 1927, LXVI, 385). The procedure adopted was simple:
draw up a list of journals most frequently cited in the Journal of the
American Chemical Society (JACS), the flagship publication of the American
Chemical Society. Much can be learnt about the history of chemistry (and,
indeed, more generally about science) by examining the list of the top six
journals (other than JACS) recommended for a college chemistry library in
the United States, in 1927: Berichte der Deutschen Chemischen Gesellschaft,
The Journal of the Chemical Society (London), Zeitschrift für Physikalische
Chemie, Annalen der Chemie (Liebig’s), The Journal of Physical Chemistry
and The Journal of Biological Chemistry. Clearly, in the 1920s the
literature of chemistry was overwhelmingly dominated by European journals.
For students growing up in the frenetic world of modern science, I might
add that Science, Nature and PNAS appear far down the list. A similar
exercise carried out today would reveal a dramatically different list of
journals; undoubtedly a reflection of the turbulent history of the 20th
century.
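
The mechanics of the Gross and Gross exercise amount to nothing more than a
frequency count over reference lists. A minimal sketch in Python, with
invented reference data standing in for the citations actually tallied from
JACS (the counts below are illustrative, not taken from the 1927 paper),
might look like this:

    from collections import Counter

    # Each entry stands for one reference found in a JACS article;
    # the tallies here are invented purely for illustration.
    references = [
        "Berichte der Deutschen Chemischen Gesellschaft",
        "The Journal of the Chemical Society",
        "Berichte der Deutschen Chemischen Gesellschaft",
        "Zeitschrift für Physikalische Chemie",
        "The Journal of Biological Chemistry",
        "The Journal of the Chemical Society",
        "Berichte der Deutschen Chemischen Gesellschaft",
    ]

    # Count how often each journal is cited and rank by frequency,
    # which is essentially the Gross and Gross priority list.
    ranking = Counter(references).most_common()
    for journal, count in ranking:
        print(f"{count:3d}  {journal}")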

The journal impact factor emerged in the 1970s as a tool to rank journals.
In the early years, it was largely a metric that was of limited interest.
The revolution in the biomedical sciences resulted in an explosive growth
of journals in the last two decades of the 20th century, a period that
coincided with the dramatic rise of information technology and the
emergence of the internet. The acquisition of the Institute for Scientific
Information by Thomson Reuters lent a hard commercial edge to the marketing
of the tools and databases of scientometrics; the Web of Science began to
enmesh the world of science. Journal impact factors appear unfailingly
every year, making the business of publishing science journals an extremely
competitive exercise. Journal editors scramble to devise schemes for
enhancing impact factors, and scientists are drawn to submit articles to
journals that appear high on the ranking lists. If JIFs were used only to
compare journals, there might have been little to grumble about.
Unfortunately, individuals soon began to be judged by the impact factors of
the journals in which they had published. Some years ago the use of an
‘average impact factor’ was actively promoted in India, to judge both
individuals and institutions. The introduction of the ‘h-index’, a
citation-based parameter for ranking individual performance that appeared
in the literature a few years ago, may have drawn away a few adherents of
the average impact factor. Very few proponents of the JIF as an assessment
tool in India appear conscious of its obvious limitations. Most impact factors
are driven up by a few highly cited papers, while others bask in reflected
glory. The field specific nature of the JIF can lead to extremely
misleading conclusions, when comparing individuals and institutions using
this imperfect metric. Despite these drawbacks, the use of JIF as a tool of
research assessment has reached epidemic proportions worldwide, with
India, China and the countries of southern Europe being
among the hardest hit. Students in India, particularly those working in the
biological sciences and chemistry in many of our best institutions, are
especially self-conscious, constantly worrying about the JIF when they
submit papers.
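
To make the arithmetic behind these metrics concrete: the standard two-year
JIF is the mean number of citations received in a given year by the items a
journal published in the previous two years, and the h-index is the largest
h such that an author has h papers each cited at least h times. The sketch
below, with invented citation counts chosen purely for illustration, shows
how a single highly cited paper can inflate the mean that the JIF reports:

    # Citations received this year by each paper a hypothetical journal
    # published in the two preceding years (invented numbers).
    journal_citations = [0, 1, 0, 2, 1, 0, 150, 0, 1, 0]

    # Two-year impact factor: total citations divided by citable items.
    jif = sum(journal_citations) / len(journal_citations)
    print(f"JIF-style mean: {jif:.1f}")   # 15.5, driven almost entirely by one paper

    # The median of the same data tells a very different story.
    ranked = sorted(journal_citations)
    median = (ranked[len(ranked) // 2 - 1] + ranked[len(ranked) // 2]) / 2
    print(f"Median citations: {median}")  # 0.5

    # h-index of a hypothetical researcher: the largest h such that
    # h of their papers have at least h citations each.
    paper_citations = sorted([12, 9, 7, 5, 5, 3, 1, 0], reverse=True)
    h_index = sum(1 for rank, c in enumerate(paper_citations, start=1) if c >= rank)
    print(f"h-index: {h_index}")          # 5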

The Declaration on Research Assessment (DORA) is a call to take up arms
against the insidious JIF. Its general recommendation is a call for a
boycott: ‘Do not use journal-based metrics, such as Journal Impact Factors,
as a surrogate measure of the quality of individual research articles, to
assess an individual scientist’s contributions, or in hiring, promotion, or
funding decisions.’ Scholarship and achievement can be judged without using
a metric that was never designed for the purpose. The Declaration also has
a message that may well be worth heeding by researchers in India:
‘Challenge research assessment practices that rely inappropriately on
Journal Impact Factors and promote and teach best practice that focuses on
the value and influence of specific research outputs.’ In his Science
editorial, Alberts is trenchant: ‘The misuse of the journal impact factor
is highly destructive, inviting a gaming of the metric that can bias
journals against publishing papers in fields (such as social sciences or
ecology) that are much less cited than others (such as biomedicine).’

Research assessments have also become commonplace in ranking institutions.
The metrics used rely substantially on publication numbers and citations,
invariably based on the Web of Science, although additional parameters
contribute in differing ranking schemes. In recent times, both the Prime
Minister and the President have publicly lamented that no Indian university
or institution appeared in the ‘top 200’ in the world (The Hindu, 5
February 2013 and 16 April 2013). While there may be much to lament about
in Indian higher education, are the rankings really an issue that needs
immediate attention? In an Op-Ed piece in The Hindu (9 March 2013), Philip
Altbach is categorical: ‘For India, or other developing countries to obsess
about rankings is a mistake. There may be lessons, but not rules.... The
global rankings measure just one kind of academic excellence, and even here
the tools of measurement are far from perfect.’ Altbach notes, and many
analysts would undoubtedly agree, that two systems, ‘the Academic Ranking
of World Universities, popularly known as the “Shanghai rankings”, and the
World University Rankings of Times Higher Education (THE) are
methodologically respectable and can be taken seriously’. While the former
measures only research impact, with several parameters weighted towards the
highest level of achievement (the number of Nobel Prize recipients in an
institution), the latter ‘measures a wider array of variables’. Altbach
adds: ‘Research and its impact is at the top of the list, but reputation is
also included as are several other variables such as teaching quality and
internationalization. But since there is no real way to measure teaching or
internationalization weak proxies are used. Reputation is perhaps the most
controversial element in most of the national and global rankings.’
Altbach’s critique of an apparent obsession with university rankings in
India was quickly countered by Phil Baty, editor of the THE rankings, who
warns: ‘...it would be a far greater mistake for Indian institutions and
policy makers to under-use the global rankings than to overuse them’ (The
Hindu, 11 April 2013). It may indeed be important for institutions to
appreciate the rules of the game if they are to achieve a competitive
score. Policy makers would also benefit if they set out to understand the
tools of research assessment before they begin to use them.
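
As an illustrative aside on how such composite rankings are built: most
schemes reduce to a weighted sum of normalized indicator scores, so the
choice of weights and proxies largely determines the outcome. The weights
and scores below are entirely hypothetical and do not reproduce the THE or
Shanghai methodology; the sketch only shows the arithmetic.

    # Hypothetical normalized indicator scores (0-100) for one institution.
    indicators = {
        "research_impact": 62.0,       # e.g. citation-based measures
        "reputation": 40.0,            # survey-based, the most contested proxy
        "teaching": 55.0,              # measured only through weak proxies
        "internationalization": 70.0,
    }

    # Hypothetical weights; real schemes publish their own, differing sets.
    weights = {
        "research_impact": 0.5,
        "reputation": 0.2,
        "teaching": 0.2,
        "internationalization": 0.1,
    }

    composite = sum(weights[k] * indicators[k] for k in indicators)
    print(f"Composite score: {composite:.1f}")  # 57.0 under these weights
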
Source: http://www.currentscience.ac.in/Volumes/104/10/1267.pdf