Trashing the Impact Factor

Stephen J Bensman notsjb at LSU.EDU
Mon May 20 09:55:43 EDT 2013


The article below was recently published by the US Chronicle of Higher Education.  I basically agree with what is presented in it, and I think it should be read by the Sigmetrics list.  There has recently been an intensive but misbegotten mathematical effort to convert the impact factor into a universal measure capable of cross-disciplinary comparisons for budgetary purposes.  This effort has always been more intensive in Europe and China than in the US, and the article below basically reflects the US attitude toward such efforts.  The impact factor is a great measure when properly understood and used, but it is not suitable for cross-disciplinary comparisons due to the different structures of different disciplines.  The opposition of the Nature publishing group is historical in nature.  Nature has been against citation analysis ever since Garfield first proposed it in the 1970s, and Nature was always a thorn in Garfield's side.  The opposition of the Nature group is almost ironic, because the JCRs are almost an advertisement for its products: Nature group journals are always at the top of every citation ranking.  It is a function of what my wife calls "the law of opposites."

Respectfully,


Stephen J Bensman, Ph.D.
LSU Libraries
Louisiana State University
Baton Rouge, LA 70803
USA

Researchers and Scientific Groups Make New Push Against Impact Factors
By Paul Basken
The Chronicle of Higher Education (Government), May 16, 2013
http://chronicle.com/article/ResearchersScientific/139337/
More than 150 researchers and 75 scientific groups issued a
declaration on Thursday against the widespread use of journal
"impact factors," blaming the practice for dangerous distortions in
financing and hiring in science.
The impact factor "has a number of well-documented deficiencies
as a tool for research assessment," the scientists said in the letter,
which had been in preparation since a conference led by publishers
and grant-writing agencies last year in San Francisco.
Those deficiencies include the ability of publishers to manipulate
the calculations, and the way the metrics encourage university
hiring and promotion decisions, as well as grant agencies' award
distributions, that can lack an in-depth understanding of scientific
work.
"There is certainly a need for fair and objective methods to evaluate
science and scientists, no doubt about that," said Stefano Bertuzzi,
executive director of the American Society for Cell Biology, which
organized the campaign. "But that need does not change the fact
that the journal impact factor does not measure what it's supposed
to measure when it is applied to evaluations of scientists' work."
For all those who signed the letter, however, the effect may be
overshadowed by those who did not, including some of the world's
leading publishers and representatives of leading research
universities. They include the Nature Publishing Group and
Elsevier, two of the most dominant scientific publishers, and the
Association of American Universities, which represents top-ranked
research institutions.
The editor in chief of Nature, Philip Campbell, said he and other
editors of the company's journals have regularly published
editorials critical of excesses in the use of journal impact factors,
especially in rating researchers.
"But the draft statement contained many specific elements, some of
which were too sweeping for me or my colleagues to sign up to,"
said Mr. Campbell. Among the 18 recommendations in the letter,
journals were asked to "greatly reduce emphasis on the journal
impact factor as a promotional tool."
A spokesman for the AAU, Barry Toiv, said he had no comment on
the matter.
Years of Criticism
The impact factor is a number, calculated annually for each
scientific journal, that reflects the average number of times its
articles have been cited by authors of other articles. Some journals
have been accused of inflating their ratings through practices that
include requiring authors to cite articles that have appeared
previously in the journal.
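In outline, the standard two-year version of the calculation (sketched here for a hypothetical 2012 edition of the Journal Citation Reports) is:

\[
\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by items the journal published in 2010--2011}}{\text{number of citable items the journal published in 2010--2011}}
\]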
The measure was first developed more than 50 years ago as a way to
help librarians decide which subscriptions to maintain. The simple
statistic made sense for that purpose, Mr. Bertuzzi said, but not for
its now-common use by universities and grant-writing agencies in
important hiring and financing decisions.
Although Nature declined to sign the letter, another top-ranked
journal, Science, backed the effort. In an editorial timed to the
release of the so-called San Francisco Declaration, the editor in
chief of Science, Bruce Alberts, said problems attributable to the
overreliance on impact factors include scientists' avoiding riskier
research that's less certain to command a wide audience.
A focus on impact factors also "wastes the time of scientists by
overloading highly cited journals such as Science with
inappropriate submissions from researchers who are desperate to
gain points from their evaluators," Mr. Alberts wrote.
The impact factor has nevertheless withstood years of criticism,
and Mr. Bertuzzi acknowledged there are no simple solutions,
given the financial pressures on universities, publishers, and grant-writing agencies.
Still, there may be some new signs that the criticism is having an
effect. The National Cancer Institute, a division of the National
Institutes of Health, plans this year to join the private Howard
Hughes Medical Institute and a few universities in pressing grant
applicants to broaden their descriptions of career
accomplishments beyond the common list of journal publications.
Under the cancer institute's plan, scientists will be "asked to
describe your five leading contributions to science as a way of
helping a reviewer to evaluate your contributions rather than
depending on where your name is positioned in a paper with 15
authors or 500 authors," the director of the cancer institute, Harold
E. Varmus, told the annual meeting last month of the American
Association for Cancer Research.
Dr. Varmus said on Thursday that he fully backs the San Francisco
Declaration. Agencies, he said, "need to change the culture of
science, especially with respect to the way that scientists evaluate
each other, moving away from simplistic metrics."

Comments
11180470:
We are all aware of the limitations of simplistic measures of quality, from 'lists of best colleges' to 'impact factor.' The problem is more the inappropriate use of these metrics than the metrics themselves. Impact factor may have value for publishers to gauge their penetration of the 'market,' but it has minimal if any value in assessing the value of a scientific publication. Several other attempts to judge scientific merit have been promulgated, each with its limitations. Although the dialog is meritorious, it is unrealistic to anticipate a Golden Rule for scientific quality.
meyergd:
And then there is the "tenure trap" whereby the bombastic "masters-in-charge" can ruin careers simply because they can. Senator McCarthy is long dead, so what IS the validity of the tenure process? The big game is laced through with egoistic bias. Don't assume that my comments are "sour grapes" - I worked the tenure game with extraordinary "success" like a good little lackey. After this game was over I did my very best academic research, which contributed to the advancement of knowledge. It is time to exorcise the present tenure process and its gamesmanship.
vatican:
I commonly joke around in our school that the impact factor is also known as the "incest factor," given that the more a journal's own articles from the last two years are cited (very incestuous there), the higher the "incest factor." The unfortunate thing is that in our society, which is obsessed with measuring everything, there's only one way out of this insanity, and that is a collective movement to oppose the IF or propose something better. Unfortunately the other measures are equally bad, if not worse - e.g., the h-index and citations. In the Australian system, it's even more ridiculous. The funding model is tied to how siloed you can keep your research. You could not get research points (= research dollars from the Feds) if your research was published in another area outside of your main discipline. So much for encouraging cross-disciplinary research down there. Though the system has been scrapped, an equally terrible system has been introduced where a group of "elite individuals" get to
decide what the A*, A, B, C and D journals are. No matter what metrics are used for measuring journals, they are always going to be controversial. Rather than abide by one particular rubric, I use a variety as a guide.
jald3724:
We can't compete without comparisons, and I see the problem as an obsession with
competition in an enterprise that is inherently cooperative. If we really want to get away
from this "how do we know who won" culture, we need to concentrate resources on support
of the research effort, and restrict competitive evaluation to weeding out the few researchers
who really aren't doing their part.
mbelvadi:
Journals that are known to engage in impact factor rigging as described in this article (forcing authors to cite other articles in that journal) should either be excluded from the impact factor rating system altogether, or have their i.f. asterisked in ISI's and other reports of it, like an asterisked sports record.
Antsy Kuhnwisse (in reply to mbelvadi):
Then the publishers will sue Thomson Reuters for defamation.
graddirector:
The biggest problem with the impact factor is that good numbers are highly related to the field that you publish in. Huge fields (such as cancer and cardiovascular disease) may have many field-specific journals with impact factors over 10, while in other fields with fewer researchers the best field-specific journal may have an impact factor of 4 or so. Guess what: if fewer papers are published in an area, the impact factors that papers in the entire field garner will be lower. That is fine as long as this is taken into account. However, when impact factors are used to compare between fields by administrators who cannot be bothered to educate themselves, they just lead to a further glut of hiring in already oversubscribed fields, producing extreme funding competition in some fields (cancer, neuroscience) while other fields worry about keeping enough young investigators to remain viable, even though funding rates in those fields are 2-3X higher than in the oversubscribed ones (vision research is a good example).
Reythia (in reply to graddirector):
Exactly. I work for a marine science college, where the various labs study subjects as varied as fish pathogens, marine paleo-geology, and satellite altimetry. It's been suggested that the dean use impact factors as part of a weighting system on which to base faculty quality metrics -- including who gets tenure, who gets fired, and who gets bonuses (assuming any of us ever get bonuses again).
Now, I sympathize with the idea and I'd really like to support it... in theory. After all, everyone knows that there are "great" journals, "good" journals, and "lousy" journals, right? The problem is, if you compare the typical impact factor of a "good" genetics journal to the typical impact factor of a "good" satellite gravity journal, the genetics journal will have TWICE the impact factor of the other. That's just due to the fact that there are more people studying genetics than satellite gravity, not any measure of the quality of work in the journals.
Very frustrating.
graddirector (in reply to Reythia):
In our P&T decisions at the department level, we have the candidate give a description of each journal, including its relative ranking based on impact factor within a journal category (i.e., within the top 20% of journals published in biochemistry, for instance), but we do not have them report the actual impact factor number. This way, publishing in good journals is highlighted, and publishing in crappy journals is revealed. This does allow some comparison among fields without as much bias in favor of those working in large fields. While it is still not perfect, it has been a good compromise overall and has been accepted well by the upper-level promotion committees.
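The within-category ranking described in that comment can be sketched in a few lines of Python; the categories and impact factor values below are invented purely for illustration, not taken from any real ranking:

# Sketch of the relative-ranking idea: report where a journal falls within its
# own subject category rather than reporting its raw impact factor.
# All journal categories and numbers here are hypothetical.

def percentile_rank(category_ifs, journal_if):
    """Fraction of journals in the category whose impact factor is
    less than or equal to journal_if (1.0 = top of the category)."""
    at_or_below = sum(1 for x in category_ifs if x <= journal_if)
    return at_or_below / len(category_ifs)

biochemistry = [1.2, 1.8, 2.5, 3.1, 3.9, 4.4, 5.0, 6.8, 9.7, 12.3]   # hypothetical category
satellite_geodesy = [0.8, 1.1, 1.4, 1.9, 2.3, 3.0]                   # hypothetical category

print(percentile_rank(biochemistry, 6.8))       # 0.8 -> top 20% of its category
print(percentile_rank(satellite_geodesy, 2.3))  # ~0.83 -> also near the top, despite a much lower IF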
Reythia (in reply to graddirector):
Yeah, the main problem we have is that our college is so interdisciplinary that it's hard to list meaningful "average" impact factors that would hold for more than a single professor, just because so few profs publish in the same journals. Of course, there are a few larger ones (Nature and Science, and others like GRL and JGR) which lots of profs from different areas publish in, but most of our publications happen in more discipline-specific journals. So the more-or-less-final decision was to remove the impact factor rating altogether. No one's really thrilled with it, but the only
other option was to come up with an "average" impact factor rating that
differed for almost every prof -- and there's no way that would have worked
out evenly. :(
graddirector (in reply to Reythia):
Well, that is why the relative ranking in the professor's field works pretty well. We are also very interdisciplinary, and few of us publish in the same journals in my department either. Both SCImago and Web of Science classify journals into broad subject areas, making it easy to calculate whether a journal is a "top 25%" genetics journal versus a "top 25%" biophysics journal versus a "top 25%" engineering journal, etc. This also helps truly interdisciplinary folks explain things: depending on the project, they could be publishing their work in medicine, mathematics, engineering, and physiology journals, all fields with pretty different criteria for productivity.
desart (in reply to graddirector):
While the method you're employing does address the discrepancies between what constitutes a high IF in one field vs. a high IF in another, if those journal rankings you're continuing to use are based on impact factor, you're still using a ranking that is based on a metric with acknowledged flaws, and one that purports to measure the quality of journals, NOT of any particular piece of research that appears in them. Why are we not evaluating an individual's research (and the published results of that research) at face value, rather than basing part of that evaluation on a flawed metric's measure of the quality of the _vessel_ those research results appeared in?
graddirector (in reply to desart):
You have an excellent point, and that is why the h-index is used by some as an alternative. The problem comes when decisions about promotion need to be made shortly after a piece of work is completed. Even if the paper is in a so-called high-impact journal, one cannot really know whether a paper will garner 2 citations in its life or 2000. If you are looking at the end of someone's career, the h-index is a reasonable measure of impact since it includes all 20-40 years of scientific productivity. The thing is that many folks in engineering, computer science, etc. are using it to convince search committees that their graduate student/postdoc publications are high impact. Who knows if the paper will ever be cited after the current year, or whether a paper will become highly cited in a
few years when its cutting-edge importance is better appreciated. Overall, I do not know how to deal with these issues most fairly for promotion in situations like Reythia's (or mine), where the environment is so diverse that it is difficult to judge the importance of one's colleagues' work. In the end, some objective (albeit imperfect) measure is needed. At least for us, it seems to be reasonably fair to let the reviewers of journals in a tenure candidate's field judge whether it meets the standards for a highly ranked journal. Imperfect, yes, but there are not a lot of alternatives.
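Since the h-index comes up here as the usual alternative, a minimal sketch of how it is computed may help; the citation counts below are invented for illustration:

# h-index: the largest h such that the author has h papers with at least h citations each.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([42, 18, 9, 7, 6, 3, 1, 0]))  # 5: five papers with at least 5 citations each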
phdbusiness:
11180470, amen to that, but what I would propose is a combination of measures, which I think is what many of our learned colleagues use when making tenure decisions. What that combination may be is probably somewhat discipline- and context-specific, but I am sure that if we put our heads together we can come up with a set of measures which, while not perfect, are at least acceptable to most. This rush to simplistic conclusions, mostly not warranted, I would think drives most of us nuts, but we need to stay vigilant and propose solutions that make sense.
beagle99 (in reply to phdbusiness):
Is there any blog or online resource which regularly reports on/synthesizes successful alternative-to-the-IF practice?
jokrausdu (in reply to beagle99):
In the United Kingdom, they have the Research Excellence Framework (REF) 2014 Guidelines, which say, "No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs." See http://www.ref.ac.uk/faq/all/ However, some researchers worry that the guidelines are being ignored: http://blogs.lse.ac.uk/impacto...
abrower1234:
Even articles in journals like Science and Nature garner more citations after the first two years than they do in the time period used to calculate the impact factor, and many other
journals peak or plateau at four or five years, sometimes with a very long tail towards ten or
twenty years. Thus, the impact factor does not even measure what it is supposed to
measure, let alone the quality of individual papers.
prof2291:
It's about time. I've been on department and college P&T committees and I absolutely will
not, in any way, rely on impact factors, citation indexes, or any other such metrics in my
decisions.
JD Eveland:
A good many years ago, when I was working for the National Science Foundation, we did an
internal study, later published in Science and Technology Studies, of the research
management practices at the top 100 research universities in the US. Specifically, we
studied three different departments, as representatives respectively of applied science, pure
science, and social science: electrical engineering, chemistry, and economics. There were a
lot of interesting findings, but one germane to this discussion was a publication pattern
(numbers are approximate, based on recollection, but order-of-magnitude correct.) In
electrical engineering, the average number of authors on a paper was about 12, and papers
were about 2-3 pages long. In chemistry, the authors averaged about 4, with 5-6 page papers;
while in economics, the average number of authors on a paper was about 1.1, with papers of
10-15 pages. EE papers tended to have as authors all the members of the research team, to
reflect relatively small chunks of an ongoing larger project, and to shuffle the rankings of the
authors on subsequent papers. Chemistry papers tended to reflect the same pattern, but
with smaller projects. Economics papers were almost always single-authored, and reflected
the completion of a study rather than mid-term progress.
I am not sure how, if at all, these differences in professional practice are reflected in the
impact factor calculations, but clearly they would affect publication patterns. Research
publication is not a one-size-fits-all proposition; any systematic evaluation of research
outcomes thus needs to allow for significant disciplinary differences.
csulb1250 (in reply to JD Eveland):
Would the paper referred to be:
Walton, A. L., Tornatzky, Louis G., & Eveland, J. D. "Research Management at the University Department." Science and Technology Studies 4.3/4 (1986 Autumn/Winter): 35-38. Retrieved from JSTOR 17 May 2013.
???


More information about the SIGMETRICS mailing list