Chronicle of Higher Education Impact Factor Article
harnad at ECS.SOTON.AC.UK
Mon Oct 10 16:07:26 EDT 2005
Richard Monastersky, The Number That's Devouring Science,
Chronicle of Higher Education, October 1, 2005
[text appended at the end of the comment]
Impact Analysis in the PostGutenberg Era
Although Richard Monastersky describes a real problem -- the abuse of
journal impact factors -- its solution is so obvious that one hardly needs
so many words on the subject:
A journal's citation impact factor (CIF) is the average number of
citations received by articles in that journal. (ISI -- somewhat
arbitrarily -- calculates CIFs on the basis of the preceding two
years, although other time-windows may also be informative.)
There is an undeniable relationship between the usefulness of an
article and how many other articles use and hence cite it. Hence the CIF
does measure the average usefulness of the articles in a journal. But
there are three problems with the way the CIF itself is used, each of
them remediable:
(1) A measure of the average usefulness of the articles in the journal
in which a given article appears is no substitute for the actual
usefulness of each article itself: In other words, the journal CIF is
merely a crude and indirect measure of usefulness; each article's own
citation count is the far more direct and accurate measure. (Using
the CIF instead of an article's own citation count [or the average
citation count for the author] for evaluation and comparison is
like using the average marks for the school from which a candidate
graduated, rather than the actual marks of the candidate.)
(2) Whether comparing CIFs or direct article/author citation counts,
one must always compare like with like. There is no point comparing
either CIFs between journals in different fields, or citation counts
for articles/authors in different fields. (No one has yet bothered
to develop a normalised citation count, adjusting for the different
baseline citation levels and variability of different fields. It
could easily be done, but it has not been -- or if it has been done,
it was in an obscure scholarly article and has not been applied by the
actual daily users of CIFs or citation counts today.)
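A field-normalised count could be as simple as a z-score against a field baseline. The sketch below is illustrative only: the field data are invented, and a real normalisation would need carefully constructed reference sets:

```python
import statistics

def normalised_count(raw_count, field_counts):
    """Express a raw citation count as standard deviations above or
    below the field mean, making scores comparable across fields."""
    mean = statistics.mean(field_counts)
    sd = statistics.pstdev(field_counts)
    return (raw_count - mean) / sd if sd else 0.0

# Invented baselines: one heavily citing field, one lightly citing one.
mol_bio = [40, 55, 60, 45, 50]   # citation counts of comparable papers
maths = [2, 4, 6, 3, 5]

# 60 citations in the first field and 6 in the second come out at the
# same relative standing: about 1.41 standard deviations above the mean.
print(normalised_count(60, mol_bio))
print(normalised_count(6, maths))
```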
(3) Both CIFs and citation counts can be distorted and abused. Authors
can self-cite, or cite their friends; some journal editors can and do
encourage self-citing their journal. These malpractices are deplorable,
but most are also detectable, and then name-and-shame-able and
correctable. ISI could do a better job policing them, but soon the
playing field will widen, for as authors make their articles open
access online, other harvesters -- such as citebase and citeseer
and even google scholar -- will be able to harvest and calculate
citation counts, and average, compare, expose, enrich and correct
them in powerful ways that were inconceivable in the Gutenberg era.
So, yes, CIFs are being misused and abused currently, but the cure is
already obvious -- and a wealth of powerful new resources is on the way
for measuring and analyzing
research usage and impact online, including (1) download counts, (2) co-citation
counts (co-cited with, co-cited by), (3) hub/authority ranks (authorities
are highly cited papers cited by many highly cited papers; hubs cite
many authorities), (4) download/citation correlations and other time-series
analyses, (5) download growth-curve and peak latency scores, (6) citation
growth-curve and peak-latency scores, (7) download/citation longevity scores,
(8) co-text analysis (comparing similar texts, extrapolating directional
trends), and much more. It will no longer be just CIFs and citation counts
but a rich multiple regression equation, with many weighted predictor
variables based on these new measures. And they will be available both
for navigators and evaluators online, and based not just on the current ISI
database but on all of the peer-reviewed research literature.
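Such a composite could take the form of a weighted sum of normalised predictors. In this sketch the metric names and weights are purely illustrative assumptions; real weights would be regression coefficients fitted empirically against a validation criterion:

```python
# Illustrative weights for a composite impact score; in practice these
# would be fitted regression coefficients, not hand-picked constants.
WEIGHTS = {
    "citations": 0.5,      # direct citation count (normalised)
    "downloads": 0.2,      # download count (normalised)
    "hub_authority": 0.2,  # hub/authority rank
    "growth_rate": 0.1,    # citation growth-curve score
}

def impact_score(metrics):
    """Weighted linear combination of predictor variables;
    missing predictors contribute zero."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

article = {"citations": 1.2, "downloads": 0.8,
           "hub_authority": 0.5, "growth_rate": 2.0}
print(round(impact_score(article), 2))  # 0.5*1.2 + 0.2*0.8 + 0.2*0.5 + 0.1*2.0 = 1.06
```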
Meanwhile, use the direct citation counts, not the CIFs.
Some self-citations follow (and then the CHE article's text):
Brody, T. (2003) Citebase Search: Autonomous Citation Database for e-print
Archives, sinn03 Conference on Worldwide Coherent Workforce, Satisfied Users -
New Services For Scientific Information, Oldenburg, Germany, September 2003
Brody, T. (2004) Citation Analysis in the Open Access World Interactive Media
Brody, T., Harnad, S. and Carr, L. (2005) Earlier Web Usage Statistics as
Predictors of Later Citation Impact. Journal of the American Society for
Information Science and Technology (JASIST, in press).
Hajjem, C., Gingras, Y., Brody, T., Carr, L. & Harnad, S. (2005) Across
Disciplines, Open Access Increases Citation Impact. (manuscript in preparation).
Hajjem, C. (2005) Analyse de la variation de pourcentages d'articles en accès
libre en fonction de taux de citations
Harnad, S. and Brody, T. (2004a) Comparing the Impact of Open Access (OA) vs.
Non-OA Articles in the Same Journals. D-Lib Magazine, Vol. 10 No. 6
Harnad, S. and Brody, T. (2004b) Prior evidence that downloads predict
citations. British Medical Journal online.
Harnad, S. and Carr, L. (2000) Integrating, Navigating and Analyzing Eprint
Archives Through Open Citation Linking (the OpCit Project). Current Science
Harnad, S., Brody, T., Vallieres, F., Carr, L., Hitchcock, S.,
Gingras, Y., Oppenheim, C., Stamerjohanns, H. and Hilf, E. (2004) The
Access/Impact Problem and the Green and Gold Roads to Open Access. Serials
Review, Vol. 30, No. 4, 310-314.
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W.,
Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The
Way Forward. D-Lib Magazine 8(10).
Hitchcock, S., Carr, L., Jiao, Z., Bergmark, D., Hall, W.,
Lagoze, C. and Harnad, S. (2000) Developing services for open eprint archives:
globalisation, integration and the impact of links. In Proceedings of the 5th
ACM Conference on Digital Libraries, San Antonio, Texas, June 2000.
Hitchcock, S., Woukeu, A., Brody, T., Carr, L., Hall, W. and
Harnad, S. (2003) Evaluating Citebase, an open access Web-based
citation-ranked search and impact discovery service. Technical Report
ECSTR-IAM03-005, School of Electronics and Computer Science, University of
Southampton.
> The Number That's Devouring Science
> The impact factor, once a simple way to rank scientific journals, has
> become an unyielding yardstick for hiring, tenure, and grants
> By RICHARD MONASTERSKY
> In the beginning, during the late 1950s, it was just an innocent idea in
> Eugene Garfield's head. A Philadelphia researcher who described himself as
> a "documentation consultant," Mr. Garfield spent his free time thinking
> about scientific literature and how to mine information from it.
> He eventually dreamed up something he called an "impact factor,"
> essentially a grading system for journals, that could help him pick out the
> most important publications from the ranks of lesser titles. To identify
> which journals mattered most to scientists, he proposed tallying up the
> number of citations an average article in each journal received.
> This accounting method sounds harmless enough. Outside academe, few people
> have even heard of it. Mr. Garfield, though, now compares his brainchild to
> nuclear energy: a force that can help society but can unleash mayhem when
> it is misused.
> Indeed, impact factors have assumed so much power, especially in the past
> five years, that they are starting to control the scientific enterprise. In
> Europe, Asia, and, increasingly, the United States, Mr. Garfield's tool can
> play a crucial role in hiring, tenure decisions, and the awarding of
> grants.
> "The impact factor may be a pox upon the land because of the abuse of that
> number," says Robert H. Austin, a professor of physics at Princeton
> University.
> Impact-factor fever is spreading, threatening to skew the course of
> scientific research, say critics. Investigators are now more likely to
> chase after fashionable topics -- the kind that get into high-impact
> journals -- than to follow important avenues that may not be the flavor of
> the year, says Yu-Li Wang, a professor of physiology at the University of
> Massachusetts Medical School. "It influences a lot of people's research."
> That influence has also led to a creeping sense of cynicism about the
> business of science publications. Journal editors have learned how to
> manipulate the system, sometimes through legitimate editorial choices and
> other times through deceptive practices that artificially inflate their own
> rankings. Several ecology journals, for example, routinely ask authors to
> add citations to previous articles from that same journal, a policy that
> pushes up its impact factor. Authors who have received such requests say
> that the practice veers toward extortion and represents a violation of
> scientific ethics.
> What's more, investigations into impact factors have revealed problems with
> the basic data used by ISI, the company that tabulates citation statistics
> and journals' impact factors. Started by Mr. Garfield in Philadelphia, ISI
> was bought in 1992 by the Thomson Corporation, which has tried to transform
> the citation enterprise into a more profitable operation by buying up
> databases and promoting its products. With alarming frequency, editors are
> finding fault with the impact factors that Thomson has issued.
> "This was a serious concern," says Alan Nevill, a professor of
> biostatistics at the University of Wolverhampton, in England, who took
> issue with the calculations that ISI made regarding the Journal of Sports
> Science, which he edits. "Academia is being held ransom by these numbers."
> Far From Its Roots
> It wasn't supposed to be this way. "We never predicted that people would
> turn this into an evaluation tool for giving out grants and funding," says
> Mr. Garfield.
> Although he first mentioned the term "impact factor" in a publication in
> 1955, it wasn't until the 1960s that Mr. Garfield and a colleague fully
> developed the concept to help them select the most important journals for a
> new citation index, which has grown into one of the most widely used
> citation tools in science and the social sciences. It didn't make sense,
> they reasoned, to include only the journals that get the most citations,
> because that would eliminate smaller publications. So they invented a type
> of measurement that reflects the average number of citations per article
> for each journal.
> The basic definition has changed little since then, although the process of
> calculating impact factors has become highly automated through the use of
> computer algorithms, which trolled through 27 million citations last year.
> In June, ISI issued its latest set of impact factors, for 5,968 science
> journals and 1,712 social-science journals.
> To calculate the most recent factor for the journal Nature, for example,
> the company tallied the number of citations in 2004 to all of the articles
> that Nature published in 2002 and 2003. Those citations were divided by the
> number of articles the journal published in those two years, yielding an
> impact factor of 32.182 -- the ninth-highest of all journals. It is a number
> that editors and publishers across the world lust after; more than half of
> all science journals listed by ISI score below 1.
> Impact factors caught on because they are an objective measure that serves
> many purposes. Librarians can use them to decide which journals to purchase
> and which to cancel. Editors and publishers can chart their journals'
> impact factors to gauge their progress relative to competitors. And
> scientists can examine the numbers to see where their research papers are
> likely to get the most attention.
> Higher-ranking journals, it turns out, do get a message out better. Matthew
> B. Stanbrook, an assistant professor of medicine at the University of
> Toronto, tracked what happened after 12 medical journals published a joint
> statement on research authorship and sponsorship in 2001 -- an unusual
> situation that provided direct comparisons. Over the following 26 months,
> the highest-impact journal received 100 times as many citations to the
> article as the lowest one of the 12, Dr. Stanbrook reported at a conference
> on peer review and publishing last month in Chicago. "There's a measurable
> value associated with a high-impact journal, which indicates why those
> journals are important," he says.
> Over the years, impact factors have proved so attractive to scientists that
> they started applying them not only to journals but also to researchers.
> Ideally, evaluators would look at the number of citations an individual
> paper receives or a scientist accumulates over his or her career -- but that
> process takes time and money. Impact factors provide a shortcut.
> They also help in the modern world of ultraspecialized science. Members of
> a tenure committee or a hiring panel find it increasingly difficult to
> assess the papers of a candidate working outside their own subdiscipline,
> so they use the impact factor of the journal in which the paper appeared as
> a measure of the paper's quality. By that logic, evaluators rate a paper
> more highly if it appears in a high-impact journal, regardless of what the
> paper actually says.
> Europeans cite another reason that impact factors are popular there. In
> some countries, the community of researchers in a particular field is so
> small that they all know each other and either collaborate or compete.
> Using impact factors to assess individual scientists is seen as an
> improvement over tapping into an old-boy network to make hiring and grant
> decisions.
> Fuzzy Math
> But relying on impact factors to evaluate a person is statistically
> dimwitted, say critics of its spreading influence. The measurement is just
> an average of all the papers in a journal over a year; it doesn't apply to
> any single paper, let alone to any author. For example, a quarter of the
> articles in Nature last year drew 89 percent of the citations to that
> journal, so a vast majority of the articles received far fewer than the
> average of 32 citations reflected in the most recent impact factor.
> Mr. Garfield and ISI routinely point out the problems of using impact
> factors for individual papers or people. "That is something we have
> wrestled with quite a bit here," says Jim Pringle, vice president for
> development at Thomson Scientific, the division that oversees ISI. "It is a
> fallacy to think you can say anything about the citation pattern of an
> article from the citation pattern of a journal."
> Such warnings have not helped. In several countries in Europe and Asia,
> administrators openly use impact factors to evaluate researchers or
> allocate money:
> In England, hiring panels routinely consider impact factors, says Mr.
> Nevill.
> According to Spanish law, researchers are rewarded for publishing in
> journals defined by ISI as prestigious, which in practice has meant
> in the upper third of the impact-factor listings.
> In China, scientists get cash bonuses for publishing in high-impact
> journals, and graduate students in physics at some universities must
> place at least two articles in journals with a combined impact factor
> of 4 to get their Ph.D.'s, says Martin Blume, editor in chief of the
> American Physical Society, who recently met with scientists in China.
> The obsession with impact factors has also seeped into the United States,
> although less openly. Martin Frank, executive director of the American
> Physiological Society, says a young faculty member once told him about a
> policy articulated by her department chair. She was informed that in order
> to get tenure, scientists should publish in journals with an impact factor
> above 5.
> "We are slaves to the impact factor," says Mr. Frank, whose organization
> publishes 14 science journals.
> Impact ranking may now be a tool that controls scientists, rather than the
> other way around. Pressure to publish in the highest-impact science
> journals -- Nature, Science, and Cell -- has led researchers to compete more
> and more for the limited number of slots in those broader journals, thus
> diminishing the specialty titles that have traditionally served as the main
> publications of each discipline. Academe used to be a "publish or perish"
> world, but now the halls of science have turned into a "publish in a
> high-impact journal or perish" environment, says Massachusetts' Mr. Wang.
> He observes that impact factors may even be affecting what kind of research
> is conducted. Top journals require that papers be topical, in addition to
> presenting important science, so researchers are shifting the kinds of
> questions they investigate to accommodate those high-impact journals. "The
> system is going after the short term," says Mr. Wang.
> "For example, it is easy to catch attention when one describes a previously
> unknown gene or protein related to a disease, even if the analysis is done
> only superficially," he says. "Follow-up studies, to uncover the true
> functions of the molecules or sometimes to challenge the initial analysis,
> are typically more difficult to publish in journals of top 'impact.'"
> Catherine D. DeAngelis, editor of the high-impact Journal of the American
> Medical Association, also criticizes the current culture. The impact factor
> "has taken on a life of its own," she says, lamenting that many scientists
> view their work as a failure if they can't get into a top journal. "There
> are wonderful journals that have impact factors lower than some of the
> higher-citation journals, and they're perfectly appropriate for good
> scientists to publish in."
> The whole system has led to increasing discontent among researchers, says
> Dr. DeAngelis. "It's bad for science in that you don't make researchers
> feel good about what they're doing and the fact that their work gets
> published in a good journal," she says. "That's bad. You're a better
> scientist if you're a happy scientist."
> Researchers go to great lengths to place their papers in high-impact
> journals. They will often flip a manuscript from one publication to the
> next, dropping reluctantly down the impact ladder until they find one that
> will accept their work. The system slows the pace of science, say critics,
> because researchers spend their time trying to publish their work rather
> than moving on to the next set of experiments.
> Sometimes authors will put considerable extra work into a paper -- at the
> request of reviewers at top journals -- only to find it eventually rejected.
> "I'll get so exhausted by the whole thing that I won't even publish it or
> will delay it for a year," says Princeton's Mr. Austin.
> Think Quick
> Deluged by so many manuscripts, high-impact journals can send only a
> fraction out to experts for review. Nature, for example, rejects half of
> the submissions it gets without forwarding them to referees, says its
> editor in chief, Philip Campbell.
> Mr. Austin worries about that process, saying that journal editors are
> summarily rejecting unfashionable papers. "That can really limit
> creativity, and really pioneering papers will not necessarily be judged as
> such by these editors," he says, adding that the editors at top journals
> are not active researchers.
> Mr. Campbell responds that editors at Nature all have research experience
> at good labs and keep on top of their fields by attending conferences and
> reviewing the literature. "They are better than most academics in keeping
> track of what's going on," he says. "I would put them up against any
> academic any day in terms of knowing what's going on."
> He also rejects a belief widely held among scientists that Nature rejects
> manuscripts if editors suspect that they won't attract citations and
> therefore will depress the journal's impact factor. If that were true, he
> says, the journal would stop publishing papers in geology or paleontology,
> which rarely receive as many citations as ones in molecular biology.
> "We're perfectly happy with the fact that we publish papers that are much
> less cited than others," says Mr. Campbell, who also notes that Nature has
> regularly voiced skepticism about impact factors in editorials, letters,
> and news articles.
> Many other editors contacted by The Chronicle also deny making judgments on
> the basis of whether a paper will attract citations. But Dr. DeAngelis, of
> JAMA, says editors at some top journals have told her that they do consider
> citations when judging some papers. "There are people who won't publish
> articles," she says, "because it won't help their impact factor."
> She acknowledges that citations sometimes play a role in her own decisions
> about a paper. "If I'm on the edge and we're going back and forth," she
> says, "I might make the decision saying, Will people use this? In that
> case, one of the criteria is: Will they cite it?"
> Yet she also publishes papers that she knows will hurt JAMA's impact
> factor. "We have a special theme issue on medical education, and we
> continue to do it," she says, even though articles in it are cited
> relatively infrequently.
> Fiona Godlee, editor of BMJ (formerly known as the British Medical Journal
> ), agrees that editors take impact factors into account when deciding on
> manuscripts, whether they realize it or not. "It would be hard to imagine
> that editors don't do that," she says. "That's part of the way that impact
> factors are subverting the scientific process."
> She says editors may be rejecting not only studies in smaller or
> less-fashionable fields, but also important papers from certain regions of
> the world, out of fear that such reports won't attract sufficient citation
> attention. "It's distorting people's priorities," she says, "and we have to
> constantly fight against that."
> Cult of the Factor
> Although impact factors have been around for decades, it is only within the
> past 10 years that they have taken on cult status, as the growing use of
> the Internet has given researchers easy access to ISI data. The company
> says the ranking is here to stay.
> "One thing we won't do is change the impact factor as it stands now, just
> because it's become such a key indicator over time," says Mr. Pringle, the
> vice president for development. Rather than alter the original, ISI has
> added additional information and measurement tools to complement the impact
> factor, he says.
> But the number continues to be so influential that some who run journals
> try to manipulate the system. "Publishers have become quite expert in
> skewing it to their own benefit," says Vitek Tracz, chairman of Current
> Science Group, which publishes more than 100 open-access journals.
> One well-known method is to publish more review articlesÂ â those that give
> overviews of a topic but don't usually present new data. They generally
> attract more citations than do original research articles. So when the
> editorial board of the Journal of Environmental Quality met in 2003, it
> resolved to emphasize review articles in order to shore up the journal's
> slipping impact factor.
> Other tactics exploit gaps in the way ISI calculates the impact factor.
> When journals publish news articles, editorials, book reviews, and
> abstracts of meetings, ISI does not count those items as "citable
> articles"; hence they do not go into the denominator of the impact-factor
> calculation. But if those uncounted items get cited in the literature, ISI
> still puts those citations into the numerator, thereby increasing the
> journal's impact factor.
> Managers at ISI and several journal editors contacted by The Chronicle
> dismissed the issue, arguing that news articles and editorials do not get
> cited often. On average that may be true. But some of them gain enough
> citations to significantly boost the impact factors of certain journals,
> says Henk F. Moed, a bibliometrician at the Center for Science and
> Technology Studies at Leiden University, in the Netherlands, who wrote
> about the issue in his new book, Citation Analysis in Research Evaluation
> (Springer, 2005). His analysis of the high-impact journal The Lancet, for
> example, showed that free citations from news articles and similar material
> buoyed the British medical journal's impact factor by 16 percent in 2002.
> Many journals have added a considerable number of uncountable items to
> their mix in recent years, even as they have decreased the number of
> original research articles. In fact, Cell, JAMA, The Lancet, Nature, The
> New England Journal of Medicine, and Science are all now publishing fewer
> countable research items than they were in 1998, according to ISI data.
> At the same time, those top journals and others have made a science out of
> getting publicity for their products. Big journals with well-funded
> public-relations offices send alerts to hundreds of reporters each week
> about the articles slated for their next issues. The system generates news
> items, which have been shown to increase citations to the original
> scientific articles, thus raising impact factors. Smaller, less-visible
> journals don't benefit from the same media connection.
> Crooked Citations
> Editors defend the changes they have made in their journals, arguing that
> editorials, book reviews, news sections, and similar features are important
> and popular with readers. But journal watchers point to other, less
> scrupulous, ways to raise the citation numbers.
> Sometimes journals will run editorials that cite numerous articles from
> previous issues. In a new study, Jan Reedijk, of Leiden University, and Mr.
> Moed found that a significant number of journals get a noticeable jump in
> their impact factors from such self-citations in editorials.
> In other cases, research articles in a journal preferentially cite that
> very journal, with the effect of raising its impact factor. ISI detected a
> clear example of that practice at the World Journal of Gastroenterology.
> The company stopped listing that journal this year because 85 percent of
> the citations to the publication were coming from its own pages. (Despite
> that censure, the journal's Web site has a moving banner that still
> trumpets its 2003 impact factor.)
> The gaming has grown so intense that some journal editors are violating
> ethical standards to draw more citations to their publications, say
> scientists. John M. Drake, a postdoctoral researcher at the National Center
> for Ecological Analysis and Synthesis, at the University of California at
> Santa Barbara, sent a manuscript to the Journal of Applied Ecology and
> received this e-mail response from an editor: "I should like you to look at
> some recent issues of the Journal of Applied Ecology and add citations to
> any relevant papers you might find. This helps our authors by drawing
> attention to their work, and also adds internal integrity to the Journal's
> citations."
> Because the manuscript had not yet been accepted, the request borders on
> extortion, Mr. Drake says, even if it weren't meant that way. Authors may
> feel that they have to comply in order to get their papers published.
> "That's an abuse of editorial power," he says, "because of the apparent
> potential for extortion."
> Robert P. Freckleton, a research fellow at the University of Oxford who is
> the journal editor who sent the message to Mr. Drake, says he never
> intended the request to be read as a requirement. "I'd be upset if people
> read it that way," he says. "That's kind of a generic line we use. We
> understand most authors don't actually do that." He changed the wording in
> the form letter last week to clear up misunderstandings, he said.
> Whatever the intention behind such requests, they are becoming more common.
> Anurag A. Agrawal, an assistant professor of ecology and evolutionary
> biology at Cornell University, has documented similar practices at five
> other ecology journals. "It's embarrassing, and it's a scar on our
> discipline," he says. "Authors are being asked to compromise their
> principles. That chips away at the fabric of the scientific enterprise."
> Mr. Freckleton defends the practice: "Part of our job as editors is making
> sure that our work is getting cited and read appropriately." The policy, he
> says, is not an explicit attempt to raise the journal's impact factor.
> But the policy has done just that, and quite successfully, according to
> The Chronicle's analysis of self-citations to one-year-old articles -- which
> are important in the impact calculation. In 1997 the Journal of Applied
> Ecology cited its own one-year-old articles 30 times. By 2004 that number
> had grown to 91 citations, a 200-percent increase. Similar types of
> citations of the journal in other publications had increased by only 41
> percent.
> The journal was engaged in other questionable activities at the time. Steve
> Ormerod, executive editor from 2000 through 2004, wrote several editorials
> during his tenure that cited his own journal dozens of times. In 2002, for
> example, two of his commentaries cited 103 papers published in the journal
> during 2000 and 2001. Those two editorials alone raised his journal's 2002
> impact factor by 20 percent.
> Mr. Ormerod, a professor of ecology at Cardiff University, in Wales,
> acknowledges that his actions look suspicious, but says "there is a
> less-sinister explanation." He was attempting, he says, to make the journal
> more relevant by examining whether past articles on environmental issues
> had led to policy actions. "As an accident, the impact factor went up at
> the same time as self-citations went up," he says. He advocates removing
> self-citations from the impact calculations completely, to avoid any
> semblance of impropriety.
> Nonetheless, the self-citations at his publication had a measurable effect.
> The ecology journal's impact factor jumped from 1.3 in 1997 to 3.3 in 2004,
> and its ranking within the discipline rose from 29th out of 86 journals to
> 16th out of 107.
> Following inquiries by The Chronicle, Mr. Freckleton said last week he was
> developing a plan to alter the journal's editorials so that self-citations
> will not raise its impact factor.
> Complaints From Researchers
> ISI says it is taking steps to stay ahead of the schemers. "It's not easy,
> but as we become aware of possible abuse, we try to expose that," says
> Marie E. McVeigh, product-development manager. For example, citation
> reports now indicate what percentage of citations to a journal come from
> that same publication.
> While it is trying to track abuse from editors, however, ISI may not be
> doing enough to police itself. Several editors contacted by The Chronicle
> have raised complaints about errors in the company's data and analyses. The
> problems appear to be growing worse.
> Mr. Blume, of the American Physical Society, says researchers have
> contacted him recently to complain that the ISI database is missing
> citations to their articles. "Complaints are on the rise," says Mr. Blume,
> whose organization is looking into the concerns.
> Mr. Nevill, editor in chief of the Journal of Sports Science, says his
> journal suffered when ISI incorrectly counted short meeting abstracts as if
> they were full-fledged original research articles or reviews. That
> miscoding doubled the number of articles credited to the journal each year,
> halving its impact factor, he says.
> Dr. Godlee, of BMJ, says ISI incorrectly counted some items in her journal,
> such as commentaries, with the effect of depressing its impact factor.
> James Testa, director of editorial development at Thomson Scientific, takes
> issue with calling those cases "errors." Questions often arise about how to
> define certain types of articles, and ISI works closely with publishers to
> establish a correct coding system for each journal, he says. The company
> has decided to rerun its impact-factor calculations this year to correct
> problems with 10 to 15 journals, says Mr. Pringle, of Thomson Scientific.
> He says the rising importance of impact factors in science has caused
> editors to pay closer attention to the calculations, which results in them
> raising more complaints than in the past.
> Like many other editors and researchers, Dr. Godlee sees an easy solution
> to the types of problems that have been plaguing the calculations, as well
> as to deliberate deceptions. She suggests that ISI count citations only to
> original research articles, eliminating the problem of news stories,
> editorials, reviews, and other kinds of materials. But ISI has steadfastly
> resisted altering its original formula.
> Given the power of ISI and its impact factors, scientists have little
> choice but to accept the system -- although competitors are emerging that
> could alter the situation. And the growing use of online journals and
> open-access journals could eventually topple the traditional system of
> packaging articles into issues of a journal.
> Like music lovers who download single songs instead of buying complete
> albums, some researchers are starting to download only the articles they
> want, regardless of where they originally appeared. "In terms of where it
> gets published, it's becoming less and less an issue," says Harold P.
> Erickson, a professor of cell biology at Duke University.
> But most scientists still see value in differentiating between the quality
> of articles in, say, Science and Science of the Total Environment. Even Mr.
> Erickson has to face a dean who expects his professors to demonstrate their
> excellence by occasionally publishing in Cell, Nature, Science, and other
> journals with soaring impact factors.
> Section: Research & Publishing
> Volume 52, Issue 8, Page A12
> Copyright © 2005 by The Chronicle of Higher Education