FW: The import of Impact by Sophie L. Rovner C&E News May 19, 2008

Eugene Garfield eugene.garfield at THOMSONREUTERS.COM
Tue May 27 16:31:36 EDT 2008

C&E News

May 19, 2008  Volume 86, Number 20  pp. 39-42 


The Import Of Impact


New types of journal metrics grow more influential in the scientific community


Sophie L. Rovner <http://pubs.acs.org/cen/staff/biosw.html> 


AT ONE POINT in his career, Nobel Laureate Sir Harold W. Kroto <http://www.kroto.info> was the second most highly cited chemist in Britain, topped only by the University of Southampton's Martin Fleischmann, one of the proponents of cold fusion.

Kroto, who codiscovered C60 and is currently a chemistry professor at Florida State University, declines to draw any conclusions from that experience. But given the ultimate fate of cold fusion, the anecdote suggests that citation statistics aren't always a good indicator of scientific excellence.

Metric Equalizer: Hirsch introduced the h index as a "more democratic assessment of people's research." (Photo courtesy of Jorge Hirsch/U of California, San Diego)

Still, citation counts are used to rank the performance of individual academic researchers, their departments, and even their universities, and thus they help influence promotion and funding decisions.

Citation metrics and other statistics related to usage are also used to evaluate the significance and impact of individual journals. The statistics help librarians select which journals to subscribe to and help authors decide where to submit their manuscripts.

The journal impact factor "is probably the most well-established metric that's out there," according to Susan King, senior vice president in the journals publishing group at the American Chemical Society, which publishes C&EN. "Librarians, publishers, authors, and grant-funding bodies all pay attention to the impact factor."

Eugene Garfield first introduced the concept of an impact factor in 1955, when he was director of the Institute for Scientific Information (ISI). Since then, ISI, which is now part of Thomson Reuters <http://www.thomsonreuters.com> , has developed dozens of other metrics, including the immediacy index (which indicates how soon after publication a journal's articles are cited on average), the percentage of self-citations by individual authors, and lists of researchers whose papers have received the highest number of citations over the past 10 or 20 years.

The impact factor of a journal is calculated by first counting the number of citations received in a given year by items the journal published in the previous two years. That quantity is then divided by the number of articles published in the journal in those two years.
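To make the arithmetic concrete, here is a minimal Python sketch of that two-year calculation. The function name and the sample figures are invented for illustration and are not drawn from any real journal or from Thomson Reuters' own tooling.

    def impact_factor(citations_this_year, items_last_two_years):
        """Two-year impact factor: citations received this year by items
        published in the previous two years, divided by the number of
        citable items published in those two years."""
        return citations_this_year / items_last_two_years

    # Hypothetical journal: 1,200 citations in 2007 to its 2005-2006 items,
    # of which there were 400, for an impact factor of 3.0.
    print(impact_factor(1200, 400))  # 3.0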

Although the impact factor is widely viewed as a general measure of a journal's quality, importance, and influence, it has some drawbacks. "If you look at chemistry journals, quite often articles will be cited long after that two-year period," King says. "Just looking at a relatively narrow window (two years) may not truly capture those journals that have a longer citation half-life."

Furthermore, the impact factor measures the average number of citations per article in a given journal. But by itself it does not give a full picture of the underlying data because "citation data tend to be very skewed," says James Pringle, vice president of product development for Thomson Reuters' scientific business. For a given journal over a period of time, he explains, "some papers are very highly cited; other papers are not cited at all." The same goes for an individual author's collection of papers.

One way to address this issue is to use other types of measures, such as the h index, Pringle says. "It's a statistical way of showing how many papers within the individual's work rank at a certain level."

"The main advantage of the h index is that it is less susceptible to distortions and fluctuations," says Jorge E. Hirsch <http://pubs.acs.org/isubscribe/journals/cen/86/i21/html/physics.ucsd.edu/%7ejorge/jh.html> , a physics professor at the University of California, San Diego, who introduced the h index in 2005 (Proc. Natl. Acad. Sci. USA 2005, 102, 16569). He defines the h index as the maximum number of an author's papers that have been cited at least h times.

Other methods continue to emerge. Last year, for instance, University of Washington biologist Carl T. Bergstrom <http://pubs.acs.org/isubscribe/journals/cen/86/i21/html/octavia.zoology.washington.edu>  and colleagues proposed the Eigenfactor <http://www.eigenfactor.org> . This metric, which is calculated by using network theory, ranks journals according to influence. A journal is considered influential if its articles are heavily cited within five years of publication by other influential journals. The Eigenfactor team says the metric measures a journal's importance to the scientific community.
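The published Eigenfactor algorithm involves more machinery (among other things, it excludes journal self-citations and damps the random walk, much as web page-ranking calculations do), but the core idea that influence flows from influential citing journals can be sketched with a simple power iteration. The journals and citation counts below are invented:

    import numpy as np

    # Toy citation matrix: entry [i, j] = citations from journal j's recent
    # articles to journal i's articles (journals and counts are invented).
    journals = ["A", "B", "C"]
    C = np.array([[ 0, 30, 10],
                  [20,  0, 40],
                  [ 5, 15,  0]], dtype=float)

    # Column-normalize so each citing journal distributes one unit of influence.
    P = C / C.sum(axis=0)

    # Power iteration: a journal is influential in proportion to the
    # influence of the journals whose articles cite it.
    influence = np.full(len(journals), 1.0 / len(journals))
    for _ in range(100):
        influence = P @ influence
        influence /= influence.sum()

    print(dict(zip(journals, influence.round(3))))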

The Eigenfactor, h index, and impact factor are all calculated from citations, meaning that an author has to have read an article and then written another article citing the first one. However, readers based in industry "tend to read but not cite because they are publishing patents," King says. "If you've got an applied journal, you may be distressed because it's got a low impact factor. When you look at the usage, however, you may see that it's used very heavily." In that case, even though the journal has a low impact factor, it's actually a very important journal because it's widely used, she says.

Such usage is tracked by counting downloads of online documents. A raw usage number isn't all that informative, however. "All other things being equal, a journal that publishes 2,000 articles a year is going to generate significantly more requests for downloads than one that publishes merely 50," says Richard Gedye, research director for the journals division at Oxford University Press <http://www.oup.com>  in the U.K. A "journal usage factor" that compensates for such differences is under development. He adds that usage factors offer an advantage over citation data in that "usage data are available pretty well from day one of publication."

Gedye chairs the Counting Online Usage of Networked Electronic Resources group <http://www.projectcounter.org>  (COUNTER), which was established in 2002 to make sure that publishers, librarians, and others measure usage in a consistent way. COUNTER is working with the U.K. Serials Group <http://www.uksg.org> , which promotes collaboration on matters related to publishing, and other organizations to develop the usage factor <http://www.uksg.org/usagefactors> .

ACS is among the publishers that are helping scope out parameters for this metric, such as how it should be calculated and what time frame should be used. Gedye says the usage factor will have some similarity to the impact factor. For instance, it might be calculated by taking total usage during period x of articles published during period y, and dividing that number by the total number of articles published during period y.
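Under those assumptions, the arithmetic would mirror the impact factor calculation, with downloads in place of citations. The periods and figures below are placeholders, since the actual parameters were still being worked out:

    def usage_factor(downloads_in_period_x, articles_published_in_period_y):
        """Proposed usage factor (sketch): total usage during period x of
        articles published during period y, divided by the number of
        articles published during period y."""
        return downloads_in_period_x / articles_published_in_period_y

    # Hypothetical journal: 90,000 downloads of the 300 articles it published.
    print(usage_factor(90_000, 300))  # 300.0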

The next stage of the project will test proposals for calculating and auditing the metric with real data provided by publishers, but there are several nuances to consider. "If we want to look at the usage in a given year of articles published in specific other years, then we have to address how we define when an article was published," Gedye says. "Very often an article goes up online months before it appears in a designated issue in print. We have to make sure that we are looking at statistics from when articles originally came online."

In addition, the usage factor team is "aiming to compensate, in a way that the impact factor doesn't, for the point in a calendar year when an article is published," Gedye says. "If you are looking at average amounts of downloads per article over a given period of time for articles published in a given year, clearly there's going to be more usage of an article published in January than there is of one published in November," he explains. "So we're looking at average downloads of articles in the first 12 months of their life or the second 12 months of their life, irrespective of when in the year they were published. We feel that will create a more level playing field."
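A minimal sketch of that idea: count each article's downloads within the first 12 months after its own online publication date rather than within a calendar year, so a November paper is not penalized relative to a January one. All identifiers, dates, and counts below are invented:

    from datetime import date, timedelta

    # Invented sample data: online publication dates and a download log.
    pub_dates = {"a1": date(2007, 1, 15), "a2": date(2007, 11, 20)}
    download_log = [
        ("a1", date(2007, 3, 1)), ("a1", date(2008, 2, 1)),
        ("a2", date(2007, 12, 5)), ("a2", date(2008, 9, 30)),
    ]

    def downloads_in_first_year(article_id):
        """Downloads within 12 months of the article's own online publication
        date, irrespective of where in the calendar year it appeared."""
        start = pub_dates[article_id]
        end = start + timedelta(days=365)
        return sum(1 for aid, d in download_log
                   if aid == article_id and start <= d < end)

    for aid in pub_dates:
        print(aid, downloads_in_first_year(aid))  # a1 1, a2 2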

The group also wants to avoid "the issue that you get with the impact factor, where some items appear in the numerator but not in the denominator," Gedye says. "For example, the count of citable articles doesn't include letters to the editor, but if a letter to the editor is cited, it counts in the numerator. So if you publish a lot of letters to the editor that are very controversial and interesting, then your impact factor is going to get a bit of a boost."

The numerator-denominator discrepancy isn't the only criticism that's lobbed at Thomson Reuters' impact factor. Indeed, the company periodically comes under fire over the accuracy and import of its metrics. For instance, Mike Rossner, executive director of Rockefeller University Press <http://www.rockefeller.edu/rupress> , and colleagues recently took the company to task for what the authors perceive as errors and inconsistencies in its data (J. Cell Biol. 2008, 180, 254, and 2007, 179, 1091). Thomson Reuters posted a rebuttal on its website (scientific.thomsonreuters.com/citationimpactforum/8427045 <http://www.scientific.thomsonreuters.com/citationimpactforum/8427045> ).

As a general rule, Thomson Reuters' Pringle says, "there's going to be some small percentage of things that need to be corrected," given the tremendous volume of articles the firm analyzes. He adds that the company fixes errors brought to its attention by authors and publishers.

Despite the controversies, impact factors and other journal metrics have a wide following in academia.

Librarians use impact factors to guide decisions about journals, such as whether to renew a subscription to a particular journal or to subscribe to a new one, ACS's King says. "Librarians are also increasingly using usage, and particularly cost per use, as a means of helping them develop their collections."

PUBLISHERS USE several metrics to assess how their journals measure up against competing publications. For example, King evaluates ACS journal performance by impact factors and numbers of manuscripts submitted and articles published. She is also examining the utility of the h index and Eigenfactor for journal evaluation. "I think you want as many tools as possible when you're looking at journals," she explains.

King tracks changes over time in the impact factors for ACS journals and their competitors. "If the impact factor of an ACS journal is declining while the other journals in that area are going up," she notes, "I'd want to try and find out why. Has there been some change in the number or types of articles published in a given journal?" For instance, an increase in the number of review articles can boost a journal's impact factor because reviews tend to be cited more heavily than original research articles, she says. On the other hand, if the impact factors of all of the journals in a particular field are dropping, it could indicate that scientific interest in that field is waning, she says.

Gedye (Photo courtesy of Richard Gedye/Oxford University Press)

Thomson Reuters notes that the impact factor must be used judiciously. With this metric, "the object under study is a journal. It's not an individual or a department or a university or a country," Pringle says.

Despite the cautions voiced by Thomson Reuters, the impact factor is "used in ways that were never anticipated," Pringle says. "For example, people will take the impact factor of each journal in which someone has published and then come up with a ranking of an individual author on that basis, and the metric was never really intended for that purpose." Just because that author published a paper in a journal with a high impact factor, it would be incorrect to assume that that specific paper would be highly cited, Pringle explains. An individual author's work could be better assessed by summing up the number of citations to each of that scientist's papers.

HOWEVER, a low number of citations for an author's work in a particular field doesn't necessarily mean the work is low-caliber. For example, Kroto says the papers describing his most original and intellectually important work garnered few citations, "partly because I left the field and didn't continue to tell people that we started it off." The work, performed at the University of Sussex <http://www.sussex.ac.uk>  with John F. Nixon, showed that multiple bonds could be formed between carbon and phosphorus and launched the field of phospha-alkyne and phospha-alkene chemistry, Kroto says. "I doubt whether I would get a citation at all today, although it's a major field of chemistry."

"Evaluation, particularly where individuals are involved, has to be done extremely carefully, because one is dealing with people's careers," Pringle says.

Thomson Reuters has developed a set of guidelines for using citation analysis to evaluate research. For instance, when analyzing metrics about a particular author, university, or country, it's important to establish appropriate benchmarks against which the scientist or institution can be compared, Pringle says. "If I'm analyzing papers produced by a university within a field, I need to take into account the expected citation rate of papers in that field, in the journals underlying it, in the years in which the articles were produced."

Pringle adds, "If I'm a biologist, I would expect to see different levels of citation counts than if I were a chemist or a physicist or an economist" because citation practices vary tremendously across fields.

Just how could citation counts be used in chemistry departments?

Robin L. Garrell <http://www.chem.ucla.edu/dept/Organic/garrell.html> , a chemistry and biochemistry professor at the University of California, Los Angeles, says her department has discussed the value of h indexes in decisions involving promotion to tenure and the hiring of senior faculty.

As with other metrics, the h index must be used with care. "It's widely understood that assessing the true impact of work can only be done over a fairly long timeline," Garrell says. "A lot of work might be flashy and get cited right away, but the true value of lots of very high-quality work is only known over time." She says that makes the h index a "questionable parameter to use for people who are at a very early stage in their career or if you are trying to analyze the impact of work that's just been done in the last couple of years, which is typically the case for someone coming up for advancement to tenure. The work may be superb, but the true impact has yet to manifest itself."

As a result, Garrell adds, "many people have concluded that the h index is more useful as a retrospective metric, where you're looking at someone at a more advanced stage in their career. You could say, 'This person's work has been cited a lot more than someone else's work, or the individual has a higher h index.' Then the metric has value to people who perhaps aren't in that field," Garrell says. "The fact is, at that point, people in the field really know the substance of the work, so the metric doesn't add much to what they think."

When making hiring and promotion decisions, Garrell's department also discusses whether individuals are publishing in the super-high-impact journals.

"There are very different feelings among different people about the value of that metric," she says. "Some say that it is absolutely essential for candidates for tenure to have published in Science and Nature. This has been challenged by many people who raise the concern that not all topics are suitable for those magazines," Garrell says.

FOR INSTANCE, it would be tough to write up a total synthesis or the development of a theoretical model for the broad readership of Science and Nature, she explains. In addition, neither journal "allows for a full presentation of the results," which can make replication of a reported experiment difficult for other scientists. "So the more traditional journals (many of the high-impact ACS journals, for example) are often more useful to the scholarly community" for advancing science, she says.

Furthermore, "I feel that the expectation to publish in Science and Nature potentially disadvantages outstanding scholars whose personal style is just less aggressive," Garrell says. "To get your work in these journals, you may have to communicate with the editorial board and hammer on them."

The likelihood that a manuscript will be accepted by Science or Nature can be swayed by factors apart from quality, such as how hot the subject matter is, Garrell says. In addition, "once you start getting into Science and Nature, it's easier to get into Science and Nature. Certain people have published a lot of papers in Science and Nature. Jan Hendrik Schön is one," she says, referring to the Bell Labs physicist disgraced for publishing articles based on falsified experimental data (C&EN, Sept. 30, 2002, page 9 <http://pubs.acs.org/cen/topstory/8039/8039notw5.html> ). His papers were later retracted.

Hirsch's own troubles with breaking into the ranks of the top-notch journals helped drive his determination to create the h index. "I have had difficulty publishing in very high-impact journals because of the somewhat controversial nature of my research," which questions the conventional theory of superconductivity, he says. "I have had several papers that got published in not-as-high-impact journals and then got lots of citations because after some time people realized that they were good papers. So I partly was motivated by the desire to have more of a democratic assessment of people's research that doesn't depend on whether they got their paper published in a high-impact journal."

Hirsch adds that "it's good to have some quantitative parameters" to supplement and validate subjective criteria "when you're thinking of hiring or promoting people." At the same time, he emphasizes that the h index and other quantitative parameters should be "evaluated together with more detailed evidence on the research, on the content of the papers, and on recommendations."

"If you've actually gone through that detailed reading of the scholarly work, you might conclude that the metrics don't add much, and so the people who are using them more are the people who can't or won't go through that effort," Garrell says.

That might be the case at a university's administrative level. For instance, a dean or other administrator might reject a new hire recommended by a faculty committee because the candidate has a low h index, Hirsch says.

Reliance on metrics can arise even at the department level if the department includes professors from a lot of different subdisciplines, "not all of whom would feel competent or interested in reading all the papers by someone who's coming up for promotion," Garrell says. That's why departments solicit the opinions of outside reviewers who can "assess quality and creativity and also whether the work has had or is beginning to have an impact."

RUMORS ABOUND that citation metrics play a part in grant allocation. Garrell says she's never observed peer review panels in the U.S. using such metrics when making decisions about awarding grants. But she notes that reviewers do examine the quality and quantity of the investigator's work.

Likewise, the U.K.'s national research-funding apparatus currently relies on peer review rather than citation analysis when making grant decisions for its universities and other educational institutions.

But that situation is changing. After this year, the next national evaluation of academic research departments in the U.K., known as the Research Excellence Framework, will utilize citation metrics. The Higher Education Funding Council for England <http://www.hefce.ac.uk> , which distributes public money to universities and colleges for teaching and research, is implementing the shift. The new assessment method will be phased in beginning in 2011.

Observations drawn from the U.K. experience should help clarify exactly how large a role citation metrics should play in the research sector. 

When responding, please attach my original message
__________________________________________________
Eugene Garfield, PhD. email:  garfield at codex.cis.upenn.edu 
home page: www.eugenegarfield.org
Tel: 215-243-2205 Fax 215-387-1266
President, The Scientist LLC. www.the-scientist.com  
400 Market St., Suite 1250 Phila. PA 19106- 

Chairman Emeritus, ISI www.isinet.com 
3501 Market Street, Philadelphia, PA 19104-3302
Past President, American Society for Information Science and Technology (ASIS&T) www.asis.org 

 
