From kretschmer.h at T-ONLINE.DE Thu Dec 1 04:58:53 2011 From: kretschmer.h at T-ONLINE.DE (kretschmer.h@t-online.de) Date: Thu, 1 Dec 2011 10:58:53 +0100 Subject: 8th Int Conf Webometr. Informetr. Scientom. & 13th COLLNET Meeting South Korea October 2012 Message-ID:

Please accept our apologies for cross-posting!

1st Announcement / Call for Papers
Eighth International Conference on Webometrics, Informetrics and Scientometrics (WIS) & Thirteenth COLLNET Meeting
23-26 October, 2012, Seoul, Korea
http://collnet2012.ndsl.kr [1]

Host: Korea Institute of Science and Technology Information (KISTI)
General Chair: Hildrun Kretschmer (Germany, China)
Local Chair: Honam Choi (South Korea)
Programme Committee: COLLNET Members http://www.collnet.de/ [2] and Local Programme Committee

Important Dates:
Extended Abstract (3 pages; abstracts shorter than 2.5 pages will not be accepted): April 15, 2012 (deadline)
Acceptance: May 15, 2012
Full Paper: July 30, 2012 (deadline) (Camera-ready version, maximum 10 pages including tables, figures, references)

The extended abstracts will be peer reviewed by the Programme Committee. The accepted full papers will be published in the proceedings.

Please send your extended abstracts to: Hildrun Kretschmer Kretschmer.h at onlinehome.de [3]
Please also send a copy to: Kyungran Noh infor at kisti.re.kr

For more details of the conference, please visit the website: http://collnet2012.ndsl.kr [4]

Links: ------ [1] http://collnet2012.ndsl.kr [2] http://www.collnet.de/ [3] mailto:Kretschmer.h at onlinehome.de [4] http://collnet2012.ndsl.kr
From notsjb at LSU.EDU Fri Dec 2 12:32:46 2011 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Fri, 2 Dec 2011 11:32:46 -0600 Subject: Scientific Certainty Message-ID: Two Wall Street Journal articles on scientific certainty. The first discusses it with respect to its economics. Stephen J Bensman, Ph.D. LSU Libraries Louisiana State University Baton Rouge, LA 70803 USA Scientists' Elusive Goal: Reproducing Study Results By GAUTAM NAIK Two years ago, a group of Boston researchers published a study describing how they had destroyed cancer tumors by targeting a protein called STK33. Scientists at biotechnology firm Amgen Inc. quickly pounced on the idea and assigned two dozen researchers to try to repeat the experiment with a goal of turning the findings into a drug. WSJ's Gautam Naik has details of challenges scientists face in reproducing claims made by medical journals. Photo: Sandy Huffaker/The New York Times It proved to be a waste of time and money. After six months of intensive lab work, Amgen found it couldn't replicate the results and scrapped the project. "I was disappointed but not surprised," says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. "More often than not, we are unable to reproduce findings" published by researchers in journals. This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced. Photo: Bayer. Researchers at Bayer's labs often find their experiments fail to match claims made in the scientific literature. "It's a very serious and disturbing issue because it obviously misleads people" who implicitly trust findings published in a respected peer-reviewed journal, says Bruce Alberts, editor of Science. On Friday, the U.S. journal is devoting a large chunk of its Dec. 2 issue to the problem of scientific replication.
Reproducibility is the foundation of all modern research, the standard by which scientific claims are evaluated. In the U.S. alone, biomedical research is a $100 billion-a-year enterprise. So when published medical findings can't be validated by others, there are major consequences. Drug manufacturers rely heavily on early-stage academic research and can waste millions of dollars on products if the original results are later shown to be unreliable. Patients may enroll in clinical trials based on conflicting data, and sometimes see no benefits or suffer harmful side effects. There is also a more insidious and pervasive problem: a preference for positive results. Unlike pharmaceutical companies, academic researchers rarely conduct experiments in a "blinded" manner. This makes it easier to cherry-pick statistical findings that support a positive result. In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists needs more successful experiments to its name, not failed ones. An explosion of scientific and academic journals has added to the pressure. When it comes to results that can't be replicated, Dr. Alberts says the increasing intricacy of experiments may be largely to blame. "It has to do with the complexity of biology and the fact that methods [used in labs] are getting more sophisticated," he says. It is hard to assess whether the reproducibility problem has been getting worse over the years; there are some signs suggesting it could be. For example, the success rate of Phase 2 human trials, where a drug's efficacy is measured, fell to 18% in 2008-2010 from 28% in 2006-2007, according to a global analysis published in the journal Nature Reviews in May. "Lack of reproducibility is one element in the decline in Phase 2 success," says Khusru Asadullah, a Bayer AG research executive.
In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature. The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet, even the data in the most prestigious journals couldn't be confirmed, Bayer said. In 2008, Pfizer Inc. made a high-profile bet, potentially worth more than $725 million, that it could turn a 25-year-old Russian cold medicine into an effective drug for Alzheimer's disease. The idea was promising. Published by the journal Lancet, data from researchers at Baylor College of Medicine and elsewhere suggested that the drug, an antihistamine called Dimebon, could improve symptoms in Alzheimer's patients. Later findings, presented by researchers at the University of California Los Angeles at a Chicago conference, showed that the drug appeared to prevent symptoms from worsening for up to 18 months. "Statistically, the studies were very robust," says David Hung, chief executive officer of Medivation Inc., a San Francisco biotech firm that sponsored both studies. In 2010, Medivation along with Pfizer released data from their own clinical trial for Dimebon, involving nearly 600 patients with mild to moderate Alzheimer's disease symptoms. The companies said they were unable to reproduce the Lancet results. They also indicated they had found no statistically significant difference between patients on the drug versus the inactive placebo. Pfizer and Medivation have just completed a one-year study of Dimebon in over 1,000 patients, another effort to see if the drug could be a potential treatment for Alzheimer's. They expect to announce the results in coming months. Scientists offer a few theories as to why duplicative results may be so elusive.
Two different labs can use slightly different equipment or materials, leading to divergent results. The more variables there are in an experiment, the more likely it is that small, unintended errors will pile up and swing a lab's conclusions one way or the other. And, of course, data that have been rigged, invented or fraudulently altered won't stand up to future scrutiny. According to a report published by the U.K.'s Royal Society, there were 7.1 million researchers working globally across all scientific fields, academic and corporate, in 2007, a 25% increase from five years earlier. "Among the more obvious yet unquantifiable reasons, there is immense competition among laboratories and a pressure to publish," wrote Dr. Asadullah and others from Bayer, in their September paper. "There is also a bias toward publishing positive results, as it is easier to get positive results accepted in good journals." Science publications are under pressure, too. The number of research journals jumped 23% between 2001 and 2010, according to Elsevier, which has analyzed the data. Their proliferation has ratcheted up competitive pressure on even elite journals, which can generate buzz by publishing splashy papers, typically containing positive findings, to meet the demands of a 24-hour news cycle. Dr. Alberts of Science acknowledges that journals increasingly have to strike a balance between publishing studies "with broad appeal," while making sure they aren't hyped. Drugmakers also have a penchant for positive results. A 2008 study published in the journal PLoS Medicine by researchers at the University of California San Francisco looked at data from 33 new drug applications submitted between 2001 and 2002 to the U.S. Food and Drug Administration. The agency requires drug companies to provide all data from clinical trials.
However, the authors found that a quarter of the trial data, most of it unfavorable, never got published because the companies never submitted it to journals. The upshot: doctors who end up prescribing the FDA-approved drugs often don't get to see the unfavorable data. "I would say that selectively publishing data is unethical because there are human subjects involved," says Lisa Bero of UCSF and co-author of the PLoS Medicine study. In an email statement, a spokeswoman for the FDA said the agency considers all data it is given when reviewing a drug but "does not have the authority to control what a company chooses to publish." Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite it as a key reason why they are less willing to finance early-stage projects. Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can't be reproduced, says Bruce Booth, a partner in Atlas' Life Sciences group. There have been several prominent cases of nonreproducibility in recent months. For example, in September, the journal Science partially retracted a 2009 paper linking a virus to chronic fatigue syndrome because several labs couldn't replicate the published results. The partial retraction came after two of the 13 study authors went back to the blood samples they analyzed from chronic-fatigue patients and found they were contaminated. Some studies can't be redone for a more prosaic reason: the authors won't make all their raw data available to rival scientists. John Ioannidis of Stanford University recently attempted to reproduce the findings of 18 papers published in the respected journal Nature Genetics. He noted that 16 of these papers stated that the underlying "gene expression" data for the studies were publicly available.
But the supplied data apparently weren't detailed enough, and results from 16 of the 18 major papers couldn't fully be reproduced by Dr. Ioannidis and his colleagues. "We have to take it [on faith] that the findings are OK," said Dr. Ioannidis, an epidemiologist who studies the credibility of medical research. Veronique Kiermer, an editor at Nature, says she agrees with Dr. Ioannidis' conclusions, noting that the findings have prompted the journal to be more cautious when publishing large-scale genome analyses. When companies trying to find new drugs come up against the nonreproducibility problem, the repercussions can be significant. A few years ago, several groups of scientists began to seek out new cancer drugs by targeting a protein called KRAS. The KRAS protein transmits signals received on the outside of a cell to its interior and is therefore crucial for regulating cell growth. But when certain mutations occur, the signaling can become continuous. That triggers excess growth such as tumors. The mutated form of KRAS is believed to be responsible for more than 60% of pancreatic cancers and half of colorectal cancers. It has also been implicated in the growth of tumors in many other organs, such as the lung. So scientists have been especially keen to impede KRAS and, thus, stop the constant signaling that leads to tumor growth. In 2008, researchers at Harvard Medical School used cell-culture experiments to show that by inhibiting another protein, STK33, they could prevent the growth of tumor cell lines driven by the malfunctioning KRAS. The finding galvanized researchers at Amgen, who first heard about the experiments at a scientific conference. "Everyone was trying to do this," recalls Dr. Begley of Amgen, which derives nearly half of its revenues from cancer drugs and related treatments. "It was a really big deal." 
When the Harvard researchers published their results in the prestigious journal Cell, in May 2009, Amgen moved swiftly to capitalize on the findings. At a meeting in the company's offices in Thousand Oaks, Calif., Dr. Begley assigned a group of Amgen researchers the task of identifying small molecules that might inhibit STK33. Another team got a more basic job: reproduce the Harvard data. "We're talking about hundreds of millions of dollars in downstream investments if the approach works," says Dr. Begley. "So we need to be sure we're standing on something firm and solid." But over the next few months, Dr. Begley and his team got increasingly disheartened. Amgen scientists, it turned out, couldn't reproduce any of the key findings published in Cell. For example, there was no difference in the growth of cells where STK33 was largely blocked, compared with a control group of cells where STK33 wasn't blocked. What could account for the irreproducibility of the results? "In our opinion there were methodological issues" in Amgen's approach that could have led to the different findings, says Claudia Scholl, one of the lead authors of the original Cell paper. Dr. Scholl points out, for example, that Amgen used a different reagent to suppress STK33 than the one reported in Cell. Yet, she acknowledges that even when slightly different reagents are used, "you should be able to reproduce the results." Now a cancer researcher at the University Hospital of Ulm in Germany, Dr. Scholl says her team has reproduced the original Cell results multiple times, and continues to have faith in STK33 as a cancer target. Amgen, however, killed its STK33 program. In September, two dozen of the firm's scientists published a paper in the journal Cancer Research describing their failure to reproduce the main Cell findings. Dr. Begley suggests that academic scientists, like drug companies, should perform more experiments in a "blinded" manner to reduce any bias toward positive findings.
Otherwise, he says, "there is a human desire to get the results your boss wants you to get." Adds Atlas' Mr. Booth: "Nobody gets a promotion from publishing a negative study." Write to Gautam Naik at gautam.naik at wsj.com Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved OPINION DECEMBER 2, 2011 Absolute Certainty Is Not Scientific Global warming alarmists betray their cause when they declare that it is irresponsible to question them. By DANIEL B. BOTKIN One of the changes among scientists in this century is the increasing number who believe that one can have complete and certain knowledge. For example, Michael J. Mumma, a NASA senior scientist who has led teams searching for evidence of life on Mars, was quoted in the New York Times as saying, "Based on evidence, what we do have is, unequivocally, the conditions for the emergence of life were present on Mars. Period, end of story." This belief in absolute certainty is fundamentally what has bothered me about the scientific debate over global warming in the 21st century, and I am hoping it will not characterize the discussions at the United Nations Climate Change Conference in Durban, South Africa, currently under way. Reading Mr. Mumma's statement, I thought immediately of physicist Niels Bohr, a Nobel laureate, who said, "Anyone who is not shocked by quantum theory has not understood it." To which Richard Feynman, another famous physicist and Nobel laureate, quipped, "Nobody understands quantum mechanics." I felt nostalgic for those times when even the greatest scientific minds admitted limits to what they knew. And when they recognized well that the key to the scientific method is that it is a way of knowing in which you can never completely prove that something is absolutely true.
Instead, the important idea about the method is that any statement, to be scientific, must be open to disproof, and a way of knowing how to disprove it exists. Therefore, "Period, end of story" is something a scientist can say, but it isn't science. I was one of many scientists on several panels in the 1970s who reviewed the results from the Viking Landers on Mars, the ones that were supposed to conduct experiments that would help determine whether there was or wasn't life on that planet. I don't remember anybody on those panels talking in terms of absolute certainty. Instead, the discussions were about what the evidence did and did not suggest, and what might be disprovable from them and from future landers. I was also one of a small number of scientists, mainly ecologists, climatologists and meteorologists, who in the 1970s became concerned about the possibility of a human-induced global warming, based on then-new measurements. It seemed to be an important scientific problem, both as part of the beginning of a new science of global ecology and as a potentially major practical problem that nations would have to deal with. It did not seem to be something that should or would rise above standard science and become something that one had to choose sides in. But that's what has happened. Some scientists make "period, end of story" claims that human-induced global warming definitely, absolutely either is or isn't happening. For me, the extreme limit of this attitude was expressed by economist Paul Krugman, also a Nobel laureate, who wrote in his New York Times column in June, "Betraying the Planet," that "as I watched the deniers make their arguments, I couldn't help thinking that I was watching a form of treason, treason against the planet."
What had begun as a true scientific question with possibly major practical implications had become accepted as an infallible belief (or if you're on the other side, an infallible disbelief), and any further questions were met, Joe McCarthy-style, with "with me or agin me." Not only is it poor science to claim absolute truth, but it also leads to the kind of destructive and distrustful debate we've had in the last decade about global warming. The history of science and technology suggests that such absolutism on both sides of a scientific debate doesn't often lead to practical solutions. It is helpful to go back to the work of the Wright brothers, whose invention of a true heavier-than-air flying machine was one kind of precursor to the Mars Landers. They basically invented aeronautical science and engineering, developed methods to test their hypotheses, and carefully worked their way through a combination of theory and experimentation. The plane that flew at Kill Devil Hill, a North Carolina dune, did not come out of true believers or absolute assertions, but out of good science and technological development. Let us hope that discussions about global warming can be more like the debates between those two brothers than between those who absolutely, completely agree with Paul Krugman and those who absolutely, completely disagree with him. How about a little agnosticism in our scientific assertions, and even, as with Richard Feynman, a little sense of humor so that we can laugh at our errors and move on? We should all remember that Feynman also said, "If you think that science is certain, well, that's just an error on your part." Mr. Botkin, president of the Center for the Study of the Environment and professor emeritus at the University of California, Santa Barbara, is the author of the forthcoming "Discordant Harmonies: Ecology in a Changing World" (Oxford University Press). Copyright 2011 Dow Jones & Company, Inc. 
All Rights Reserved From loet at LEYDESDORFF.NET Fri Dec 2 13:56:36 2011 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 2 Dec 2011 19:56:36 +0100 Subject: FW: [SIGMETRICS] Scientific Certainty In-Reply-To: <4928689828488E458AECE7AFDCB52CFE64D9B5@email003.lsu.edu> Message-ID: Dear Stephen, As you know, I took my first degree in biochemistry. The University of Amsterdam had a leading laboratory at the time. Informally, we were told that it was always impossible to replicate others' results without contacting them first, because most researchers deliberately gave misleading information in the legends. 
People wanted others to contact them first in order to establish the hierarchy/trust. Best, Loet _____ Loet Leydesdorff Professor, University of Amsterdam Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-842239111 loet at leydesdorff.net ; http://www.leydesdorff.net/ ; http://scholar.google.com/citations?user=ych9gNYAAAAJ &hl=en From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman Sent: Friday, December 02, 2011 6:33 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Scientific Certainty Two Wall Street Journal articles on scientific certainty. The firs discusses it in respect to the economics of it. Stephen J Bensman, Ph.D. LSU Libraries Lousiana State University Baton Rouge, LA 70803 USA Description: Description: The Wall Street Journal Scientists' Elusive Goal: Reproducing Study Results By GAUTAM NAIK Two years ago, a group of Boston researchers published a study describing how they had destroyed cancer tumors by targeting a protein called STK33. Scientists at biotechnology firm Amgen Inc. quickly pounced on the idea and assigned two dozen researchers to try to repeat the experiment with a goal of turning the findings into a drug. Description: Description: http://m.wsj.net/video/20111202/120211hubamreproduce/120211hubamreproduce_512x288.jpg WSJ's Gautam Naik has details of challenges scientists face in reproducing claims made by medical journals. Photo: Sandy Huffaker/The New York Times It proved to be a waste of time and money. After six months of intensive lab work, Amgen found it couldn't replicate the results and scrapped the project. "I was disappointed but not surprised," says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. "More often than not, we are unable to reproduce findings" published by researchers in journals. 
This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced. Enlarge Image Close Description: Description: Reproduce Bayer Researchers at Bayer's labs often find their experiments fail to match claims made in the scientific literature. "It's a very serious and disturbing issue because it obviously misleads people" who implicitly trust findings published in a respected peer-reviewed journal, says Bruce Alberts, editor of Science. On Friday, the U.S. journal is devoting a large chunk of its Dec. 2 issue to the problem of scientific replication. Reproducibility is the foundation of all modern research, the standard by which scientific claims are evaluated. In the U.S. alone, biomedical research is a $100-billion-year enterprise. So when published medical findings can't be validated by others, there are major consequences. Drug manufacturers rely heavily on early-stage academic research and can waste millions of dollars on products if the original results are later shown to be unreliable. Patients may enroll in clinical trials based on conflicting data, and sometimes see no benefits or suffer harmful side effects. There is also a more insidious and pervasive problem: a preference for positive results. Description: Description: [REPRODUCE_p1] Unlike pharmaceutical companies, academic researchers rarely conduct experiments in a "blinded" manner. This makes it easier to cherry-pick statistical findings that support a positive result. In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists need more successful experiments to their name, not failed ones. An explosion of scientific and academic journals has added to the pressure. When it comes to results that can't be replicated, Dr. Alberts says the increasing intricacy of experiments may be largely to blame. 
"It has to do with the complexity of biology and the fact that methods [used in labs] are getting more sophisticated," he says. It is hard to assess whether the reproducibility problem has been getting worse over the years; there are some signs suggesting it could be. For example, the success rate of Phase 2 human trials?where a drug's efficacy is measured?fell to 18% in 2008-2010 from 28% in 2006-2007, according to a global analysis published in the journal Nature Reviews in May. "Lack of reproducibility is one element in the decline in Phase 2 success," says Khusru Asadullah, a Bayer AG research executive. In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature. The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet, even the data in the most prestigious journals couldn't be confirmed, Bayer said. Enlarge Image Description: Description: REPRODUCE_jmp Close Description: Description: REPRODUCE_jmp In 2008, Pfizer Inc. made a high-profile bet, potentially worth more than $725 million, that it could turn a 25-year-old Russian cold medicine into an effective drug for Alzheimer's disease. The idea was promising. Published by the journal Lancet, data from researchers at Baylor College of Medicine and elsewhere suggested that the drug, an antihistamine called Dimebon, could improve symptoms in Alzheimer's patients. Later findings, presented by researchers at the University of California Los Angeles at a Chicago conference, showed that the drug appeared to prevent symptoms from worsening for up to 18 months. "Statistically, the studies were very robust," says David Hung, chief executive officer of Medivation Inc., a San Francisco biotech firm that sponsored both studies. 
In 2010, Medivation along with Pfizer released data from their own clinical trial for Dimebon, involving nearly 600 patients with mild to moderate Alzheimer's disease symptoms. The companies said they were unable to reproduce the Lancet results. They also indicated they had found no statistically significant difference between patients on the drug versus the inactive placebo. Pfizer and Medivation have just completed a one-year study of Dimebon in over 1,000 patients, another effort to see if the drug could be a potential treatment for Alzheimer's. They expect to announce the results in coming months. Scientists offer a few theories as to why duplicative results may be so elusive. Two different labs can use slightly different equipment or materials, leading to divergent results. The more variables there are in an experiment, the more likely it is that small, unintended errors will pile up and swing a lab's conclusions one way or the other. And, of course, data that have been rigged, invented or fraudulently altered won't stand up to future scrutiny. According to a report published by the U.K.'s Royal Society, there were 7.1 million researchers working globally across all scientific fields?academic and corporate?in 2007, a 25% increase from five years earlier. >From the Archives Mistakes in Scientific Studies Surge 8/10/2011 "Among the more obvious yet unquantifiable reasons, there is immense competition among laboratories and a pressure to publish," wrote Dr. Asadullah and others from Bayer, in their September paper. "There is also a bias toward publishing positive results, as it is easier to get positive results accepted in good journals." Science publications are under pressure, too. The number of research journals has jumped 23% between 2001 and 2010, according to Elsevier, which has analyzed the data. 
Their proliferation has ratcheted up competitive pressure on even elite journals, which can generate buzz by publishing splashy papers, typically containing positive findings, to meet the demands of a 24-hour news cycle. Dr. Alberts of Science acknowledges that journals increasingly have to strike a balance between publishing studies "with broad appeal," while making sure they aren't hyped. Drugmakers also have a penchant for positive results. A 2008 study published in the journal PLoS Medicine by researchers at the University of California San Francisco looked at data from 33 new drug applications submitted between 2001 and 2002 to the U.S. Food and Drug Administration. The agency requires drug companies to provide all data from clinical trials. However, the authors found that a quarter of the trial data?most of it unfavorable?never got published because the companies never submitted it to journals. The upshot: doctors who end up prescribing the FDA-approved drugs often don't get to see the unfavorable data. "I would say that selectively publishing data is unethical because there are human subjects involved," says Lisa Bero of UCSF and co-author of the PLoS Medicine study. In an email statement, a spokeswoman for the FDA said the agency considers all data it is given when reviewing a drug but "does not have the authority to control what a company chooses to publish." Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite it as a key reason why they are less willing to finance early-stage projects. Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can't be reproduced, says Bruce Booth, a partner in Atlas' Life Sciences group. There have been several prominent cases of nonreproducibility in recent months. 
For example, in September, the journal Science partially retracted a 2009 paper linking a virus to chronic fatigue syndrome because several labs couldn't replicate the published results. The partial retraction came after two of the 13 study authors went back to the blood samples they analyzed from chronic-fatigue patients and found they were contaminated.

Some studies can't be redone for a more prosaic reason: the authors won't make all their raw data available to rival scientists. John Ioannidis of Stanford University recently attempted to reproduce the findings of 18 papers published in the respected journal Nature Genetics. He noted that 16 of these papers stated that the underlying "gene expression" data for the studies were publicly available. But the supplied data apparently weren't detailed enough, and results from 16 of the 18 major papers couldn't fully be reproduced by Dr. Ioannidis and his colleagues. "We have to take it [on faith] that the findings are OK," said Dr. Ioannidis, an epidemiologist who studies the credibility of medical research. Veronique Kiermer, an editor at Nature, says she agrees with Dr. Ioannidis' conclusions, noting that the findings have prompted the journal to be more cautious when publishing large-scale genome analyses.

When companies trying to find new drugs come up against the nonreproducibility problem, the repercussions can be significant. A few years ago, several groups of scientists began to seek out new cancer drugs by targeting a protein called KRAS. The KRAS protein transmits signals received on the outside of a cell to its interior and is therefore crucial for regulating cell growth. But when certain mutations occur, the signaling can become continuous. That triggers excess growth such as tumors. The mutated form of KRAS is believed to be responsible for more than 60% of pancreatic cancers and half of colorectal cancers. It has also been implicated in the growth of tumors in many other organs, such as the lung.
So scientists have been especially keen to impede KRAS and, thus, stop the constant signaling that leads to tumor growth. In 2008, researchers at Harvard Medical School used cell-culture experiments to show that by inhibiting another protein, STK33, they could prevent the growth of tumor cell lines driven by the malfunctioning KRAS. The finding galvanized researchers at Amgen, who first heard about the experiments at a scientific conference. "Everyone was trying to do this," recalls Dr. Begley of Amgen, which derives nearly half of its revenues from cancer drugs and related treatments. "It was a really big deal."

When the Harvard researchers published their results in the prestigious journal Cell, in May 2009, Amgen moved swiftly to capitalize on the findings. At a meeting in the company's offices in Thousand Oaks, Calif., Dr. Begley assigned a group of Amgen researchers the task of identifying small molecules that might inhibit STK33. Another team got a more basic job: reproduce the Harvard data. "We're talking about hundreds of millions of dollars in downstream investments if the approach works," says Dr. Begley. "So we need to be sure we're standing on something firm and solid."

But over the next few months, Dr. Begley and his team got increasingly disheartened. Amgen scientists, it turned out, couldn't reproduce any of the key findings published in Cell. For example, there was no difference in the growth of cells where STK33 was largely blocked, compared with a control group of cells where STK33 wasn't blocked.

What could account for the irreproducibility of the results? "In our opinion there were methodological issues" in Amgen's approach that could have led to the different findings, says Claudia Scholl, one of the lead authors of the original Cell paper. Dr. Scholl points out, for example, that Amgen used a different reagent to suppress STK33 than the one reported in Cell.
Yet, she acknowledges that even when slightly different reagents are used, "you should be able to reproduce the results." Now a cancer researcher at the University Hospital of Ulm in Germany, Dr. Scholl says her team has reproduced the original Cell results multiple times, and continues to have faith in STK33 as a cancer target.

Amgen, however, killed its STK33 program. In September, two dozen of the firm's scientists published a paper in the journal Cancer Research describing their failure to reproduce the main Cell findings. Dr. Begley suggests that academic scientists, like drug companies, should perform more experiments in a "blinded" manner to reduce any bias toward positive findings. Otherwise, he says, "there is a human desire to get the results your boss wants you to get." Adds Atlas' Mr. Booth: "Nobody gets a promotion from publishing a negative study."

Write to Gautam Naik at gautam.naik at wsj.com

Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved

OPINION DECEMBER 2, 2011

Absolute Certainty Is Not Scientific

Global warming alarmists betray their cause when they declare that it is irresponsible to question them. By DANIEL B. BOTKIN

One of the changes among scientists in this century is the increasing number who believe that one can have complete and certain knowledge. For example, Michael J. Mumma, a NASA senior scientist who has led teams searching for evidence of life on Mars, was quoted in the New York Times as saying, "Based on evidence, what we do have is, unequivocally, the conditions for the emergence of life were present on Mars, period, end of story." This belief in absolute certainty is fundamentally what has bothered me about the scientific debate over global warming in the 21st century, and I am hoping it will not characterize the discussions at the United Nations Climate Change Conference in Durban, South Africa, currently under way.
[Video: Bjorn Lomborg on the ClimateGate 2.0 e-mail scandal and World AIDS Day.]

Reading Mr. Mumma's statement, I thought immediately of physicist Niels Bohr, a Nobel laureate, who said, "Anyone who is not shocked by quantum theory has not understood it." To which Richard Feynman, another famous physicist and Nobel laureate, quipped, "Nobody understands quantum mechanics." I felt nostalgic for those times when even the greatest scientific minds admitted limits to what they knew. And when they recognized well that the key to the scientific method is that it is a way of knowing in which you can never completely prove that something is absolutely true. Instead, the important idea about the method is that any statement, to be scientific, must be open to disproof, and a way of knowing how to disprove it exists. Therefore, "Period, end of story" is something a scientist can say, but it isn't science.

I was one of many scientists on several panels in the 1970s who reviewed the results from the Viking Landers on Mars, the ones that were supposed to conduct experiments that would help determine whether there was or wasn't life on that planet. I don't remember anybody on those panels talking in terms of absolute certainty. Instead, the discussions were about what the evidence did and did not suggest, and what might be disprovable from them and from future landers.

I was also one of a small number of scientists, mainly ecologists, climatologists and meteorologists, who in the 1970s became concerned about the possibility of a human-induced global warming, based on then-new measurements. It seemed to be an important scientific problem, both as part of the beginning of a new science of global ecology and as a potentially major practical problem that nations would have to deal with.
It did not seem to be something that should or would rise above standard science and become something that one had to choose sides in. But that's what has happened. Some scientists make "period, end of story" claims that human-induced global warming definitely, absolutely either is or isn't happening.

For me, the extreme limit of this attitude was expressed by economist Paul Krugman, also a Nobel laureate, who wrote in his June New York Times column "Betraying the Planet" that "as I watched the deniers make their arguments, I couldn't help thinking that I was watching a form of treason: treason against the planet." What had begun as a true scientific question with possibly major practical implications had become accepted as an infallible belief (or if you're on the other side, an infallible disbelief), and any further questions were met, Joe-McCarthy style, "with me or agin me."

Not only is it poor science to claim absolute truth, but it also leads to the kind of destructive and distrustful debate we've had in the last decade about global warming. The history of science and technology suggests that such absolutism on both sides of a scientific debate doesn't often lead to practical solutions.

It is helpful to go back to the work of the Wright brothers, whose invention of a true heavier-than-air flying machine was one kind of precursor to the Mars Landers. They basically invented aeronautical science and engineering, developed methods to test their hypotheses, and carefully worked their way through a combination of theory and experimentation. The plane that flew at Kill Devil Hill, a North Carolina dune, did not come out of true believers or absolute assertions, but out of good science and technological development.

Let us hope that discussions about global warming can be more like the debates between those two brothers than between those who absolutely, completely agree with Paul Krugman and those who absolutely, completely disagree with him.
How about a little agnosticism in our scientific assertions, and even, as with Richard Feynman, a little sense of humor so that we can laugh at our errors and move on? We should all remember that Feynman also said, "If you think that science is certain, well, that's just an error on your part."

Mr. Botkin, president of the Center for the Study of the Environment and professor emeritus at the University of California, Santa Barbara, is the author of the forthcoming "Discordant Harmonies: Ecology in a Changing World" (Oxford University Press).

Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved
From notsjb at LSU.EDU Fri Dec 2 14:29:23 2011 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Fri, 2 Dec 2011 13:29:23 -0600 Subject: Scientific Certainty In-Reply-To: <013701ccb124$1e80fa90$5b82efb0$@leydesdorff.net> Message-ID:

Loet, That is an interesting observation. It is also an explanation of why researchers hide their data from companies like Amgen, so that they can be in on the big money. And we are talking BIG money here: billions, if not the new standard, trillions. SB

From: loet at leydesdorff.net [mailto:leydesdorff at gmail.com] On Behalf Of Loet Leydesdorff Sent: Friday, December 02, 2011 12:57 PM To: Stephen J Bensman; SIGMETRICS at LISTSERV.UTK.EDU Subject: FW: [SIGMETRICS] Scientific Certainty

Dear Stephen, As you know, I had my first degree in biochemistry. The University of Amsterdam had a leading laboratory at the time. Informally, we were told that it was always impossible to replicate the results of others without contacting them first, because most researchers misinformed readers in the legends (on purpose). People wanted others to contact them first in order to establish the hierarchy/trust. Best, Loet

Loet Leydesdorff Professor, University of Amsterdam Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20-525 6598; fax: +31-842239111 loet at leydesdorff.net ; http://www.leydesdorff.net/ ; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman Sent: Friday, December 02, 2011 6:33 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Scientific Certainty

Two Wall Street Journal articles on scientific certainty. The first discusses it with respect to the economics of it. Stephen J Bensman, Ph.D.
From eugene.garfield at THOMSONREUTERS.COM Fri Dec 2 14:36:27 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Fri, 2 Dec 2011 19:36:27 +0000 Subject: FW: A new citation style Message-ID: -----Original Message----- From: American Scientist Open Access Forum [mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On Behalf Of Arif Jinha Sent: Friday, December 02, 2011 11:30 AM To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG Subject: A new citation style Dear friends, we need to officially create an accepted and updated citation style. However, print publication has a production and preservational value that ought to be preserved. A publisher may easily deliver their highest-quality journals in print, while publishing the same articles online in OA. They just need the right business model to accomplish this. Putting resources into high-quality print publication sends a message that these articles have value; the writing, copy-editing and print presentation give greater gravitas and impact to the content. For print publications, I suggest maintaining the geographical information about the publisher, because locality is still important to culture, and thus to the culture of publishing houses in different regions. When something is in print, people take in the whole of the work. 
When it is electronic, people tend to sift for data or even just the bits they need to reference to back up their analysis. The second route is pragmatic insofar as it gets work done, but the time taken to read the whole of a work gives an incentive to digest information in a more reflective fashion. Such considerations reflexively affect the culture of research, and point to the need for quality and contemplation of the ideas - not just the bits of information that are often grabbed and embedded in the polemics of research agendas. A new citation style for online articles would include author-date, article and journal title, DOI, and a link to the OA version of an article if available, and if not, to the abstract. This convention would favour OA versions. There should be a limit on characters; this would encourage concise titles (short versions if necessary) and disqualify the pasting of lengthy website addresses. Citation of printed works should be used authentically, that is, when the author is getting the information from the hard copy, and should include the geographical location. In all, the citation style should appropriately and authentically reflect the research-to-publication process, whilst being convenient to the reader in their study. Many thanks for your consideration and time, Arif Jinha, MA Wakefield, QC Canada ----- Original Message ----- From: "Andrew A. Adams" To: Sent: Monday, November 07, 2011 9:11 AM Subject: Re: Is a Different OA Strategy Needed for Social Sciences and Humanities? (No) > On 2011-11-06, at 5:58 PM, Jean-Claude Guédon wrote: >> most SSH journals would not accept the kind of referencing he >> suggests. Most journals, in fact, impose their citation and quotation >> referencing styles. As they now also accept electronic references, it >> leads to what I said: references to repository articles are beginning >> to appear in significant numbers. 
This raises a new question, that of >> quality control of the versions in the repositories, but that can be >> solved too. It is therefore true that the lack of reliable pagination >> is probably a fading inconvenience. Stevan Harnad replied: > Yes, quote-location convention-updating is a minor and fading > inconvenience. But not because we need (or are providing) peer review > for already peer-reviewed author drafts, just so that quotes can have > page numbers! There are simple ways to accomplish that. And what is > cited is the canonical published version of record, not the specific > document one actually accessed. (I don't cite a photocopy of an > article, I cite the article -- journal, title, date, volume, > page-span.) If a journal copy-editor, unsatisfied with the > section-heading and paragraph number, insists on page numbers for the > quotes, they can go look them up (when they look up the quote itself, > whose wording, after all, is even more important to get right than its > pagination....) I would go even further than Stevan and say that practically (not in the minds of editors, but as a matter of practical usage of researchers, not librarians, not editors, not bibliometricists) even paragraph numbering is pointless and unnecessary in the new world of OA, if we ever reach it. If one has access to an electronic version of a referenced paper, then quotes or keywords can be searched for in the accessible electronic version. In computer science the concept of pages has been done away with in a number of new journals. Articles are referenced by article number within volume (i.e. year of publication). We are getting caught in all these Gutenberg-era traps, distracting us from providing the most important thing: access to the information. Everything else is simply a matter of having the proper tools available to make use of that access. We used to worry about findability - Google Scholar pretty much solves that one. 
If one has even a half-decent reference with author name(s)/title and journal name, then Google Scholar will find it if it can be crawled. We used to worry about finding an element within an article. Syntactic search within an article with appropriately chosen words can not only solve that, but also show where else in the same paper the same concepts were addressed. All we're missing is the access, and that is within our grasp if we as a community would stop worrying about all these mythical problems and deposit. -- Professor Andrew A Adams aaa at meiji.ac.jp Professor at Graduate School of Business Administration, and Deputy Director of the Centre for Business Information Ethics Meiji University, Tokyo, Japan http://www.a-cubed.info/ From loet at LEYDESDORFF.NET Fri Dec 2 14:37:00 2011 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 2 Dec 2011 20:37:00 +0100 Subject: Scientific Certainty In-Reply-To: <4928689828488E458AECE7AFDCB52CFE64D9B7@email003.lsu.edu> Message-ID: Dear Stephen, Nowadays it is not only that. If one produces results which are not welcome to a corporation, one may be sued. This was recently discussed here because there is pressure on people to hand over their data to archiving services. There is always room for litigation, and corporations have endless amounts of money for hiring lawyers. This seems to be a real problem in the life sciences. Best, Loet From: Stephen J Bensman [mailto:notsjb at lsu.edu] Sent: Friday, December 02, 2011 8:29 PM To: Loet Leydesdorff; SIGMETRICS at LISTSERV.UTK.EDU Subject: RE: [SIGMETRICS] Scientific Certainty Loet, That is an interesting observation. It is also an explanation of why researchers hide their data from companies like Amgen: so that they can be in on the big money. And we are talking BIG money here: billions, if not the new standard, trillions. 
SB From: loet at leydesdorff.net [mailto:leydesdorff at gmail.com] On Behalf Of Loet Leydesdorff Sent: Friday, December 02, 2011 12:57 PM To: Stephen J Bensman; SIGMETRICS at LISTSERV.UTK.EDU Subject: FW: [SIGMETRICS] Scientific Certainty Dear Stephen, As you know, I had my first degree in biochemistry. The University of Amsterdam had a leading laboratory at the time. Informally, we were told that it was always impossible to replicate the results of others without contacting them first, because most researchers put misleading information in the legends (on purpose). People wanted others to contact them first in order to establish the hierarchy/trust. Best, Loet _____ Loet Leydesdorff Professor, University of Amsterdam Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20-525 6598; fax: +31-842239111 loet at leydesdorff.net ; http://www.leydesdorff.net/ ; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman Sent: Friday, December 02, 2011 6:33 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Scientific Certainty Two Wall Street Journal articles on scientific certainty. The first discusses it with respect to its economics. Stephen J Bensman, Ph.D. LSU Libraries Louisiana State University Baton Rouge, LA 70803 USA The Wall Street Journal Scientists' Elusive Goal: Reproducing Study Results By GAUTAM NAIK Two years ago, a group of Boston researchers published a study describing how they had destroyed cancer tumors by targeting a protein called STK33. Scientists at biotechnology firm Amgen Inc. quickly pounced on the idea and assigned two dozen researchers to try to repeat the experiment with a goal of turning the findings into a drug. 
WSJ's Gautam Naik has details of challenges scientists face in reproducing claims made by medical journals. Photo: Sandy Huffaker/The New York Times It proved to be a waste of time and money. After six months of intensive lab work, Amgen found it couldn't replicate the results and scrapped the project. "I was disappointed but not surprised," says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. "More often than not, we are unable to reproduce findings" published by researchers in journals. This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced. Researchers at Bayer's labs often find their experiments fail to match claims made in the scientific literature. Photo: Bayer "It's a very serious and disturbing issue because it obviously misleads people" who implicitly trust findings published in a respected peer-reviewed journal, says Bruce Alberts, editor of Science. On Friday, the U.S. journal is devoting a large chunk of its Dec. 2 issue to the problem of scientific replication. Reproducibility is the foundation of all modern research, the standard by which scientific claims are evaluated. In the U.S. alone, biomedical research is a $100 billion-a-year enterprise. So when published medical findings can't be validated by others, there are major consequences. Drug manufacturers rely heavily on early-stage academic research and can waste millions of dollars on products if the original results are later shown to be unreliable. Patients may enroll in clinical trials based on conflicting data, and sometimes see no benefits or suffer harmful side effects. There is also a more insidious and pervasive problem: a preference for positive results. 
Unlike pharmaceutical companies, academic researchers rarely conduct experiments in a "blinded" manner. This makes it easier to cherry-pick statistical findings that support a positive result. In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists needs more successful experiments to its name, not failed ones. An explosion of scientific and academic journals has added to the pressure. When it comes to results that can't be replicated, Dr. Alberts says the increasing intricacy of experiments may be largely to blame. "It has to do with the complexity of biology and the fact that methods [used in labs] are getting more sophisticated," he says. It is hard to assess whether the reproducibility problem has been getting worse over the years; there are some signs suggesting it could be. For example, the success rate of Phase 2 human trials, where a drug's efficacy is measured, fell to 18% in 2008-2010 from 28% in 2006-2007, according to a global analysis published in the journal Nature Reviews in May. "Lack of reproducibility is one element in the decline in Phase 2 success," says Khusru Asadullah, a Bayer AG research executive. In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature. The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet even the data in the most prestigious journals couldn't be confirmed, Bayer said. In 2008, Pfizer Inc. made a high-profile bet, potentially worth more than $725 million, that it could turn a 25-year-old Russian cold medicine into an effective drug for Alzheimer's disease. The idea was promising. 
Published in the journal Lancet, data from researchers at Baylor College of Medicine and elsewhere suggested that the drug, an antihistamine called Dimebon, could improve symptoms in Alzheimer's patients. Later findings, presented by researchers at the University of California Los Angeles at a Chicago conference, showed that the drug appeared to prevent symptoms from worsening for up to 18 months. "Statistically, the studies were very robust," says David Hung, chief executive officer of Medivation Inc., a San Francisco biotech firm that sponsored both studies. In 2010, Medivation along with Pfizer released data from their own clinical trial for Dimebon, involving nearly 600 patients with mild to moderate Alzheimer's disease symptoms. The companies said they were unable to reproduce the Lancet results. They also indicated they had found no statistically significant difference between patients on the drug versus the inactive placebo. Pfizer and Medivation have just completed a one-year study of Dimebon in over 1,000 patients, another effort to see if the drug could be a potential treatment for Alzheimer's. They expect to announce the results in coming months. Scientists offer a few theories as to why duplicative results may be so elusive. Two different labs can use slightly different equipment or materials, leading to divergent results. The more variables there are in an experiment, the more likely it is that small, unintended errors will pile up and swing a lab's conclusions one way or the other. And, of course, data that have been rigged, invented or fraudulently altered won't stand up to future scrutiny. According to a report published by the U.K.'s Royal Society, there were 7.1 million researchers working globally across all scientific fields, academic and corporate, in 2007, a 25% increase from five years earlier. 
From the Archives: Mistakes in Scientific Studies Surge (8/10/2011) "Among the more obvious yet unquantifiable reasons, there is immense competition among laboratories and a pressure to publish," wrote Dr. Asadullah and others from Bayer in their September paper. "There is also a bias toward publishing positive results, as it is easier to get positive results accepted in good journals." Science publications are under pressure, too. The number of research journals jumped 23% between 2001 and 2010, according to Elsevier, which has analyzed the data. Their proliferation has ratcheted up competitive pressure on even elite journals, which can generate buzz by publishing splashy papers, typically containing positive findings, to meet the demands of a 24-hour news cycle. Dr. Alberts of Science acknowledges that journals increasingly have to strike a balance between publishing studies "with broad appeal" and making sure they aren't hyped. Drugmakers also have a penchant for positive results. A 2008 study published in the journal PLoS Medicine by researchers at the University of California San Francisco looked at data from 33 new drug applications submitted between 2001 and 2002 to the U.S. Food and Drug Administration. The agency requires drug companies to provide all data from clinical trials. However, the authors found that a quarter of the trial data, most of it unfavorable, never got published because the companies never submitted it to journals. The upshot: doctors who end up prescribing the FDA-approved drugs often don't get to see the unfavorable data. "I would say that selectively publishing data is unethical because there are human subjects involved," says Lisa Bero of UCSF, co-author of the PLoS Medicine study. In an email statement, a spokeswoman for the FDA said the agency considers all data it is given when reviewing a drug but "does not have the authority to control what a company chooses to publish." 
Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite this as a key reason why they are less willing to finance early-stage projects. Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can't be reproduced, says Bruce Booth, a partner in Atlas' Life Sciences group. There have been several prominent cases of nonreproducibility in recent months. For example, in September, the journal Science partially retracted a 2009 paper linking a virus to chronic fatigue syndrome because several labs couldn't replicate the published results. The partial retraction came after two of the 13 study authors went back to the blood samples they had analyzed from chronic-fatigue patients and found they were contaminated. Some studies can't be redone for a more prosaic reason: the authors won't make all their raw data available to rival scientists. John Ioannidis of Stanford University recently attempted to reproduce the findings of 18 papers published in the respected journal Nature Genetics. He noted that 16 of these papers stated that the underlying "gene expression" data for the studies were publicly available. But the supplied data apparently weren't detailed enough, and results from 16 of the 18 major papers couldn't be fully reproduced by Dr. Ioannidis and his colleagues. "We have to take it [on faith] that the findings are OK," said Dr. Ioannidis, an epidemiologist who studies the credibility of medical research. Veronique Kiermer, an editor at Nature, says she agrees with Dr. Ioannidis' conclusions, noting that the findings have prompted the journal to be more cautious when publishing large-scale genome analyses. When companies trying to find new drugs come up against the nonreproducibility problem, the repercussions can be significant. 
A few years ago, several groups of scientists began to seek out new cancer drugs by targeting a protein called KRAS. The KRAS protein transmits signals received on the outside of a cell to its interior and is therefore crucial for regulating cell growth. But when certain mutations occur, the signaling can become continuous. That triggers excess growth such as tumors. The mutated form of KRAS is believed to be responsible for more than 60% of pancreatic cancers and half of colorectal cancers. It has also been implicated in the growth of tumors in many other organs, such as the lung. So scientists have been especially keen to impede KRAS and, thus, stop the constant signaling that leads to tumor growth. In 2008, researchers at Harvard Medical School used cell-culture experiments to show that by inhibiting another protein, STK33, they could prevent the growth of tumor cell lines driven by the malfunctioning KRAS. The finding galvanized researchers at Amgen, who first heard about the experiments at a scientific conference. "Everyone was trying to do this," recalls Dr. Begley of Amgen, which derives nearly half of its revenues from cancer drugs and related treatments. "It was a really big deal." When the Harvard researchers published their results in the prestigious journal Cell, in May 2009, Amgen moved swiftly to capitalize on the findings. At a meeting in the company's offices in Thousand Oaks, Calif., Dr. Begley assigned a group of Amgen researchers the task of identifying small molecules that might inhibit STK33. Another team got a more basic job: reproduce the Harvard data. "We're talking about hundreds of millions of dollars in downstream investments if the approach works," says Dr. Begley. "So we need to be sure we're standing on something firm and solid." But over the next few months, Dr. Begley and his team got increasingly disheartened. Amgen scientists, it turned out, couldn't reproduce any of the key findings published in Cell. 
For example, there was no difference in the growth of cells where STK33 was largely blocked, compared with a control group of cells where STK33 wasn't blocked. What could account for the irreproducibility of the results? "In our opinion there were methodological issues" in Amgen's approach that could have led to the different findings, says Claudia Scholl, one of the lead authors of the original Cell paper. Dr. Scholl points out, for example, that Amgen used a different reagent to suppress STK33 than the one reported in Cell. Yet she acknowledges that even when slightly different reagents are used, "you should be able to reproduce the results." Now a cancer researcher at the University Hospital of Ulm in Germany, Dr. Scholl says her team has reproduced the original Cell results multiple times, and continues to have faith in STK33 as a cancer target. Amgen, however, killed its STK33 program. In September, two dozen of the firm's scientists published a paper in the journal Cancer Research describing their failure to reproduce the main Cell findings. Dr. Begley suggests that academic scientists, like drug companies, should perform more experiments in a "blinded" manner to reduce any bias toward positive findings. Otherwise, he says, "there is a human desire to get the results your boss wants you to get." Adds Atlas' Mr. Booth: "Nobody gets a promotion from publishing a negative study." Write to Gautam Naik at gautam.naik at wsj.com Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved The Wall Street Journal OPINION DECEMBER 2, 2011 Absolute Certainty Is Not Scientific Global warming alarmists betray their cause when they declare that it is irresponsible to question them. By DANIEL B. BOTKIN One of the changes among scientists in this century is the increasing number who believe that one can have complete and certain knowledge. For example, Michael J. 
Mumma, a NASA senior scientist who has led teams searching for evidence of life on Mars, was quoted in the New York Times as saying, "Based on evidence, what we do have is, unequivocally, the conditions for the emergence of life were present on Mars, period, end of story." This belief in absolute certainty is fundamentally what has bothered me about the scientific debate over global warming in the 21st century, and I am hoping it will not characterize the discussions at the United Nations Climate Change Conference in Durban, South Africa, currently under way. Bjorn Lomborg on the ClimateGate 2.0 e-mail scandal and World AIDS Day. Reading Mr. Mumma's statement, I thought immediately of physicist Niels Bohr, a Nobel laureate, who said, "Anyone who is not shocked by quantum theory has not understood it." To which Richard Feynman, another famous physicist and Nobel laureate, quipped, "Nobody understands quantum mechanics." I felt nostalgic for those times when even the greatest scientific minds admitted limits to what they knew. And when they recognized well that the key to the scientific method is that it is a way of knowing in which you can never completely prove that something is absolutely true. Instead, the important idea about the method is that any statement, to be scientific, must be open to disproof, and a way of knowing how to disprove it exists. Therefore, "Period, end of story" is something a scientist can say, but it isn't science. Photo: Getty Images I was one of many scientists on several panels in the 1970s who reviewed the results from the Viking Landers on Mars, the ones that were supposed to conduct experiments that would help determine whether there was or wasn't life on that planet. I don't remember anybody on those panels talking in terms of absolute certainty. 
Instead, the discussions were about what the evidence did and did not suggest, and what might be disprovable from them and from future landers. I was also one of a small number of scientists, mainly ecologists, climatologists and meteorologists, who in the 1970s became concerned about the possibility of human-induced global warming, based on then-new measurements. It seemed to be an important scientific problem, both as part of the beginning of a new science of global ecology and as a potentially major practical problem that nations would have to deal with. It did not seem to be something that should or would rise above standard science and become something that one had to choose sides in. But that's what has happened. Some scientists make "period, end of story" claims that human-induced global warming definitely, absolutely either is or isn't happening. For me, the extreme limit of this attitude was expressed by economist Paul Krugman, also a Nobel laureate, who wrote in his June New York Times column, "Betraying the Planet," that "as I watched the deniers make their arguments, I couldn't help thinking that I was watching a form of treason: treason against the planet." What had begun as a true scientific question with possibly major practical implications had become accepted as an infallible belief (or, if you're on the other side, an infallible disbelief), and any further questions were met, Joe-McCarthy style, with "with me or agin me." Not only is it poor science to claim absolute truth, but it also leads to the kind of destructive and distrustful debate we've had in the last decade about global warming. The history of science and technology suggests that such absolutism on both sides of a scientific debate doesn't often lead to practical solutions. It is helpful to go back to the work of the Wright brothers, whose invention of a true heavier-than-air flying machine was one kind of precursor to the Mars Landers. 
They basically invented aeronautical science and engineering, developed methods to test their hypotheses, and carefully worked their way through a combination of theory and experimentation. The plane that flew at Kill Devil Hill, a North Carolina dune, did not come out of true believers or absolute assertions, but out of good science and technological development. Let us hope that discussions about global warming can be more like the debates between those two brothers than between those who absolutely, completely agree with Paul Krugman and those who absolutely, completely disagree with him. How about a little agnosticism in our scientific assertions, and even, as with Richard Feynman, a little sense of humor so that we can laugh at our errors and move on? We should all remember that Feynman also said, "If you think that science is certain, well, that's just an error on your part." Mr. Botkin, president of the Center for the Study of the Environment and professor emeritus at the University of California, Santa Barbara, is the author of the forthcoming "Discordant Harmonies: Ecology in a Changing World" (Oxford University Press). Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved 
From eugene.garfield at THOMSONREUTERS.COM Tue Dec 6 13:08:26 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Tue, 6 Dec 2011 18:08:26 +0000 Subject: papers of potential interest to SIG Metrics readers Message-ID: ========================== Start of Data ========================= 4 -------------------------------------------------------------------------- TITLE: Prof. Dr. Ayhan Demirbas' scientometric biography (Article, English) AUTHOR: Konur, O SOURCE: ENERGY EDUCATION SCIENCE AND TECHNOLOGY PART A-ENERGY SCIENCE AND RESEARCH 28 (2). JAN 2012. p.727-738 SILA SCIENCE, TRABZON SEARCH TERM(S): GARFIELD E CURR CONTENTS :5 1981 KEYWORDS: Prof. Dr. Ayhan Demirbas; Bio-energy; Scientometric biography; Research productivity; Research evaluation KEYWORDS+: VEGETABLE-OILS; SUPERCRITICAL METHANOL; CITATION ANALYSIS; BIODIESEL FUELS; ENERGY-SOURCES; BIOMASS FUELS; RECENT TRENDS; BIOFUELS; FUTURE; TRANSESTERIFICATION ABSTRACT: It is a well-established fact that Turkish scientists have increasingly contributed to the literature on bio-energy in recent years. However, there has not been any biographic study of these scientists, nor of other scientists working in the field of bio-energy. Therefore, as a first-ever case study of Turkish scientists, this paper presents a scientometric biography of Prof. Dr. Ayhan Demirbas, who has worked in the area of bio-energy since the 1980s. 
He produced 454 articles and reviews in the interdisciplinary areas relating to bio-energy between 1984 and 2010, 379 of which were indexed by the SCI or the SSCI. He received 7,309 citations for his 454 papers, giving an "Average Citations per Item" of 16.1 and an "H-index" over 39 as of July 2011, suggesting that the scientific impact of his research on the relevant literature has been significant. This paper suggests that scientometric methods are useful for the evaluation of individual researchers and for publicizing their scientific achievements. AUTHOR ADDRESS: O Konur, Sirnak Univ, Fac Engn, Dept Mech Engn, Sirnak, Turkey -------------------------------------------------------------------------- TITLE: The evaluation of the research on biofuels: a scientometric approach (Article, English) AUTHOR: Konur, O SOURCE: ENERGY EDUCATION SCIENCE AND TECHNOLOGY PART A-ENERGY SCIENCE AND RESEARCH 28 (2). JAN 2012. p.903-916 SILA SCIENCE, TRABZON SEARCH TERM(S): SCIENTOMETRIC* item_title; GARFIELD E CURR CONTENTS 0922 :3 1986; GARFIELD E rauth KEYWORDS: Biofuels; Research evaluation; Scientometrics; Web of Knowledge KEYWORDS+: MICROBIAL FUEL-CELLS; BIODIESEL PRODUCTION; RECENT TRENDS; BIOMASS; ENERGY; CITATION; ETHANOL; HYDROLYSIS; MICROALGAE; CHEMICALS ABSTRACT: The present study explores the characteristics of the literature on biofuels published during the last three decades, based on the databases of the Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI), and its implications, using scientometric techniques. The results of this study reveal that the literature on biofuels has grown exponentially during this period, reaching 6,770 papers in total, with parallel enormous changes in the research landscape. Papers have mostly been journal articles, reviews, and proceedings, predominantly in English. The USA, China, and Germany have been the three most prolific countries. 
The "University of Illinois" has been the most prolific institution. The most prolific authors have been "Demirbas A" and "Minter SD". "Biomass & Bioenergy" has been the most prolific journal, whilst "Energy Fuels" has been the most prolific subject area. The total number of citations is 79,304, giving an "Average Citations per Item" of 11.71 and an "H-index" of 101. Ragauskas et al. has had the highest impact on the literature. Both the research output and the citations have thrived spectacularly after 2005. The results of this first study of its kind show that scientometric analysis has great potential to yield valuable insights into the evolution of high-profile research on biofuels, complementing other research techniques. AUTHOR ADDRESS: O Konur, Sirnak Univ, Fac Engn, Sirnak, Turkey ISSN: 1308-772X -------------------------------------------------------------------------- TITLE: The evaluation of research on biodiesel: a scientometric approach (Article, English) AUTHOR: Konur, O SOURCE: ENERGY EDUCATION SCIENCE AND TECHNOLOGY PART A-ENERGY SCIENCE AND RESEARCH 28 (2). JAN 2012. p.1003-1014 SILA SCIENCE, TRABZON SEARCH TERM(S): SCIENTOMETRIC* item_title KEYWORDS: Biodiesel; Renewable fuels; Research evaluation; Scientometrics; Web of Knowledge KEYWORDS+: OIL METHYL-ESTER; DIESEL-ENGINE; HYDROGEN ENERGY; ANIMAL TALLOW; CANOLA OIL; PERFORMANCE; EMISSIONS; BIOFUELS; FUELS; CITATION ABSTRACT: The present study explores the characteristics of the literature on biodiesel published during the last three decades, based on the Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI), and examines its implications using scientometric techniques.
The results of this study reveal that the research output on biodiesel and the citations received have grown exponentially during this period, especially after 2004, paralleled by enormous changes in the research landscape. The US, China, and Brazil have been the three most prolific countries. The "USDA" has been the most prolific institution and "Demirbas A" of Turkey the most prolific author. "Bioresource Technology" has been the most prolific journal, whilst "Energy & Fuels" has been the most prolific subject area. The "H-index" was 102, and Ma & Hanna [1] has had the highest impact on the literature. Scientometric analysis has great potential to yield valuable insights into the evolution of research on biodiesel, complementing scientometric studies in other fields of renewable energy such as biohydrogen, bioenergy, biofuels, and microbial fuel cells, and providing a unique insight into the incentive structures for all the key stakeholders in the field. AUTHOR ADDRESS: O Konur, Sirnak Univ, Fac Engn, TR-73000 Sirnak, Turkey -------------------------------------------------------------------------- TITLE: The evaluation of research on bioethanol: A scientometric approach (Article, English) AUTHOR: Konur, O SOURCE: ENERGY EDUCATION SCIENCE AND TECHNOLOGY PART A-ENERGY SCIENCE AND RESEARCH 28 (2). JAN 2012. p.1051-1064 SILA SCIENCE, TRABZON SEARCH TERM(S): SCIENTOMETRIC* item_title KEYWORDS: Bioethanol; Renewable fuels; Research evaluation; Scientometrics; Web of Knowledge KEYWORDS+: FUEL ETHANOL-PRODUCTION; BIODIESEL PRODUCTION; DIESEL-ENGINE; HYDROGEN-PRODUCTION; BIO-ETHANOL; BIOFUELS; ENERGY; OIL; EMISSIONS; CITATION ABSTRACT: The present study explores the characteristics of the literature on bioethanol published during the last three decades, based on the Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI), and examines its implications using scientometric techniques.
The results of this study reveal that the research output on bioethanol and the citations received have grown exponentially during this period, especially after 2004, paralleled by enormous changes in the research landscape. The US, China, and Japan have been the three most prolific countries. "Tech Univ Denmark" has been the most prolific institution and "Zacchi G" of Sweden the most prolific author. "Bioresource Technology" has been the most prolific journal, whilst "Biotechnology & Applied Microbiology" has been the most prolific subject area. The "H-index" was 61, and "Demirbas A" [1] has had the highest impact on the literature. Scientometric analysis has great potential to yield valuable insights into the evolution of research on bioethanol, complementing scientometric studies in other fields of renewable energy such as biohydrogen, biodiesel, bioenergy, biofuels, and microbial fuel cells, and providing a unique insight into the incentive structures for all the key stakeholders in the field. AUTHOR ADDRESS: O Konur, Sirnak Univ, Fac Engn, TR-73000 Sirnak, Turkey ----------- From eugene.garfield at THOMSONREUTERS.COM Wed Dec 7 12:37:57 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Wed, 7 Dec 2011 17:37:57 +0000 Subject: papers of interest to SIG Metrics Message-ID: -------------------------------------------------------------------------- TITLE: Research contributions of Spanish Psychiatry (2004-2009): A bibliometric analysis of a University department (Article, English) AUTHOR: Diaz-Moran, S; Tobena, A SOURCE: ACTAS ESPANOLAS DE PSIQUIATRIA 39 (5). SEP-OCT 2011.
p.294-301 JUAN JOSE LOPEZ-IBOR FOUNDATION, MADRID SEARCH TERM(S): HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; BIBLIOMETR* item_title; ACTAS ESP PSIQUIATRI source_abbrev_20 KEYWORDS: Bibliometrics; Scientific output; Impact index; Citation analysis; Psychiatry KEYWORDS+: SCIENCE; OUTPUT; INDEX ABSTRACT: Psychiatric research in Spain went through a notable increase in the quality and quantity of peer-reviewed papers during the last decade of the previous century, in parallel with other medical disciplines. Although there have been systematic studies of scientific production, they are inadequate from the perspective of research groups and particularly of university departments. We therefore designed this bibliometric study to analyze the scientific production of the Department of Psychiatry and Forensic Medicine at the Autonomous University of Barcelona, UAB [DPsML]. Methodology. In a cross-sectional survey of independent groups (n = 57, 54% men), indicators of production, quality, visibility/distribution and sustained popularity were applied. Results. DPsML research groups published 314 articles and/or reviews (216 international) between 2004 and 2009, receiving a total of 974 citations in the period (16 citations per basic researcher and 11.3 citations per clinical researcher). Contributions indexed in the Thomson Scientific Index [TSI] came from clinical groups (56.48%) and basic groups (43.52%). The basic groups showed an average impact factor of 5.12 and the clinical groups of 2. Conclusions. DPsML published 11.84% of the most cited papers in Spanish psychiatry, 20% in the field of drug addiction and 20.84% in the field of behavioral science [1]. These results, inconsistent with other bibliometric studies [2] of the same researchers, show the need for tighter and more demanding indicators, and for mappings of production encompassing both research groups and molar units (university departments).
AUTHOR ADDRESS: S Diaz-Moran, Univ Autonoma Barcelona, Inst Neurociencias, Dept Psiquiatria & Med Legal, Fac Med, Barcelona, Spain -------------------------------------------------------------------------- TITLE: Impact factor, its variants and its influence in academic promotion (Article, Spanish) AUTHOR: Puche, RC SOURCE: MEDICINA-BUENOS AIRES 71 (5). 2011. p.484-489 MEDICINA (BUENOS AIRES), BUENOS AIRES SEARCH TERM(S): HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; IMPACT FACTOR* item_title KEYWORDS: bibliometrics; impact factor; scientometrics ABSTRACT: Bibliometrics is a set of methods used to study or measure texts and information. While bibliometric methods are most often used in the field of library and information science, bibliometric variables have wide applications in other areas. One popular bibliometric variable is Garfield's Impact Factor (IF). The IF is used to explore the impact of a given field, of a set of researchers, or of a particular paper. This variable is used to assess academic output, and it is believed to adversely affect the traditional approach to the assessment of scientific research. In our country, the members of the evaluation committees of research-intensive institutions, e.g. the National Scientific and Technical Research Council (CONICET), use the IF to assess the quality of research. This article reviews the exponential growth of bibliometrics and attempts to expose the overall dissatisfaction with the analytical quality of the IF. Such dissatisfaction is expressed in the number of investigations attempting to obtain a variable of improved analytical quality.
AUTHOR ADDRESS: RC Puche, Fac Ciencias Med, Lab Biol Osea, RA-2000 Rosario, Santa Fe, Argentina -------------------------------------------------------------------------- TITLE: A review of comparative studies of spatial interpolation methods in environmental sciences: Performance and impact factors (Article, English) AUTHOR: Li, J; Heap, AD SOURCE: ECOLOGICAL INFORMATICS 6 (3-4). JUL 2011. p.228-241 ELSEVIER SCIENCE BV, AMSTERDAM SEARCH TERM(S): IMPACT FACTOR* item_title KEYWORDS: Spatial interpolator; Geostatistics; Kriging; Data variation; Sample density KEYWORDS+: SOIL PROPERTIES; GEOSTATISTICAL ANALYSIS; PREDICTION METHODS; SNOW DISTRIBUTION; AIR-TEMPERATURE; KRIGING METHODS; EXTERNAL DRIFT; POINT DATA; REGRESSION; PRECIPITATION ABSTRACT: Spatial interpolation methods have been applied in many disciplines. Many factors affect the performance of the methods, but there are no consistent findings about their effects. In this study, we use comparative studies in environmental sciences to assess the performance of the methods and to quantify the impacts of data properties on that performance. Two new measures are proposed to compare the performance of methods applied to variables with different units/scales. A total of 53 comparative studies were assessed, and the performance of the 72 methods/sub-methods compared in them is analysed. The impacts of sample density, data variation and sampling design on the estimations of 32 methods are quantified using data derived from their application to 80 variables. Inverse distance weighting (IDW), ordinary kriging (OK), and ordinary co-kriging (OCK) are the most frequently used methods. Data variation is a dominant impact factor and has significant effects on the performance of the methods. As the variation increases, the accuracy of all methods decreases, and the magnitude of the decrease is method dependent. An irregularly spaced sampling design might improve the accuracy of estimation.
The effect of sampling density on the performance of the methods is found not to be significant. The implications of these findings are discussed. Crown Copyright (C) 2010 Published by Elsevier B.V. All rights reserved. AUTHOR ADDRESS: J Li, Geosci Australia, Marine & Coastal Environm, PMD, GPO Box 378, Canberra, ACT 2601, Australia From priem at EMAIL.UNC.EDU Wed Dec 7 23:41:10 2011 From: priem at EMAIL.UNC.EDU (Jason Priem) Date: Wed, 7 Dec 2011 23:41:10 -0500 Subject: CFP: Special collection on alternative metrics Message-ID: The huge increase in scientific output is presenting scholars with a deluge of data. There is growing concern that scholarly output may be swamping traditional mechanisms for both pre-publication filtering (e.g. peer review) and post-publication impact filtering (e.g. the Journal Impact Factor). Increasing scholarly use of Web 2.0 tools like CiteULike, Mendeley, Twitter, and blogs presents an opportunity to create new filters. Metrics based on a diverse set of social sources could yield broader, richer, and timelier assessments of current and potential scholarly impact. Realizing this, many authors have begun to call for investigation of these metrics under the banner of "altmetrics." Specifically, altmetrics is the creation and study of new metrics based on the Social Web for analyzing and informing scholarship. Despite the growing speculation and early exploratory investigation into the value of altmetrics, there remains little concrete, objective research into the properties of these metrics: their validity, their potential value and flaws, and their relationship to established measures. Nor has there been any large umbrella to bring these multiple approaches together. Following on from a first successful workshop on altmetrics, this collection aims to provide a forum for the dissemination of innovative research on these metrics.
We seek high quality submissions that advance the understanding of the efficacy of altmetrics, addressing research areas including: * Validated new metrics based on social media. * Tracking science communication on the Web. * Relation between traditional metrics and altmetrics, including validation and correlation. * The relationship between peer review and altmetrics. * Evaluated tools for gathering, analyzing, or disseminating altmetrics. Papers will be reviewed on a rolling basis in line with PLoS ONE standard practices. Please note that all submissions submitted before January 28, 2012 will be considered for the launch of the collection (expected spring 2012); submissions after this date will still be considered for the collection, but may not appear in the collection at launch. Submission Guidelines If you wish to submit your research to the Altmetrics: Tracking scholarly impact on the social Web Collection, please consider the following when preparing your manuscript: - All articles must adhere to the PLoS ONE submission guidelines. - Standard PLoS ONE policies and publication fees apply to all submissions. - Submission to PLoS ONE as part of the Altmetrics Collection does not guarantee publication. When you are ready to submit your manuscript to the collection, please log in to the PLoS ONE manuscript submission system and insert "Altmetrics" in the relevant field to ensure the PLoS ONE staff are aware of your submission to the Collection. Once you have registered, you can follow the steps for manuscript submission. Please contact Lindsay King (lking at plos.org) if you would like further information about how to submit your research to the PLoS ONE Altmetrics Collection. Organizers: Paul Groth, VU University Amsterdam; Dario Taraborelli, Wikimedia Foundation; Jason Priem, UNC-Chapel Hill. About PLoS ONE PLoS ONE (eISSN-1932-6203) is an international, peer-reviewed, open-access, online publication.
PLoS ONE welcomes reports on primary research from any scientific discipline. -- Jason Priem UNC Royster Fellow School of Information and Library Science University of North Carolina at Chapel Hill -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmd8 at CORNELL.EDU Thu Dec 8 09:36:27 2011 From: pmd8 at CORNELL.EDU (Philip Davis) Date: Thu, 8 Dec 2011 09:36:27 -0500 Subject: Is Peer Review a Coin Toss? Message-ID: For readers interested in the dynamics of peer review as a lottery: Is Peer Review a Coin Toss? by Tim Vines The Scholarly Kitchen, 8 Dec, 2011 http://scholarlykitchen.sspnet.org/2011/12/08/is-peer-review-a-coin-toss/ From p.f.wouters at CWTS.LEIDENUNIV.NL Fri Dec 9 05:57:33 2011 From: p.f.wouters at CWTS.LEIDENUNIV.NL (Paul Wouters) Date: Fri, 9 Dec 2011 11:57:33 +0100 Subject: New Leiden Ranking Message-ID: Dear colleagues US still dominates high impact publications in science The US is still the dominant scientific world power, but new centres of science are emerging. MIT is the university with the highest citation impact of its publications in the world. Princeton and Harvard take positions two and three. These are some of the findings of the new Leiden Ranking 2011-2012, which has been published on the website www.leidenranking.com. The top fifty list consists of 42 US-based universities, 2 Swiss (Lausanne at 12 and ETH Zurich at 18), 1 Israeli (Weizmann Institute of Science), 4 British (Cambridge at 31, London School of Hygiene & Tropical Medicine at 33, Oxford at 36 and Durham at 42), and one Danish university (Technical University of Denmark). Aggregated to country level, the US has 64 universities in the top 100 list, the UK 12, and the Netherlands 7. The latter is remarkable given the country's small size.
The Leiden Ranking 2011-2012 is based on an advanced methodology which compensates for distorting effects due to the size of the university, differences in citation characteristics between scientific fields, differences between English and non-English publications, and the distorting effects of extremely highly cited publications. Publications authored by researchers at different universities are attributed to the universities as fractions. This prevents distortion of the ranking by counting these publications multiple times (once for each co-authoring university). This distorting effect is often overlooked in other global university rankings, which leads to a relative advantage for clinical research and some physics fields in those rankings. This makes clear how sensitive global rankings are to the nitty-gritty of the calculations. The Leiden Ranking enables users to choose the criteria on which they wish to compare university performance. The menu offers 3 indicators of impact and 4 indicators of scientific collaboration. When scored on the percentage of papers produced in collaboration with institutes in different countries, the London School of Hygiene & Tropical Medicine tops the list, with more than 50% of its publications co-authored with other countries. Although in terms of impact US universities are still strongest, it is clear that other countries are emerging as centres of science when one looks at total production (number of publications in the Web of Science). In this ranking, Harvard University is number one. But in the top 25 we also see universities from Canada (Toronto at 2, British Columbia at 22), Japan (Tokyo at 4, Kyoto at 11, Osaka at 25), Brazil (Sao Paulo at 8), the United Kingdom (Cambridge at 13, Oxford at 14, University College at 17), South Korea (Seoul at 19), and China (Zhejiang at 20). The Leiden Ranking is the first global university ranking to have published the details of its methodology and indicators.
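The fractional attribution of co-authored publications described above can be sketched in a few lines. This is a minimal illustration of the counting rule, not the CWTS implementation; the paper lists and university names below are invented for the example.

```python
from collections import defaultdict

def fractional_counts(papers):
    """Attribute each paper to its co-authoring universities as fractions.

    `papers` is a list of lists; each inner list names the universities
    on one paper. A paper with k distinct universities contributes 1/k
    credit to each, so every paper adds exactly 1.0 credit in total --
    which is what prevents multiple counting across co-authors.
    """
    credit = defaultdict(float)
    for unis in papers:
        unis = set(unis)              # count each university once per paper
        for u in unis:
            credit[u] += 1.0 / len(unis)
    return dict(credit)

# Hypothetical example: one co-authored paper, one sole-authored paper.
papers = [["MIT", "Harvard"],         # shared: 0.5 credit each
          ["MIT"]]                    # sole-authored: 1.0 credit
print(sorted(fractional_counts(papers).items()))
# [('Harvard', 0.5), ('MIT', 1.5)]
```

Note that the credits sum to the number of papers (here 2.0), which is exactly why fractional counting avoids the multiple-counting distortion the announcement mentions.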
The indicators are presented in combination with stability intervals, an advanced statistical method to measure to what extent the differences in rankings between universities are significant. If one wishes to compare university citation impact in a global context, it is best to take the percentage of papers among the top 10% most highly cited papers together with the calculation method "fractional counting". This is the method which compares across institutions and fields in the fairest way. The Leiden Ranking is based on data from the Web of Science. Data on the arts and humanities are not included, since these fields are not well represented in the Web of Science. The Leiden Ranking exclusively measures the citation impact of research at the 500 largest universities in the world. This avoids an arbitrary combination of performance in education, valorization and research, a disadvantage of many global university rankings. More information about the ranking results and its methodology: www.leidenranking.com. With regards Paul Wouters Professor of Scientometrics Director Centre for Science and Technology Studies Leiden University Visiting address: Willem Einthoven Building Wassenaarseweg 62A 2333 AL Leiden Mail address: P.O. Box 905 2300 AX Leiden T: +31 71 5273909 (secr.) F: +31 71 5273911 E: p.f.wouters at cwts.leidenuniv.nl CWTS home page: www.cwts.nl Blog about Citation Cultures: http://citationculture.wordpress.com/ Research Dreams: www.researchdreams.nl -------------- next part -------------- An HTML attachment was scrubbed... URL: From loet at LEYDESDORFF.NET Sat Dec 10 15:18:07 2011 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sat, 10 Dec 2011 21:18:07 +0100 Subject: New Leiden Ranking In-Reply-To: Message-ID: Dear colleagues, Using the newly introduced indicator for impact in the Leiden Rankings 2011/2012, the "Proportion top-10% publications" (PPtop 10%), one can test differences between institutions statistically using the z-test.
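In concrete terms, this is presumably the standard two-proportion z-test applied to each university's share of top-10% publications. A minimal sketch, with invented counts rather than Leiden data:

```python
from math import sqrt

def z_test_proportions(x1, n1, x2, n2):
    """Two-proportion z-test comparing the shares x1/n1 and x2/n2.

    Here x is a university's number of top-10% publications and n its
    total number of publications. |z| > 1.96 indicates a difference
    significant at the 5% level (two-sided).
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                  # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under H0
    return (p1 - p2) / se

# Hypothetical counts: 200/1000 vs 120/1000 top-10% papers.
z = z_test_proportions(200, 1000, 120, 1000)
print(round(z, 2))  # 4.88 -> the difference is significant
```

The same test can be run against an expected share (10% of a university's output) to check whether it performs above or below expectation, as described in the message.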
Furthermore, one can test whether each university performs above or below expectation. An Excel sheet with the test embedded is made available at http://www.leydesdorff.net/leiden11/leiden11.xls and an example is elaborated in a short introduction at http://www.leydesdorff.net/leiden11/index.htm (coauthored with Lutz Bornmann). The test was previously used analogously for the Excellence Indicator in the SCImago Institutions Rankings (cf. http://www.leydesdorff.net/scimago11; Bornmann et al., in press), and can be considered as additional to the stability intervals provided on the webpages of the Leiden Ranking. The SCImago Rankings are based on Scopus data, and the Leiden Ranking on Web-of-Science data. With best wishes, Loet _____ Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20-525 6598; fax: +31-842239111 loet at leydesdorff.net; http://www.leydesdorff.net/; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ksc at LIBRARY.IISC.ERNET.IN Mon Dec 12 05:30:09 2011 From: ksc at LIBRARY.IISC.ERNET.IN (K S Chudamani) Date: Mon, 12 Dec 2011 16:00:09 +0530 Subject: New Leiden Ranking In-Reply-To: <000901ccb778$d5976890$80c639b0$@leydesdorff.net> Message-ID: Dear all, I would like to point out that the elaborate method used is simply an extension of the regular method. We have used the h-index to rank universities in India; this has been published in the Pearl journal and gives similar results. Why, then, complicate matters? Chudamani
From eugene.garfield at THOMSONREUTERS.COM Mon Dec 12 17:36:36 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Mon, 12 Dec 2011 22:36:36 +0000 Subject: Papers of interest to SIG Metrics Message-ID: ========================== Start of Data ========================= -------------------------------------------------------------------------- TITLE: An updated ranking of academic journals in economics (Article, English) AUTHOR: Kalaitzidakis, P; Mamuneas, TP; Stengos, T SOURCE: CANADIAN JOURNAL OF ECONOMICS-REVUE CANADIENNE D ECONOMIQUE 44 (4). NOV 2011. p.1525-1538 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): JOURNALS item_title KEYWORDS: A10; A14 KEYWORDS+: RELATIVE IMPACTS; CITATION ABSTRACT: We conduct an update of the ranking of economic journals by Kalaitzidakis, Mamuneas, and Stengos (2003). However, our present study differs methodologically from that earlier study in an important dimension. We use a rolling window of years between 2003 and 2008, for each year counting the number of citations of articles published in the previous 10 years. This allows us to obtain a smoother longer view of the evolution of rankings in the period under consideration and avoid the inherent randomness that may exist at any particular year, because of new entrants. AUTHOR ADDRESS: P Kalaitzidakis, Univ Crete, Khania, Greece -------------------------------------------------------------------------- TITLE: Bibliometric analysis of Mexican scientific production in hydraulic engineering based on journals in the Science Citation Index- Expanded database (1997-2008) (Article, Spanish) AUTHOR: Rojas-Sola, JI; Jorda-Albinana, B SOURCE: TECNOLOGIA Y CIENCIAS DEL AGUA 2 (4). OCT-DEC 2011. 
p.195-213 INST MEXICANO TECNOLOGIA AGUA, MORELOS SEARCH TERM(S): GARFIELD E BRIT MED J 313:411 1996 KEYWORDS: bibliometrics; hydraulic engineering; scientific production; Mexico; Science Citation Index-Expanded KEYWORDS+: IMPACT FACTORS; PUBLICATIONS; INDICATORS; CATEGORY ABSTRACT: ROJAS-SOLA, J.I. & JORDA-ALBINANA, B. Bibliometric analysis of Mexican scientific production in hydraulic engineering based on journals in the Science Citation Index-Expanded database (1997-2008). Water Technology and Sciences (in Spanish). Vol. II, No. 4, October-December, 2011, pp. 195-213. The objective of this work was, first, to identify hydraulic engineering journals published throughout Latin America. To this end, the initial focus was to review the Science Citation Index-Expanded (SCI-E) database for journals associated with two categories: water resources and civil engineering. This resulted in a total of 20. Second, a bibliometric analysis was performed of papers published in those journals between 1997 and 2008 by Mexican research institutions. This analysis found 373 papers in the 20 journals, of which 298 were in Spanish, 73 in English and 2 in French. Mexico has become the second most published country in Latin America in terms of scientific articles and has the third greatest sum of mean impact for the journals in which they are published. Furthermore, the journal Ingenieria Hidraulica en Mexico (Hydraulic Engineering in Mexico) represents 81% of all Mexican and 33.51% of all Latin American scientific production. International collaborations were also identified, mainly with the United States, France and Spain.
AUTHOR ADDRESS: JI Rojas-Sola, Univ Jaen, Campus Lagunillas S-N, Jaen 23071, Spain -------------------------------------------------------------------------- TITLE: COLLABORATIVE AUTHORSHIP IN THE SCIENCES Anti-ownership and Citation Practices in Chemistry and Biology (Article, English) AUTHOR: Buranen, L; Stephenson, D SOURCE: WHO OWNS THIS TEXT: PLAGIARISM, AUTHORSHIP, AND DISCIPLINARY CULTURES. 2009. p.49-79 UTAH STATE UNIV PRESS, LOGAN SEARCH TERM(S): CITATION item_title; CITATION* item_title AUTHOR ADDRESS: L Buranen, Calif State Univ Los Angeles, Univ Writing Ctr, Los Angeles, CA 90032 USA - From eugene.garfield at THOMSONREUTERS.COM Tue Dec 13 14:14:46 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Tue, 13 Dec 2011 19:14:46 +0000 Subject: Papers of possible interest to SIG Metrics readers Message-ID: ========================= TITLE: The impact factor of rheumatology journals: an analysis of 2008 and the recent 10 years (Article, English) AUTHOR: Chen, M; Zhao, MH; Kallenberg, CGM SOURCE: RHEUMATOLOGY INTERNATIONAL 31 (12). DEC 2011. p.1611-1615 SPRINGER HEIDELBERG, HEIDELBERG SEARCH TERM(S): IMPACT FACTOR* GARFIELD E SCIENCE 122:108 1955; GARFIELD E ANN INTERN MED 105:313 1986 KEYWORDS: Impact factor; Rheumatology; Journals; Clinical medicine KEYWORDS+: SCIENCE ABSTRACT: Despite various weaknesses, the impact factor (IF) is still used as an important indicator for scientific quality in specific subject categories. In the current study, the IFs of rheumatology journals over the past 10 years were serially analyzed and compared with those from other fields. For the past 10 years (1999-2008), the IFs published by the Institute for Scientific Information in the Science Citation Index-Journal Citation Report were analyzed. For the majority of rheumatology journals, the IF shows a gradually increasing trend. The mean and median increase in IF from 1999 to 2008 is 233.9% and 66.5%, respectively.
The increase in IF from 1999, or the first year with IF documentation, to that in 2008 was higher for European journals than for US journals. The aggregate IF and the median IF of rheumatology journals remained within the top 30% and top 15% of clinical medical and all scientific categories, respectively. Over the past 10 years, rheumatology journals showed a general increase in IF, and rheumatology remained a leading discipline. For journals in the English language, those from Europe had an even higher increase than those from the USA. AUTHOR ADDRESS: M Chen, Peking Univ, Hosp 1, Div Renal, Dept Med, Beijing 100034, Peoples R China -------------------------------------------------------------------------- TITLE: The birth and growth of a scientific journal (Article, English) AUTHOR: Kent, RD SOURCE: CLINICAL LINGUISTICS & PHONETICS 25 (11-12). NOV 2011. p.917-921 INFORMA HEALTHCARE, LONDON KEYWORDS: clinical linguistics; clinical phonetics; publication; history ABSTRACT: Clinical Linguistics & Phonetics (CLP) and its namesake field have accomplished a great deal in the last quarter of a century. The success of the journal parallels the growth and vitality of the field it represents. The markers of journal achievement are several, including an increased number of journal pages published annually; greater diversity of topics related to the core mission of the journal; expanding cross-language coverage; healthy interactions among editors, reviewers and contributors; and - for better or worse - journal impact factors. A journal is in a competitive dynamic with other journals that share its general domain of scholarship, which is a major reason why an apparent imbalance may emerge in the topic content of any particular journal. The content of a journal is determined by the nature and number of submitted manuscripts.
As far as linguistic content goes, CLP's centre of gravity appears to have been mostly in phonology and phonetics, but certainly not to the exclusion of syntax, semantics and pragmatics. The clinical scope is broad, both in terms of concepts and types of disorder. CLP has secured its place among journals in the field, and it is an outlet of choice for many researchers throughout the world. AUTHOR ADDRESS: RD Kent, Univ Wisconsin, Waisman Ctr, Rm 435, 1500 Highland Ave, Madison, WI 53705 USA -------------------------------------------------------------------------- TITLE: Scientists' Opinions on the Global Status and Management of Biological Diversity (Article, English) AUTHOR: Rudd, MA SOURCE: CONSERVATION BIOLOGY 25 (6). DEC 2011. p.1165-1175 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): HIRSCH JE P NATL ACAD SCI USA 102:16569 2005 KEYWORDS: best-worst scaling; biodiversity loss; conservation policy; conservation strategies; latent class; triage; clase latente; escala mejor-peor; estrategias de conservacion; perdida de biodiversidad; politicas de conservacion; triaje KEYWORDS+: CLIMATE-CHANGE; BIODIVERSITY; SCIENCE; POLICY ABSTRACT: The large investments needed if loss of biological diversity is to be stemmed will likely lead to increased public and political scrutiny of conservation strategies and the science underlying them. It is therefore crucial to understand the degree of consensus or divergence among scientists on core scientific perceptions and the strategies most likely to achieve given objectives. I developed an internet survey designed to elucidate the opinions of conservation scientists. Conservation scientists (n = 583) were nearly unanimous (99.5%) in their view that a serious loss of biological diversity is likely, very likely, or virtually certain. Scientists' agreement that serious loss is very likely or virtually certain ranged from 72.8% for Western Europe to 90.9% for Southeast Asia.
Tropical coral ecosystems were perceived as the most seriously affected by loss of biological diversity; 88.0% of respondents familiar with that ecosystem type agreed that a serious loss is very likely or virtually certain. With regard to conservation strategies, scientists most often viewed understanding how people and nature interact in certain contexts and the role of biological diversity in maintaining ecosystem function as their priorities. Protection of biological diversity for its cultural and spiritual values and because of its usefulness to humans were low priorities, which suggests that many scientists do not fully support the utilitarian concept of ecosystem services. Many scientists expressed a willingness to consider conservation triage, engage in active conservation interventions, and consider reframing conservation goals and measures of success for conservation of biological diversity in an era of climate change. Although some heterogeneity of opinion is evident, results of the survey show a clear consensus within the scientific community on core issues of the extent and geographic scope of loss of biological diversity and on elements that may contribute to successful conservation strategies in the future. AUTHOR ADDRESS: MA Rudd, Univ York, Dept Environm, York YO10 5DD, N Yorkshire, England -------------------------------------------------------------------------- TITLE: A review and rationalisation of journal subscriptions undertaken by a library and information service in a Mental Health Trust in North-East England in 2009 (Review, English) AUTHOR: Steele, R SOURCE: HEALTH INFORMATION AND LIBRARIES JOURNAL 28 (4). DEC 2011. 
p.256-263 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): GARFIELD E rauth; KEYWORDS: collection development; electronic journals; health care; health science; journals; librarianship; libraries; marketing and publicity KEYWORDS+: NURSES; IMPACT; USAGE; CARE ABSTRACT: Aim: To describe the methods and processes used in an evaluation of local journal subscriptions in a mental health trust and to suggest possible further areas of investigation were similar exercises to be undertaken again. Method and Results: Results from a user questionnaire were analysed along with e-journal usage statistics and data from local document supply activity. Conclusions: Journal reviews can yield surprising results. Carrying out a user survey is valuable in highlighting awareness of e-resources more generally and thus in providing evidence for marketing/information literacy initiatives. Future journal reviews should undertake impact analysis as potent evidence for continued expenditure on journals in this age of austerity. AUTHOR ADDRESS: R Steele, Lib & Informat Serv Educ Ctr, Lanchester Rd, Durham DH1 5RD, England -------------------------------------------------------------------------- TITLE: Effectiveness of bibliographic searches performed by paediatric residents and interns assisted by librarians. A randomised controlled trial (Article, English) AUTHOR: Gardois, P; Calabrese, R; Colombi, N; Deplano, A; Lingua, C; Longo, F; Villanacci, MC; Miniero, R; Piga, A SOURCE: HEALTH INFORMATION AND LIBRARIES JOURNAL 28 (4). DEC 2011. 
p.273-284 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): BIBLIOGRAPHIC* item_title KEYWORDS: decision support; evidence based library and information practice; evidence based practice; evidence-based medicine; health science; health services research; information seeking behaviour; librarians; library and information science; reflective practice KEYWORDS+: EVIDENCE-BASED MEDICINE; INFORMATION-SEEKING BEHAVIOR; OF-THE-LITERATURE; QUESTIONNAIRE SURVEY; CLINICAL QUESTIONS; EMPIRIC PROJECT; PRIMARY-CARE; PHYSICIANS; IMPACT; SKILLS ABSTRACT: Background: Considerable barriers still prevent paediatricians from successfully using information retrieval technology. Objectives: To verify whether the assistance of biomedical librarians significantly improves the outcomes of searches performed by paediatricians in biomedical databases using real-life clinical scenarios. Methods: In a controlled trial at a paediatric teaching hospital, nine residents and interns were randomly allocated to an assisted search group and nine to a non-assisted (control) group. Each participant searched PUBMED and other online sources, performing pre-determined tasks including the formulation of a clinical question and the retrieval and selection of bibliographic records. In the assisted group, participants were supported by a librarian with 5 years of experience. The primary outcome was the success of search sessions, scored against a specific assessment tool. Results: The median score of the assisted group was 73.6 points (interquartile range, IQR = 13.4) vs. 50.4 (IQR = 17.1) for the control group. The difference between median values was 23.2 points (95% CI 4.8-33.2) in favour of the assisted group (P-value, Mann-Whitney U test: 0.013). Conclusions: The study has found quantitative evidence of a significant difference in search performance between paediatric residents or interns assisted by a librarian and those searching the literature alone.
AUTHOR ADDRESS: P Gardois, Univ Sheffield, Sch Hlth & Related Res, 30 Regent St, Sheffield S1 4DA, S Yorkshire, England - From loet at LEYDESDORFF.NET Wed Dec 14 03:28:37 2011 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Wed, 14 Dec 2011 09:28:37 +0100 Subject: New Leiden Ranking In-Reply-To: <54397.10.16.40.11.1323685809.squirrel@mailserver> Message-ID: Dear Chudamani, It seems to me that an indicator or ranking which allows for statistical testing of the significance of differences and positions is far superior to one which does not (such as the h-index). Fortunately, these two major teams in our field (Granada and Leiden University) have agreed on using such an indicator for the Scopus and WoS databases, respectively. Of course, there remains the problem of interdisciplinarity/multidisciplinarity in institutional units such as universities. One might say that a university can improve its average citation score by closing the mathematics department. :-) This can be counteracted a bit by field-normalization and perhaps by fractionation of the citations (1/NRef) in the citing papers. Best wishes, Loet On Mon, Dec 12, 2011 at 11:30 AM, K S Chudamani wrote:
> Adminstrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Dear all,
>
> I would like to point out that the elaborate method used is simply an extension of the regular method. We have used the h-index and ranked universities in India. It has been published in the Pearl journal. This also gives similar results.
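[The kind of significance test Loet has in mind here, applied in the Leiden Rankings to shares of top-10% publications (PPtop 10%), is the two-proportion z-test. A minimal sketch follows; the publication counts are made up for illustration and are not from the actual ranking data.]

```python
from math import sqrt

def pp_top10_ztest(x1, n1, x2, n2):
    """z statistic for the difference between two universities' shares of
    top-10% publications: x = number of top-10% papers, n = total papers.
    |z| > 1.96 indicates a difference significant at the 5% level.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts for two universities:
z = pp_top10_ztest(220, 1500, 150, 1400)
print(round(z, 2))  # ≈ 3.19, so the difference is significant at the 5% level
```

[Unlike a bare h-index comparison, this attaches an explicit sampling uncertainty to the difference between two institutions, which is the point Loet is making.]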
> Then why break our heads by complicating matters?
>
> Chudamani
>
> > Adminstrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > Dear colleagues,
> >
> > Using the newly introduced indicator for impact in the Leiden Rankings
> > 2011/2012, the "Proportion of top-10% publications" (PPtop 10%), one can
> > test differences between institutions statistically using the z-test.
> > Furthermore, one can test whether each university performs above or
> > below expectation.
> >
> > An Excel sheet with the test embedded is made available at
> > http://www.leydesdorff.net/leiden11/leiden11.xls and an example is
> > elaborated in a short introduction at
> > http://www.leydesdorff.net/leiden11/index.htm (coauthored with Lutz
> > Bornmann).
> >
> > The test was previously used analogously for the Excellence Indicator in
> > the SCImago Institutions Rankings (cf.
> > http://www.leydesdorff.net/scimago11; Bornmann et al., in press), and
> > can be considered as additional to the stability intervals provided at
> > the webpages of the Leiden Ranking. The SCImago Rankings are based on
> > Scopus data, and the Leiden Ranking on Web-of-Science data.
> >
> > With best wishes,
> >
> > Loet
> >
> > _____
> >
> > Amsterdam School of Communications Research (ASCoR),
> > Kloveniersburgwal 48, 1012 CX Amsterdam.
> > Tel.: +31-20-525 6598; fax: +31-842239111
> > loet at leydesdorff.net ; http://www.leydesdorff.net/
> > http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
> >
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Paul Wouters
> > Sent: Friday, December 09, 2011 11:58 AM
> > To: SIGMETRICS at LISTSERV.UTK.EDU
> > Subject: [SIGMETRICS] New Leiden Ranking
> >
> > Adminstrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > Dear colleagues,
> >
> > US still dominates high impact publications in science
> >
> > The US is still the dominant scientific world power, but new centres of
> > science are emerging. MIT is the university which has the highest
> > citation impact of its publications in the world. Princeton and Harvard
> > take positions two and three. These are some of the findings of the new
> > Leiden Ranking 2011-2012, which has been published on the website
> > www.leidenranking.com. The top fifty list consists of 42 US-based
> > universities, 2 Swiss (Lausanne at 12 and ETH Zurich at 18), 1 Israeli
> > (Weizmann Institute of Science), 4 British (Cambridge at 31, London
> > School of Hygiene & Tropical Medicine at 33, Oxford at 36 and Durham at
> > 42), and one Danish university (Technical University of Denmark).
> > Aggregated to country level, the US has 64 universities in the top 100
> > list, the UK 12, and the Netherlands 7. The latter is remarkable given
> > its small size.
> >
> > The Leiden Ranking 2011-2012 is based on an advanced methodology which
> > compensates for distorting effects due to the size of the university,
> > the differences in citation characteristics between scientific fields,
> > differences between English and non-English publications, and distorting
> > effects of extremely highly cited publications.
> > Publications authored by researchers at different universities are
> > attributed to the universities as fractions. [...]
--
Professor Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/
-------------- next part -------------- An HTML attachment was scrubbed... URL: From notsjb at LSU.EDU Wed Dec 14 12:35:24 2011 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Wed, 14 Dec 2011 11:35:24 -0600 Subject: Models Behaving Badly Message-ID: Although the book reviewed below pertains to the relationship of physics to economics, it also has relevance to the work being done in scientometrics. The review is written by Burton Malkiel, whose stock market model and advice I have followed to my great profit. Stephen J Bensman LSU Libraries Louisiana State University Baton Rouge, LA 70803 BOOKSHELF DECEMBER 14, 2011 Physics Envy Creating financial models involving human behavior is like forcing 'the ugly stepsister's foot into Cinderella's pretty glass slipper.' By BURTON G. MALKIEL Trained as a physicist, Emanuel Derman once served as the head of quantitative analysis at Goldman Sachs and is currently a professor of industrial engineering and operations research at Columbia University. With "Models Behaving Badly" he offers a readable, even eloquent combination of personal history, philosophical musing and honest confession concerning the dangers of relying on numerical models not only on Wall Street but also in life. Mr. Derman's particular thesis can be stated simply: Although financial models employ the mathematics and style of physics, they are fundamentally different from the models that science produces. Physical models can provide an accurate description of reality. Financial models, despite their mathematical sophistication, can at best provide a vast oversimplification of reality. In the universe of finance, the behavior of individuals determines value - and, as he says, "people change their minds."
In short, beware of physics envy. When we make models involving human beings, Mr. Derman notes, "we are trying to force the ugly stepsister's foot into Cinderella's pretty glass slipper. It doesn't fit without cutting off some of the essential parts." As the collapse of the subprime collateralized debt market in 2008 made clear, it is a terrible mistake to put too much faith in models purporting to value financial instruments. "In crises," Mr. Derman writes, "the behavior of people changes and normal models fail. While quantum electrodynamics is a genuine theory of all reality, financial models are only mediocre metaphors for a part of it." Throughout "Models Behaving Badly," Mr. Derman treats us to vignettes from his interesting personal history, which gave him a front-row seat for more than one model's misbehavior. Growing up in Cape Town, South Africa, he witnessed the repressive and failed political model of apartheid. Later he became disillusioned with the utopian model of the kibbutz in Israel. He started out professionally in the 1970s as a theoretical physicist. He then migrated to the center of the financial world in the 1980s, using a mix of mathematics and statistics to value securities for the trading desk at Goldman Sachs in New York. He had hoped to use the methods of physics to build a grand, unified theory of security pricing. After 20 years on Wall Street, even before the meltdown, he became a disbeliever. "Models Behaving Badly" by Emanuel Derman (Free Press, 231 pages, $26). He sums up his key points about how to keep models from going bad by quoting excerpts from his "Financial Modeler's Manifesto" (written with Paul Wilmott), a paper he published a couple of years ago.
Among its admonitions: "I will always look over my shoulder and never forget that the model is not the world"; "I will not be overly impressed with mathematics"; "I will never sacrifice reality for elegance"; "I will not give the people who use my models false comfort about their accuracy"; "I understand that my work may have enormous effects on society and the economy, many beyond my apprehension." Sampling from models that behave well, Mr. Derman gives an eloquent description of James Clerk Maxwell's electromagnetic theory in a chapter titled "The Sublime." He writes: "The electromagnetic field is not like Maxwell's equations; it is Maxwell's equations." In another chapter, titled "The Absolute," he outlines Spinoza's "Theory of Emotions"-a description of the nature of emotions that did for man's inner life, Mr. Derman says, "what Euclid did for geometry." But then he turns to financial models-behaving badly. The basic problem, according to Mr. Derman, is that "in physics you're playing against God, and He doesn't change His laws very often. In finance, you're playing against God's creatures." And God's creatures use "their ephemeral opinions" to value assets. Moreover, most financial models "fail to reflect the complex reality of the world around them." It is hard to argue with this basic thesis. Nevertheless, Mr. Derman is perhaps a bit too harsh when he describes EMM-the so-called Efficient Market Model. EMM does not, as he claims, imply that prices are always correct and that price always equals value. Prices are always wrong. What EMM says is that we can never be sure if prices are too high or too low. The Efficient Market Model does not suggest that any particular model of valuation-such as the Capital Asset Pricing Model-fully accounts for risk and uncertainty or that we should rely on it to predict security returns. EMM does not, as Mr. Derman says, "stubbornly assume that all uncertainty about the future is quantifiable." 
The basic lesson of EMM is that it is very difficult - well-nigh impossible - to beat the market consistently. This lesson, or "model," behaves very well when investors follow it. It says that most investors would be better off simply buying a low-cost index fund that holds all the securities in the market rather than using either quantitative models or intuition in an attempt to beat the market. The idea that significant arbitrage opportunities are unlikely to exist (and certainly do not persist) is precisely the mechanism behind the Black-Scholes option-pricing model that Mr. Derman admires as a financial model behaving pretty well. Such a quibble aside, it is undeniable that "Models Behaving Badly" itself performs splendidly. Bringing ethics into his analysis, Mr. Derman has no patience for coddling the folly of individuals and institutions who over-rely on faulty models and then seek to escape the consequences. He laments the aftermath of the 2008 financial meltdown, when banks rebounded "to record profits and bonuses" thanks to taxpayer bailouts. If you want to benefit from the seven fat years, he writes, "you must suffer the seven lean years too, even the catastrophically lean ones. We need free markets, but we need them to be principled." Mr. Malkiel is the author of "A Random Walk Down Wall Street." Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugene.garfield at THOMSONREUTERS.COM Thu Dec 15 14:07:53 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Thu, 15 Dec 2011 19:07:53 +0000 Subject: Papers of interest to SIG Metrics readers Message-ID: TITLE: Evolution and structure of sustainability science (Article, English) AUTHOR: Bettencourt, LMA; Kaur, J SOURCE: PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 108 (49). DEC 6 2011. p.19540-19545 NATL ACAD SCIENCES, WASHINGTON SEARCH TERM(S): GARFIELD E CURR CONTENTS :5 1980 KEYWORDS: science of science; population dynamics; geography; topological transition; networks KEYWORDS+: MATHEMATICAL APPROACH; IDEAS; SPREAD; MAPS ABSTRACT: The concepts of sustainable development have experienced extraordinary success since their advent in the 1980s. They are now an integral part of the agenda of governments and corporations, and their goals have become central to the mission of research laboratories and universities worldwide. However, it remains unclear how far the field has progressed as a scientific discipline, especially given its ambitious agenda of integrating theory, applied science, and policy, making it relevant for development globally and generating a new interdisciplinary synthesis across fields. To address these questions, we assembled a corpus of scholarly publications in the field and analyzed its temporal evolution, geographic distribution, disciplinary composition, and collaboration structure. We show that sustainability science has been growing explosively since the late 1980s, when foundational publications in the field increased its pull on new authors and intensified their interactions. The field has an unusual geographic footprint, combining contributions and connecting, through collaboration, cities and nations at very different levels of development.
Its decomposition into traditional disciplines reveals its emphasis on the management of human, social, and ecological systems seen primarily from an engineering and policy perspective. Finally, we show that the integration of these perspectives has created a new field only in recent years as judged by the emergence of a giant component of scientific collaboration. These developments demonstrate the existence of a growing scientific field of sustainability science as an unusual, inclusive and ubiquitous scientific practice and bode well for its continued impact and longevity. AUTHOR ADDRESS: LMA Bettencourt, Santa Fe Inst, 1399 Hyde Pk Rd, Santa Fe, NM 87501 USA -------------------------------------------------------------------------- TITLE: Profile and Scientific Production of CNPq Researchers in Cardiology (Article, English) AUTHOR: Oliveira, EA; Ribeiro, ALP; Quirino, IG; Oliveira, MCL; Martelli, DR; Lima, LS; Colosimo, EA; Lopes, TJ; Silva, ACSE; Martelli, H Jr SOURCE: ARQUIVOS BRASILEIROS DE CARDIOLOGIA 97 (3). SEP 2011. p.186-193 ARQUIVOS BRASILEIROS CARDIOLOGIA, RIO DE JANEIRO SEARCH TERM(S): SEGLEN PO J AM SOC INFORM SCI 45:1 1994 KEYWORDS: Bibliometric indicators; scientific and technical publications; cardiology; education; medical, graduate; health sciences KEYWORDS+: IMPACT FACTOR; SCIENCE; BRAZIL ABSTRACT: Background: Systematic assessments of scientific production can optimize resource allocation and increase research productivity in Brazil. Objective: The aim of this study was to evaluate the profile and scientific production of researchers in the field of Cardiology who hold a fellowship in Medicine provided by the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico. Methods: The Lattes curricula of 33 researchers with active fellowships from 2006 to 2008 were included in the analysis. The variables of interest were: gender, affiliation, tutoring of undergraduate, master's and PhD students, and scientific production and its impact.
Results: There was a predominance of males (72.7%) and of fellowship level 2 (56.4%). Three states of the Federation were responsible for 94% of the researchers: SP (28; 71.8%), RS (4; 10.3%), and RJ (3; 9.1%). Four institutions are responsible for about 82% of researchers: USP (13; 39.4%), UNESP (5; 15.2%), UFRGS (4; 12.1%) and UNIFESP (3; 9.1%). Over their academic careers, the researchers published 2,958 journal articles, with a mean of 89 articles per researcher. Of the total, 55% and 75% were indexed in the Web of Science and Scopus databases, respectively. The researchers received a total of 19,648 citations in the Web of Science database, with a median of 330 citations per researcher (IQ = 198-706). The average number of citations per article was 13.5 citations (SD = 11.6). Conclusions: Our study has shown that researchers in the field of cardiology have a relevant scientific production. The knowledge of the profile of researchers in the field of Cardiology will probably enable effective strategies to qualitatively improve the scientific output of Brazilian researchers. (Arq Bras Cardiol 2011; 97(3): 186-193) AUTHOR ADDRESS: EA Oliveira, Rua Engenheiro Amaro Lanari 389-501, BR-30310580 Belo Horizonte, MG, Brazil -------------------------------------------------------------------------- TITLE: Classification and Visualization of the Social Science Network by the Minimum Span Clustering Method (Article, English) AUTHOR: Chang, YF; Chen, CM SOURCE: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 62 (12). DEC 2011. p.2404-2413 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): GRIFFITH BC SCI STUD 4:339 1974; SMALL H SCI STUD 4:17 1974; SMALL H J AM SOC INFORM SCI 50:799 1999; GARFIELD E J INDIAN I SCI 57:61 1975 KEYWORDS+: SCIENTIFIC LITERATURES; JOURNALS ABSTRACT: We propose a minimum span clustering (MSC) method for clustering and visualizing complex networks using the interrelationship of network components.
To demonstrate this method, it is applied to classify the social science network in terms of aggregated journal-journal citation relations of the Institute of Scientific Information (ISI) Journal Citation Reports. This method of network classification is shown to be efficient, with a processing time that is linear to network size. The classification results provide an in-depth view of the network structure at various scales of resolution. For the social science network, there are 4 resolution scales, including 294 batches of journals at the highest scale, 65 categories of journals at the second, 15 research groups at the third scale, and 3 knowledge domains at the lowest resolution. By comparing the relatedness of journals within clusters, we show that our clustering method gives a better classification of social science journals than ISI's heuristic approach and hierarchical clustering. In combination with the minimum spanning tree approach and multi-dimensional scaling, MSC is also used to investigate the general structure of the network and construct a map of the social science network for visualization. AUTHOR ADDRESS: CM Chen, Natl Taiwan Normal Univ, Dept Phys, Taipei 117, Taiwan -------------------------------------------------------------------------- TITLE: The Structure of the Arts & Humanities Citation Index: A Mapping on the Basis of Aggregated Citations Among 1,157 Journals (Article, English) AUTHOR: Leydesdorff, L; Hammarfelt, B; Salah, A SOURCE: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 62 (12). DEC 2011.
p.2414-2426 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): GARFIELD E CURR CONTENTS :5 1979; GARFIELD E CURR CONTENTS :5 1982; PUDOVKIN AI J AM SOC INF SCI TEC 53:1113 2002; SMALL H J INF SCI 11:147 1985; GARFIELD E rauth; MARSHAKOVA IV rauth; SMALL H J AM SOC INFORM SCI 24:265 1973; SMALL H SCIENTOMETRICS 7:391 1985; JOURNALS item_title KEYWORDS+: PEARSONS CORRELATION-COEFFICIENT; SOCIAL-SCIENCES; MAPS; INDICATORS; ISI; INTERDISCIPLINARITY; PRODUCTIVITY; DISCIPLINES; COCITATIONS; HISTORY ABSTRACT: Using the Arts & Humanities Citation Index (A&HCI) 2008, we apply mapping techniques previously developed for mapping journal structures in the Science and Social Sciences Citation Indices. Citation relations among the 110,718 records were aggregated at the level of 1,157 journals specific to the A&HCI, and the journal structures are questioned on whether a cognitive structure can be reconstructed and visualized. Both cosine-normalization (bottom up) and factor analysis (top down) suggest a division into approximately 12 subsets. The relations among these subsets are explored using various visualization techniques. However, we were not able to retrieve this structure using the Institute for Scientific Information Subject Categories, including the 25 categories that are specific to the A&HCI. We discuss options for validation such as against the categories of the Humanities Indicators of the American Academy of Arts and Sciences, the panel structure of the European Reference Index for the Humanities, and compare our results with the curriculum organization of the Humanities Section of the College of Letters and Sciences of the University of California at Los Angeles as an example of institutional organization. 
AUTHOR ADDRESS: L Leydesdorff, Univ Amsterdam, Amsterdam Sch Commun Res ASCoR, Kloveniersburgwal 48, NL-1012 CX Amsterdam, Netherlands -------------------------------------------------------------------------- TITLE: Improving the Coverage of Social Science and Humanities Researchers' Output: The Case of the Erudit Journal Platform (Article, English) AUTHOR: Lariviere, V; Macaluso, B SOURCE: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 62 (12). DEC 2011. p.2437-2442 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): JOURNAL item_title KEYWORDS+: RESEARCH PERFORMANCE; CITATION DATABASE; PUBLICATIONS ABSTRACT: In non-English-speaking countries the measurement of research output in the social sciences and humanities (SSH) using standard bibliographic databases suffers from a major drawback: the underrepresentation of articles published in local, non-English, journals. Using papers indexed (1) in a local database of periodicals (Erudit) and (2) in the Web of Science, assigned to the population of university professors in the province of Quebec, this paper quantifies, for individual researchers and departments, the importance of papers published in local journals. It also analyzes differences across disciplines and between French-speaking and English-speaking universities. The results show that, while the addition of papers published in local journals to bibliometric measures has little effect when all disciplines are considered and for anglophone universities, it increases the output of researchers from francophone universities in the social sciences and humanities by almost a third. It also shows that there is very little relation, at the level of individual researchers or departments, between the output indexed in the Web of Science and the output retrieved from the Erudit database; a clear demonstration that the Web of Science cannot be used as a proxy for the "overall" production of SSH researchers in Quebec. 
The paper concludes with a discussion on these disciplinary and language differences, as well as on their implications for rankings of universities. AUTHOR ADDRESS: V Lariviere, Univ Montreal, Ecole Bibliothecon & Sci Informat, CP 6128,Succ Ctr Ville, Montreal, PQ H3C 3J7, Canada ------------------------------------------------------------------------- TITLE: Who Is Going to Win the Next Association for the Advancement of Artificial Intelligence Fellowship Award? Evaluating Researchers by Mining Bibliographic Data (Article, English) AUTHOR: Rokach, L; Kalech, M; Blank, I; Stern, R SOURCE: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 62 (12). DEC 2011. p.2456-2470 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; BIBLIOGRAPHIC* item_title KEYWORDS+: H-INDEX; NETWORKS; WEB; SCIENCE; FUSION; IMPACT ABSTRACT: Accurately evaluating a researcher and the quality of his or her work is an important task when decision makers have to decide on such matters as promotions and awards. Publications and citations play a key role in this task, and many previous studies have proposed using measurements based on them for evaluating researchers. Machine learning techniques as a way of enhancing the evaluation process have been relatively unexplored. We propose using a machine learning approach for evaluating researchers. In particular, the proposed method combines the outputs of three learning techniques (logistic regression, decision trees, and artificial neural networks) to obtain a unified prediction with improved accuracy. We conducted several experiments to evaluate the model's ability to: (a) classify researchers in the field of artificial intelligence as Association for the Advancement of Artificial Intelligence (AAAI) fellows and (b) predict the next AAAI fellowship winners.
We show that both our classification and prediction methods are more accurate than are previous measurement methods, and reach a precision rate of 96% and a recall of 92%. AUTHOR ADDRESS: L Rokach, Ben Gurion Univ Negev, Dept Informat Syst Engn, POB 653, IL-84105 Beer Sheva, Israel ------------------------------------------------------------------------- TITLE: The Use of h-index for the Assessment of Journals' Performance Will Lead to Shifts in Editorial Policies (Letter, English) AUTHOR: Jaric, I SOURCE: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 62 (12). DEC 2011. p.2546 WILEY-BLACKWELL, MALDEN SEARCH TERM(S): JOURNALS item_title; HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; LETTER* doctype AUTHOR ADDRESS: I Jaric, Univ Belgrade, Inst Multidisciplinary Res, Belgrade, Serbia -------------------------------------------------------------------------- TITLE: Most cited articles: ethanol-induced hepatotoxicity, anticarcinogenic effects of polyphenolic compounds in tea, dose-response modeling, novel roles of epoxide hydrolases and arsenic-induced suicidal erythrocyte death (Editorial Material, English) AUTHOR: Bolt, HM; Hengstler, JG SOURCE: ARCHIVES OF TOXICOLOGY 85 (12). DEC 2011. 
p.1485-1489 SPRINGER HEIDELBERG, HEIDELBERG SEARCH TERM(S): CITED item_title; EDITORIAL doctype KEYWORDS+: ISOLATED RAT HEPATOCYTES; OXIDATIVE STRESS; CELLS; EXPOSURE; RISK; TOXICOLOGY; ALCOHOL; BECAME; CANCER; DAMAGE AUTHOR ADDRESS: HM Bolt, TU Dortmund, Leibniz Inst Arbeitsforsch, Leibniz Res Ctr Working Environm & Human Factors, Ardeystr 67, D-44139 Dortmund, Germany - From umutal at HACETTEPE.EDU.TR Sat Dec 17 10:47:29 2011 From: umutal at HACETTEPE.EDU.TR (Umut AL) Date: Sat, 17 Dec 2011 17:47:29 +0200 Subject: Final Call for Papers - 3rd International Symposium on Information Management in a Changing World Message-ID: 3rd International Symposium on Information Management in a Changing World, September 19-21, 2012, Ankara, Turkey Symposium web site: by2012.bilgiyonetimi.net/en/ E-Science and Information Management (Third and Final Call for Papers) Organizer: Hacettepe University Department of Information Management, Ankara, Turkey (http://www.bby.hacettepe.edu.tr/eng/) Theme: "E-Science and Information Management" Objectives: IMCW2012 aims to bring together both researchers and information professionals to discuss the implications of e-science for information management. Some of these issues and challenges are as follows: information literacy, intellectual property rights, e-science and open access data archives, information processing and visualization tools, collection development and management, e-science librarianship, and so on. Keynote speaker: Dr. Tony Hey, Corporate Vice President of Microsoft Proceedings book: Accepted papers and posters will appear in the proceedings book to be published by Springer under its CCIS series (http://www.springer.com/series/7899) and on the Symposium web site. Papers that appear in Springer's CCIS series are indexed in Thomson Reuters' Conference Proceedings Citation Index. Main topics of the Symposium include (but are not limited to) the following: - Data Management Challenges in E-Science - Data Life-cycle in E-Science -
Information Discovery, Organization, and Retrieval in E-Science - Information Management and E-Science - Information Architecture for E-Science - Education for Information Management and E-Science - Scholarly Publishing, Open Access and Digital Repositories in E-Science - Digital Preservation of Scientific and Cultural Heritage - Social and Cultural Issues and E-Science How to submit: In addition to papers, short papers (pecha-kucha), posters, workshops and panels on e-science and information management, general papers on information management are also welcome. Student papers and posters will also be considered. Please use the template available on the Symposium web site to prepare your contributions and proposals, and send them to us using the Conference Management Software (OpenConf). Important dates First Call: July 2011 Second Call: October 2011 Third Call: December 2011 Last date to send papers and posters: 23 January 2012 Author notification: 5 March 2012 Final papers submission and registration: 7 May 2012 Symposium: 19-21 September 2012 Ex libris competition: Because IMCW2012 coincides with the 40th anniversary of the foundation of the Department of Information Management of Hacettepe University, to commemorate this event we have organized an international ex libris competition with the theme "information management" (http://exlibris.hacettepe.edu.tr/index.php?lang=en&page=HomePage). The winning ex libris artworks will be exhibited during the symposium. (Please note: Different deadlines apply for the ex libris competition. Please check the ex libris web site above for further information.) All suggestions and comments are welcome. Please send us your ideas about possible invited speakers at sempozyum at bilgiyonetimi.net.
Symposium Facebook event: https://www.facebook.com/event.php?eid=304487562911300&context=create Twitter hashtag: #by2012 If you wish to receive updates on the IMCW2012 Symposium and the other events organized by the Department of Information Management of Hacettepe University, you can also follow us on Twitter and Facebook. Looking forward to your contributions to and participation in the Symposium. Yaşar Tonta, Chair of the Organizing Committee Serap Kurbanoğlu, Chair of the Programme Committee Hacettepe University Department of Information Management 06800 Beytepe, Ankara, Turkey Tel: 0312 297 82 00 Fax: 0312 299 20 14 E-mail: tonta at hacettepe.edu.tr, serap at hacettepe.edu.tr From loet at LEYDESDORFF.NET Mon Dec 19 08:07:47 2011 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Mon, 19 Dec 2011 14:07:47 +0100 Subject: New Leiden Ranking In-Reply-To: <15003.220.225.247.226.1324295456.squirrel@202.41.64.20> Message-ID: Dear Gangan, One can also take the first moment: p*q. This is by definition equal to the number of papers in the top-10% most-highly cited papers. Following Rousseau's recent article, one can also consider two percentile rank classes. The top-10% is then counted with value = 1 and the others with value = 0. I am more inclined to appreciate the tails of the distribution, and thus to value these two ranks with 2 and 1, respectively. I added these two options in red to your spreadsheet (attached). Alternatively, one could use the six percentile ranks of the NSF (Bornmann & Mutz, 2011). According to the authors in Leiden, the rankings in the various (top) ranks (5%, 10%, 20%) would be rather similar
. Perhaps, it is also useful to distinguish between an excellence indicator (such as the top-10%) and an impact indicator (such as I3). Best wishes, Loet -----Original Message----- From: gprathap at cmmacs.ernet.in [mailto:gprathap at cmmacs.ernet.in] Sent: Monday, December 19, 2011 12:51 PM To: Loet Leydesdorff Cc: sigmetrics at listserv.utk.edu Subject: Re: [SIGMETRICS] New Leiden Ranking Dear Loet, I'm not sure the Leiden approach to rank entities on a quality proxy alone does justice to the big fellas like Harvard. A better method may be to use what I call the second order indicators, where the quantity proxy (read P, the number of papers) is multiplied by the square of the quality proxy (say, PP for a very simplistic measure of quality). Recently it was brought to my notice that Vinkler (Scientometrics, 1988, 14, 161-163) had advocated precisely this, arguing that by doing so, one "rewards" high quality and "punishes" low quality by such an approach. If the Leiden rankings are re-worked in this manner, we get the results shown in the attached spreadsheet. I have compared the 4 Indian institutions in the Leiden list with the "top 100". Harvard is 64 times more "effective" than the 4 Indian institutions in the Leiden list. Regards, Gangan Prathap > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Chudamani, > > It seems to me that an indicator or ranking which allows for > statistical testing of the significance of differences and positions > is far superior to one which does not (such as the h-index). > Fortunately, these two major teams in our field (Granada and Leiden > University) have agreed on using such an indicator for the Scopus and > WoS databases, respectively. > > Of course, there remains the problem of > interdisciplinarity/multidisciplinarity in institutional units such as > universities.
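[The counting options traded in this exchange, Loet's percentile rank classes (the top-10% class valued 1, or the two classes valued 2 and 1) and Gangan's Vinkler-style second-order indicator (quantity P multiplied by the square of the quality proxy PP), can be sketched in a few lines. A minimal illustration only: the function names are mine, not from either author, and in practice the top-10% citation threshold is field- and year-normalized rather than a single number.]

```python
def pp_top10(citations, threshold):
    """Proportion of a unit's papers in the top-10% most-highly-cited class
    (PPtop 10%), given an assumed citation threshold for that class."""
    return sum(c >= threshold for c in citations) / len(citations)

def two_class_score(citations, threshold, w_top=2, w_rest=1):
    """Two percentile rank classes: top-10% papers weighted w_top, the rest
    w_rest. With w_top=1, w_rest=0 this reduces to the plain count of
    top-10% papers, i.e. the first moment p*q mentioned above."""
    return sum(w_top if c >= threshold else w_rest for c in citations)

def second_order(p, pp):
    """Vinkler-style second-order indicator: quantity P times quality PP squared."""
    return p * pp ** 2
```

[With four papers cited 50, 20, 3 and 1 times and a threshold of 20, pp_top10 is 0.5 and the 2/1-weighted score is 6; the choice of weights is exactly the "tails of the distribution" judgment discussed in the message.]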
One might say that a university can improve its average > citation score by closing the mathematics department. :-) This can be > counteracted a bit by field-normalization and perhaps by fractionation > of the citations (1/NRef) in the citing papers. > > Best wishes, > Loet > > On Mon, Dec 12, 2011 at 11:30 AM, K S Chudamani > wrote: > >> Adminstrative info for SIGMETRICS (for example unsubscribe): >> http://web.utk.edu/~gwhitney/sigmetrics.html >> >> > >> >> >> >> dear all, >> >> I would like to point out that the elaborate method used is simply an >> extension of the regular method. We have used the h-index and ranked >> universities in India. It has been published in the Pearl journal. This >> also gives similar results. Then why break our heads by complicating >> matters? >> >> chudamani >> >> >> Adminstrative info for SIGMETRICS (for example unsubscribe): >> > http://web.utk.edu/~gwhitney/sigmetrics.html >> > >> > Dear colleagues, >> > >> > >> > >> > Using the newly introduced indicator for impact in the Leiden >> > Rankings >> > 2011/2012, the >> Proportion >> > of top-10% publications (PPtop 10%), one can test differences between >> > institutions statistically using the z-test. Furthermore, one can >> > test whether each university performs above or below expectation. >> > >> > >> > >> > An Excel sheet with the test embedded is made available at >> > http://www.leydesdorff.net/leiden11/leiden11.xls and an example is >> > elaborated in a short introduction at >> > http://www.leydesdorff.net/leiden11/index.htm (coauthored with Lutz >> > Bornmann). >> > >> > >> > >> > The test was previously used analogously for the Excellence >> > Indicator >> in >> > the SCImago Institutions Rankings >> > ; cf. >> > http://www.leydesdorff.net/scimago11; Bornmann >> > et al., in >> press), >> > and can be considered as additional to the stability intervals >> provided >> at >> > the webpages of the Leiden Ranking >> > .
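[The z-test referred to in the quoted announcement, for testing whether two institutions' proportions of top-10% publications differ significantly, can be sketched as a standard two-proportion z-test. This is an illustrative reconstruction under that assumption, not the code embedded in the linked Excel sheet, and the function name is mine.]

```python
import math

def z_test_top10(k1, n1, k2, n2):
    """Two-proportion z-test for PPtop 10%: k = papers in the top-10%
    most-highly-cited class, n = total papers, for institutions 1 and 2.
    |z| > 1.96 indicates a difference significant at the 5% level."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se
```

[For example, 150 top-10% papers out of 1,000 against 100 out of 1,000 gives z ≈ 3.38, significant at the 5% level; a university performing exactly at expectation would have k/n ≈ 0.10.]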
The SCImago Rankings >> > are based on Scopus data, and the Leiden Ranking on Web-of-Science data. >> > >> > >> > >> > With best wishes, >> > >> > Loet >> > >> > >> > >> > _____ >> > >> > Amsterdam School of Communications Research (ASCoR), >> > Kloveniersburgwal 48, 1012 CX Amsterdam. >> > Tel.: +31-20- 525 6598; fax: +31-842239111 >> > loet at leydesdorff.net ; >> > http://www.leydesdorff.net/ ; >> > >> > http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en >> > >> > >> > >> > From: ASIS&T Special Interest Group on Metrics >> > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Paul Wouters >> > Sent: Friday, December 09, 2011 11:58 AM >> > To: SIGMETRICS at LISTSERV.UTK.EDU >> > Subject: [SIGMETRICS] New Leiden Ranking >> > >> > >> > >> > Adminstrative info for SIGMETRICS (for example unsubscribe): >> > http://web.utk.edu/~gwhitney/sigmetrics.html Dear colleagues >> > >> > US still dominates high impact publications in science >> > >> > >> > >> > The US is still the dominant scientific world power, but new >> > centres >> of >> > science are emerging. MIT is the university which has the highest >> citation >> > impact of its publications in the world. Princeton and Harvard take >> > positions two and three. These are some of the findings of the new >> Leiden >> > Ranking 2011-2012, which has been published on the website: >> > www.leidenranking.com . The top >> > fifty >> list >> > consists of 42 US-based universities, 2 Swiss (Lausanne at 12 and >> > ETH Zurich at 18), 1 Israeli (Weizmann Institute of Science), 4 >> > British (Cambridge at 31, London School of Hygiene & Tropical >> > Medicine at 33, Oxford at 36 and Durham at 42), and one Danish >> > university (Technical University of Denmark). Aggregated to country >> > level, the US has 64 universities in the top 100 list, the UK 12, and the Netherlands 7. >> The >> > latter is remarkable given its small size.
>> > >> > >> > >> > The Leiden Ranking 2011-2012 is based on an advanced methodology >> > which compensates for distorting effects due to the size of the >> > university, >> the >> > differences in citation characteristics between scientific fields, >> > differences between English and non-English publications, and >> distorting >> > effects of extremely highly cited publications. Publications authored >> > by researchers at different universities are attributed to the >> universities >> > as fractions. This prevents distortion of the ranking by counting >> these >> > publications multiple times (for each co-authoring university). >> > This distorting effect is often overlooked in other global university >> rankings, >> > which leads to a relative advantage of clinical research and some >> physics >> > fields in these rankings. This makes clear how sensitive global >> rankings >> > are to the nitty-gritty of the calculations. >> > >> > >> > >> > The Leiden Ranking enables users to choose the criteria on which >> > they >> wish >> > to compare university performance. The menu offers 3 indicators of >> impact >> > and 4 indicators of scientific collaboration. When scored on the >> > percentage of their papers produced in collaboration with >> > institutes >> in >> > different countries, the London School of Hygiene & Tropical >> > Medicine >> tops >> > the list with more than 50% of its publications co-authored with >> other >> > countries. >> > >> > >> > >> > Although in terms of impact, US universities are still strongest, >> > it >> is >> > clear that other countries are emerging as centres of science by >> looking >> > at the total production (number of publications in the Web of >> Science). >> In >> > this ranking, Harvard University is number one.
But in the top 25 >> > we >> also >> > see universities from Canada (Toronto at 2, British Columbia at >> > 22), >> Japan >> > (Tokyo at 4, Kyoto at 11, Osaka at 25), Brazil (Sao Paulo at 8), >> United >> > Kingdom (Cambridge at 13, Oxford at 14, University College at 17), >> South >> > Korea (Seoul at 19), and China (Zhejiang at 20). >> > >> > >> > >> > The Leiden Ranking is the first global university ranking which has >> > published the details of its methodology and indicators. The >> indicators >> > are presented in combination with stability intervals, an advanced >> > statistical method to measure to what extent the differences in >> rankings >> > between universities are significant. >> > >> > >> > >> > If one wishes to compare the university citation impact in a global >> > context, it is best to take the percentage of papers in the top 10% >> > highly cited papers together with the calculation method >> > "fractional counting". This is the method which compares across >> > institutions and fields in the fairest way. >> > >> > >> > >> > The Leiden Ranking is based on data from the Web of Science. Data on >> > the arts and humanities are not included since these fields are not >> > well represented in the Web of Science. The Leiden Ranking >> > exclusively >> measures >> > the citation impact of research of the 500 largest universities in >> > the world. This prevents an arbitrary combination of performance >> > in education, valorization and research, a disadvantage of many >> > global university rankings. >> > >> > >> > >> > More information about the ranking results and its methodology: >> > www.leidenranking.com . >> > >> > >> > >> > With regards >> > >> > Paul Wouters >> > Professor of Scientometrics >> > Director Centre for Science and Technology Studies Leiden >> > University >> > >> > Visiting address: >> > Willem Einthoven Building >> > Wassenaarseweg 62A >> > 2333 AL Leiden >> > Mail address: P.O.
Box 905 >> > 2300 AX Leiden >> > T: +31 71 5273909 (secr.) >> > F: +31 71 5273911 >> > E: p.f.wouters at cwts.leidenuniv.nl >> > >> > CWTS home page: www.cwts.nl >> > Blog about Citation Cultures: http://citationculture.wordpress.com/ >> > >> > Research Dreams: www.researchdreams.nl >> > >> > >> > -- >> > This message has been scanned for viruses and dangerous content by >> > MailScanner, and is believed to be clean. >> > >> > >> >> > > > -- > Professor Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal > 48, 1012 CX Amsterdam > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; > http://www.leydesdorff.net/ > > -------------- next part -------------- A non-text attachment was scrubbed... Name: Leiden Rankings.GPrathap.19Dec11.xls Type: application/vnd.ms-excel Size: 138752 bytes Desc: Leiden Rankings.GPrathap.19Dec11.xls URL: From eugene.garfield at THOMSONREUTERS.COM Mon Dec 19 14:59:10 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Mon, 19 Dec 2011 19:59:10 +0000 Subject: Papers of interest to SIG Metrics readers Message-ID: -- -------------------------------------------------------------------------- TITLE: Evolution of the literature on clinical neurology in Spain, France, Italy and Germany over the period 2000-2009 (Article, Spanish) AUTHOR: Inigo, J; Iriarte, J SOURCE: REVISTA DE NEUROLOGIA 53 (10). NOV 16 2011.
p.591-599 REVISTA DE NEUROLOGIA, BARCELONA SEARCH TERM(S): GARFIELD E rauth; HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; REV NEUROLOGIA source_abbrev_20; GARFIELD E JAMA-J AM MED ASSOC 295:90 2006 KEYWORDS: Bibliometric analysis; Comparison between countries; Publications in neurology KEYWORDS+: SCIENCE-CITATION-INDEX; HIRSCHS H-INDEX; BIBLIOMETRIC INDICATORS; BIOMEDICAL PUBLICATIONS; SCIENTIFIC PUBLICATION; IMPACT FACTOR; JOURNALS; PRODUCTIVITY; QUALITY; WEB ABSTRACT: Aim. This study analyzes the productivity and visibility of Spanish publications in the area of clinical neurology in the period 2000-2009 and compares them with those for Italy, France and Germany. Materials and methods. We used the Web of Science database. The analysis (annual and five-year) was restricted to the citable documents (original articles, reviews and proceedings papers). The bibliometric indicators used were the number of publications, the citations received by publications and Hirsch's h-index. We also assessed the slope of the annual growth rate (b), the number of publications by language and the international collaboration. Results. In the period 2000-2009 there were 46,114 publications in clinical neurology, of which 6,998 were Spanish (h = 75), 11,629 Italian (h = 101), 9,745 French (h = 102) and 20,143 German (h = 124). The rate of increase in the total number of publications in Spain (b = 15) was lower than that observed in Italy (b = 65), Germany (b = 61) or France (b = 34). In the case of publications in English, the growth rate was higher for Spain (b = 37) than for France (b = 36) but lower than for Germany (b = 54) and Italy (b = 65). Conclusions. Although the total number of publications and the observed increase are lower in Spain compared to Italy, France or Germany, Spanish publications in clinical neurology show good trend indicators with regard to publications in English and international collaboration.
This improvement was associated with greater visibility as shown by the five-year analysis of citations received by Spanish publications. AUTHOR ADDRESS: J Iriarte, Univ Navarra, Clin Univ Navarra, Pio XII 36, E-31080 Pamplona, Navarra, Spain -------------------------------------------------------------------------- TITLE: Collaboration and Productivity in Scientific Synthesis (Article, English) AUTHOR: Hampton, SE; Parker, JN SOURCE: BIOSCIENCE 61 (11). NOV 2011. p.900-910 AMER INST BIOLOGICAL SCI, WASHINGTON SEARCH TERM(S): PRICE DJD rauth KEYWORDS: synthetic science; sociology of collaboration; interdisciplinary science; scientific metrics; research productivity and impact KEYWORDS+: KNOWLEDGE; SPECIALIZATION; SCIENCES; BIOLOGY; TRUST ABSTRACT: Scientific synthesis has transformed ecological research and presents opportunities for advancements across the sciences; to date, however, little is known about the antecedents of success in synthesis. Building on findings from 10 years of detailed research on social interactions in synthesis groups at the National Center for Ecological Analysis and Synthesis, we demonstrated with large-scale quantitative analyses that face-to-face interaction has been vital to success in synthesis groups, boosting the production of peer-reviewed publications. But it has been about more than just meeting; the importance of resident scientists at synthesis centers was also evident, in that including synthesis-center residents in geographically distributed working groups further increased productivity. Moreover, multi-institutional collaboration, normally detrimental to productivity, was positively associated with productivity in this stimulating environment. 
Finally, participation in synthesis groups significantly increased scientists' collaborative propensity and visibility, positively affecting scientific careers and potentially increasing the capacity of the scientific community to leverage synthesis for enhanced scientific understanding. AUTHOR ADDRESS: SE Hampton, Univ Calif Santa Barbara, Natl Ctr Ecol Anal & Synth, Santa Barbara, CA 93106 USA -------------------------------------------------------------------------- TITLE: Patently Impossible (Article, English) AUTHOR: Seymore, SB SOURCE: VANDERBILT LAW REVIEW 64 (5). OCT 2011. p.1491-1544,1489 VANDERBILT LAW REVIEW, NASHVILLE SEARCH TERM(S): LOCK S BRIT MED J 290:1560 1985 KEYWORDS+: INSUFFICIENT DISCLOSURE REJECTIONS; SCIENCE; LAW; RETHINKING; INNOVATION; ECONOMICS; INVENTION; OFFICE; SYSTEM; RIGHTS ABSTRACT: The quest to achieve the impossible fuels creativity, spawns new fields of inquiry, illuminates old ones, and extends the frontiers of knowledge. It is difficult, however, to obtain a patent for an invention which seems impossible, incredible, or conflicts with well- established scientific principles. The principal patentability hurdle is operability, which an inventor cannot overcome if there is reason to doubt that the invention can really achieve the intended result. Despite its laudable gatekeeping role, this Article identifies two problems with the law of operability. First, though objective in theory, the operability analysis rests on subjective credibility assessments. These credibility assessments can introduce a bias toward unpatentability, with inventions emerging from new, poorly understood, and paradigm-shifting technologies as well as those from fields with a poor track record of success as the most vulnerable. Second, what happens when the impossible becomes possible? History reveals that the Patent Office and the courts will continue to deny patents for a long time thereafter. 
This Article argues that the mishandling of seemingly impossible inventions vitiates the presumption of patentability, prevents the patent system from sitting at the cutting edge of technology, and frustrates the patent system's overarching goal to promote scientific and technological progress. In an effort to resolve these problems and fill a gap in patent scholarship, this Article offers a new framework for gauging the patentability of seemingly impossible inventions. Briefly, it contends that a more robust enforcement of patent law's enablement requirement can and should perform the gatekeeping role because it can resolve whether an invention works by weighing objective, technical factors. This approach would quickly reveal technical merit for inventions that really work or, alternatively, the fatal flaw for inventions that are truly impossible. Its implementation would not only eliminate the need for the operability requirement, but it would also streamline patent examination, improve the disclosure function of the patent system, promote scientific and technological progress, and ultimately foster innovation. AUTHOR ADDRESS: SB Seymore, Vanderbilt Univ, Nashville, TN 37235 USA From juan.campanario at UAH.ES Thu Dec 22 13:29:01 2011 From: juan.campanario at UAH.ES (Juan Miguel CAMPANARIO) Date: Thu, 22 Dec 2011 19:29:01 +0100 Subject: 2 new papers In-Reply-To: <992628B8C4204CCAB5890ACC0890BD11@Vig6371596> Message-ID: Dear James: I am interested in these papers. Please, could you send me a PDF copy of both? Regards JM At 04:12 03/01/2004, you wrote: >Administrative info for SIGMETRICS (for example unsubscribe): >http://web.utk.edu/~gwhitney/sigmetrics.html >Colleagues may be interested in two new papers. One is on my research on >structured abstracts, and one on new ways of making academic articles easier to >read. The abstracts are attached. > >Copies of both papers are available from me.
> >Season's greetings > >Jim > >James Hartley >School of Psychology >Keele University >Staffordshire >http://www.keele.ac.uk/psychology/people/hartleyjames/ ------------------------------------------------------------------- Juan Miguel CAMPANARIO (http://www2.uah.es/jmc) ------------------------------------------------------------------- From eugene.garfield at THOMSONREUTERS.COM Mon Dec 26 00:53:34 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Mon, 26 Dec 2011 05:53:34 +0000 Subject: paper of possible interest to SIG Metrics listserv Message-ID: TITLE: A substantial number of scientific publications originate from non-university hospitals (Article, English) AUTHOR: Fedder, J; Nielsen, GL; Petersen, LJ; Rasmussen, C; Lauszus, FF; Frost, L; Hornung, N; Lederballe, O; Andersen, JP SOURCE: DANISH MEDICAL BULLETIN 58 (11). NOV 2011. p.NIL_17-NIL_21 DANISH MEDICAL ASSOC, COPENHAGEN SEARCH TERM(S): GARFIELD E rauth; GARFIELD E JAMA-J AM MED ASSOC 295:90 2006 KEYWORDS+: CITATION; IMPACT ABSTRACT: INTRODUCTION: As we found no recent published reports on the amount and kind of research published from Danish hospitals without university affiliation, we found it relevant to conduct a bibliometric survey disclosing these research activities. MATERIAL AND METHODS: We retrieved all scientific papers published in the period 2000-2009 emanating from all seven Danish non-university hospitals in two regions, comprising 1.8 million inhabitants, and which were registered in a minimum of one of the three databases: PubMed MEDLINE, Thomson Reuters Web of Science and Elsevier's Scopus. RESULTS: In 878 of 1,252 papers, the first and/or last author was affiliated to a non-university hospital. Original papers made up 69% of these publications versus 86% of publications with university affiliation in first or last place.
Case reports and reviews most frequently had authors from regional hospitals as first and/or last authors. The total number of publications from regional hospitals increased by 48% over the 10-year period. Publications were cited more often if the first or last author was from a university hospital and even more so if they were affiliated to foreign institutions. Cardiology, gynaecology and obstetrics, and environmental medicine were the three specialities with the largest number of regional hospital publications. CONCLUSION: A substantial number of scientific publications originate from non-university hospitals. Almost two thirds of the publications were original research published in international journals. Variations between specialities may reflect local conditions. AUTHOR ADDRESS: J Fedder, Horsens Hosp, Sci Unit, Reprod Biol Lab, Sundvej 30, DK-8700 Horsens, Denmark ------------------------------------------------------------------------- TITLE: An Internet measure of the value of citations (Article, English) AUTHOR: Szymanski, BK; de la Rosa, JL; Krishnamoorthy, M SOURCE: INFORMATION SCIENCES 185 (1). FEB 15 2012. p.18-31 ELSEVIER SCIENCE INC, NEW YORK SEARCH TERM: HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; KEYWORDS: Bibliometrics; Scientometrics; Citation analysis; Author ranking; Impact factor; PageRank KEYWORDS+: H-INDEX; PUBLICATION; ALGORITHM; PAGERANK ABSTRACT: A new method for computing the value of citations is introduced and compared with the PageRank algorithm for author ranking. In our proposed approach, the value of each publication is expressed in CENTs (sCientific currENcy Tokens). The publication's value is then divided by the number of citations made by that publication to yield a value for each citation. As citations are the acknowledgements of the work by authors other than oneself (indicating that it has been useful), self-citations count as zero in acknowledged citation value. 
Circular citations, a generalized type of self-citation, are considered to have a reduced acknowledged citation value. Finally, we propose a modification of the h-index to define it as the largest integer such that the i-th publication (on the list of publications sorted by their value in CENTs) is worth more than i CENTs. This new index, termed the i-index or i(2) in short, appears to be a more precise measure of the impact of publications and their authors' productivity than the h-index. (C) 2011 Elsevier Inc. All rights reserved. AUTHOR ADDRESS: JL de la Rosa, Univ Girona, EASY Innovat Ctr, Campus Montilivi, E-17071 Catalonia, EU, Spain ------------------------------------------------------------------------- TITLE: Temporal Effects in the Growth of Networks (Article, English) AUTHOR: Medo, M; Cimini, G; Gualdi, S SOURCE: PHYSICAL REVIEW LETTERS 107 (23). DEC 1 2011. p.NIL_16-NIL_19 AMER PHYSICAL SOC, COLLEGE PK SEARCH TERM(S): PRICE DJD rauth; PHYS REV LETT source_abbrev_20 KEYWORDS+: COMPLEX NETWORKS; PREFERENTIAL ATTACHMENT; EVOLVING NETWORKS; EVOLUTION; CONNECTIVITY; ADVANTAGE ABSTRACT: We show that to explain the growth of the citation network by preferential attachment (PA), one has to accept that individual nodes exhibit heterogeneous fitness values that decay with time. While previous PA-based models assumed either heterogeneity or decay in isolation, we propose a simple analytically treatable model that combines these two factors. Depending on the input assumptions, the resulting degree distribution shows an exponential, log-normal or power-law decay, which makes the model an apt candidate for modeling a wide range of real systems. AUTHOR ADDRESS: M Medo, Univ Fribourg, Dept Phys, CH-1700 Fribourg, Switzerland -------------------------------------------------------------------------- TITLE: Landslides: A state-of-the art on the current position in the landslide research community (Article, English) AUTHOR: Mikos, M SOURCE: LANDSLIDES 8 (4). DEC 2011.
p.541-551 SPRINGER HEIDELBERG, HEIDELBERG SEARCH TERM(S): HIRSCH JE P NATL ACAD SCI USA 102:16569 2005; PUDOVKIN AI J AM SOC INF SCI TEC 53:1113 2002 KEYWORDS: Citation analysis; h-Index; Immediacy index; Impact factor; Landslides; Journal relatedness; Science citation Index; SCOPUS KEYWORDS+: SUSCEPTIBILITY ANALYSIS; CHARACTERISTIC CURVE; FUZZY- LOGIC; AREA; MODEL; WATER; GIS; BUILDINGS; RESERVOIR; VELOCITY ABSTRACT: The international journal Landslides (ISSN 1612-510X), launched in 2004 and published by Springer Verlag, soon gained international recognition as the only specialized scientific journal in the world dedicated to different aspects of landslides, and as one of the leading world journals in the field of geological engineering. After 7 years, seven published volumes with 28 issues and 290 published papers on 2,794 pages, there is time to make a comparison with other related journals that also cover the field of landslide risk mitigation. The critical review of these seven publishing years was done using ISI Journal Citation Reports produced by Thomson Reuters, and available scientometric data from the ISI Web of Knowledge and SCOPUS. The data presented in this paper and the analysis shown may help the Editorial Board to further improve the journal into the direction of a high quality scientific journal with even higher impact on the international research community in the field of landslide risk mitigation. AUTHOR ADDRESS: M Mikos, Univ Ljubljana, Fac Civil & Geodet Engn, Jamova Cesta 2, SI-1000 Ljubljana, Slovenia -------------------------------------------------------------------------- TITLE: The Matthew effect in environmental science publication: A bibliometric analysis of chemical substances in journal articles (Article, English) AUTHOR: Grandjean, P; Eriksen, ML; Ellegaard, O; Wallin, JA SOURCE: ENVIRONMENTAL HEALTH 10. NOV 10 2011. 
p.NIL_1-NIL_8 BIOMED CENTRAL LTD, LONDON SEARCH TERM(S): MERTON RK rauth; MERTON RK SCIENCE 159:56 1968; BIBLIOMETR* item_title; JOURNAL item_title KEYWORDS+: HEALTH ABSTRACT: Background: While environmental research addresses scientific questions of possible societal relevance, it is unclear to what degree research focuses on environmental chemicals in need of documentation for risk assessment purposes. Methods: In a bibliometric analysis, we used SciFinder to extract Chemical Abstract Service (CAS) numbers for chemicals addressed by publications in the 78 major environmental science journals during 2000-2009. The Web of Science was used to conduct title searches to determine long-term trends for prominent substances and substances considered in need of research attention. Results: The 119,636 journal articles found had 760,056 CAS number links during 2000-2009. The top-20 environmental chemicals consisted of metals, (chlorinated) biphenyls, polyaromatic hydrocarbons, benzene, and ethanol, and contributed 12% toward the total number of links. Each of the top-20 substances was covered by 2,000-10,000 articles during the decade. The numbers for the 10-year period were similar to the total numbers of pre-2000 articles on the same chemicals. However, substances considered a high priority from a regulatory viewpoint, due to lack of documentation, showed very low publication rates. The persistence in the scientific literature of the top-20 chemicals was only weakly related to their publication in journals with a high impact factor, but some substances achieved high citation rates. Conclusions: The persistence of some environmental chemicals in the scientific literature may be due to a 'Matthew' principle of maintaining prominence for the very reason of having been well researched.
Such bias detracts from the societal needs for documentation on less well known environmental hazards, and it may also impact negatively on the potentials for innovation and discovery in research. AUTHOR ADDRESS: P Grandjean, Univ So Denmark, Dept Environm Med, Odense, Denmark ---------------------------------------------------------------- TITLE: Growth and trends in publications about abdominal wall hernias and the impact of a specific journal on herniology: a bibliometric analysis (Review, English) AUTHOR: Kulacoglu, H; Oztuna, D SOURCE: HERNIA 15 (6). DEC 2011. p.615-628 SPRINGER, NEW YORK SEARCH TERM(S): BIBLIOMETR* GARFIELD E SCIENTIST 10:13 1996 KEYWORDS: Hernia; Publication; Journal; Bibliometric analysis KEYWORDS+: RANDOMIZED CONTROLLED-TRIALS; SUB-SPECIALIZATION; SURGICAL SUBSPECIALIZATION; GENERAL-SURGERY; UNITED- STATES; REPAIR; MEDLINE; MEDICINE; CANCER; PRODUCTIVITY ABSTRACT: The aim of this systematic review was to determine the exact volume and growth pattern of articles on abdominal wall hernias, in particular the effect of the journal Hernia on publications about hernias. A PubMed search was performed for every year between 1965 and 2010, using the title words "inguinal hernia," "incisional hernia," and "umbilical hernia." Then, two consecutive 10-year periods were chosen for a systematic PubMed search, before and after 2001-the year in which Hernia began to be indexed in PubMed. The main keywords used were as follows: "inguinal hernia" "incisional hernia" "umbilical hernia" "mesh" "laparoscopic" and "experimental." The number of all articles indexed in PubMed increased 1.6-fold between the periods 1991-2000 and 2001-2010. The number of articles with the title word "inguinal hernia" increased 1.7-fold, whereas the rises for incisional and umbilical hernias were more prominent: 3.9- and 2.6-fold. Article titles with the combined keywords "hernia and mesh" and "hernia and laparoscopic" increased 2.8- and 2.4-fold. 
The most striking combined search was for "umbilical hernia and mesh" with a 20.5-fold rise. The percentage of articles published in the journal Hernia among all articles in all 25 selected journals, including Hernia was 30% on average. Hernia, Surgical Endoscopy and the British Journal of Surgery were the leading journals for publications for inguinal hernia in the last decade. Growth in hernia papers is greater than the overall growth in PubMed. Articles on incisional hernia increased faster than did those on inguinal and umbilical hernias. The establishment and indexing of Hernia decreased the proportion of hernia publications in other journals. The core journals for herniology are Hernia, Surgical Endoscopy, and the British Journal of Surgery. AUTHOR ADDRESS: H Kulacoglu, Diskapi Yildirim Beyazit Teaching & Res Hosp, Dept Surg, 1 Cadde 109-5, TR-06490 Ankara, Turkey -------------------------------------------------------------------------- TITLE: The Limits of Sharing: An Ethical Analysis of the Arguments For and Against the Sharing of Databases and Material Banks (Article, English) AUTHOR: Smith, E SOURCE: ACCOUNTABILITY IN RESEARCH-POLICIES AND QUALITY ASSURANCE 18 (6). 2011. p.357-381 TAYLOR & FRANCIS LTD, ABINGDON SEARCH TERM(S): MERTON RK rauth KEYWORDS: databases; limitations; material banks; sharing; valorization KEYWORDS+: INTELLECTUAL PROPERTY; BIOBANK RESEARCH; GENETIC PRIVACY; CONSENT; SCIENCE; SOCIETY; COMMERCIALIZATION; PATENTS; HEALTH ABSTRACT: In this article, I study the challenges that make database and material bank sharing difficult for many researchers. I assert that if sharing is prima facie ethical (a view that I will defend), then any practices that limit sharing require justification. I argue that: 1) data and material sharing is ethical for many stakeholders; 2) there are, however, certain reasonable limits to sharing; and 3) the rationale and validity of arguments for any limitations to sharing must be made transparent. 
I conclude by providing general recommendations for how to ethically share databases and material banks. AUTHOR ADDRESS: E Smith, Univ Montreal, Dept Med Sociale & Prevent, Programmes Bioeth, Sch Publ Hlth, CP 6128,Succ Ctr Ville, Montreal, PQ H3C 3J7, Canada ------------------------------------------------------------------------- TITLE: Identifying unintended consequences of quality indicators: a qualitative study (Article, English) AUTHOR: Lester, HE; Hannon, KL; Campbell, SM SOURCE: BMJ QUALITY & SAFETY 20 (12). DEC 2011. p.1057-1061 B M J PUBLISHING GROUP, LONDON SEARCH TERM(S): MERTON RK rauth KEYWORDS+: PAY-FOR-PERFORMANCE; OUTCOMES FRAMEWORK; UNITED-KINGDOM; CARE; EXPERIENCE; DELIVERY; ENGLAND ABSTRACT: Background: For the first 5 years of the UK primary care pay for performance scheme, the Quality and Outcomes Framework (QOF), quality indicators were introduced without piloting. However, in 2009, potential new indicators were piloted in a nationally representative sample of practices. This paper describes an in-depth exploration of family physician, nurse and other primary-care practice staff views of the value of piloting with a particular focus on unintended consequences of 13 potential new QOF indicators. Method: Fifty-seven family-practice professionals were interviewed in 24 representative practices across England. Results: Almost all interviewees emphasised the value of piloting in terms of an opportunity to identify unintended consequences of potential QOF indicators in 'real world' settings with staff who deliver day-to-day care to patients. Four particular types of unintended consequences were identified: measure fixation, tunnel vision, misinterpretation and potential gaming. 'Measure fixation,' an inappropriate attention on isolated aspects of care, appeared to be the key unintended consequence. 
In particular, if the palliative care indicator had been introduced without piloting, this might have incentivised poorer care in a minority of practices with potential harm to vulnerable patients. Conclusions: It is important to identify concerns and experiences about unintended consequences of indicators at an early stage when there is time to remove or adapt problem indicators. Since the UK government currently spends over £1 billion each year on QOF, the £150,000 spent on each piloting cohort (0.0005% of the total QOF budget) appears to be good value for money. AUTHOR ADDRESS: HE Lester, Univ Manchester, NIHR Sch Primary Care Res, Primary Care Grp, 7th Floor,Williamson Bldg,Oxford Rd, Manchester M13 9PL, Lancs, England ------------------------------------------------------------- TITLE: Progressive Trends and Impact of the Journal of Career Development: A Citation Analysis (Article, English) AUTHOR: Chaichanasakul, A; He, YH; Chen, HH; Allen, GEK; Khairallah, TS; Ramos, K SOURCE: JOURNAL OF CAREER DEVELOPMENT 38 (6). DEC 2011. p.456-478 SAGE PUBLICATIONS INC, THOUSAND OAKS SEARCH TERM(S): GARFIELD E rauth; SMITH LC LIBR TRENDS 30:83 1981; CITATION item_title; CITATION ANALYS* item_title; CITATION* item_title; JOURNAL item_title; GARFIELD E SCIENTOMETRICS 1:359 1979; J CAREER DEV source_abbrev_20 KEYWORDS: citation analysis; trend analysis; career development; Journal of Career Development; journal impact KEYWORDS+: PSYCHOLOGY JOURNALS; COUNSELING-PSYCHOLOGY; AMERICAN; RATINGS ABSTRACT: As one of the four premier journals in vocational psychology, the Journal of Career Development (JCD) has published over 830 articles over the past three decades. This study examined the performance of JCD through a citation analysis and provided evaluative data for scholars publishing in the field of vocational psychology. Articles published by JCD between 1986 and 2007 were analyzed.
Additional data pertaining to JCD's performance were also collected through the Journal Citation Reports. The analyses revealed a strong and growing impact of articles published by JCD on researchers and professionals. Specifically, results provided (a) the frequency and trends of JCD's citations over time, (b) the journals where JCD's articles were most often cited, (c) the most frequently cited JCD articles, (d) JCD's impact data in applied psychology, and (e) JCD's citation information among the other three vocational psychology journals. Implications of the results for the journal are discussed. AUTHOR ADDRESS: A Chaichanasakul, Univ Missouri, Counseling Psychol Program, 16 Hill Hall, Columbia, MO 65211 USA - From jonathan at LEVITT.NET Mon Dec 26 03:46:57 2011 From: jonathan at LEVITT.NET (Jonathan Levitt) Date: Mon, 26 Dec 2011 00:46:57 -0800 Subject: Quantitative Doctoral Forum in England In-Reply-To: Message-ID: Call for submissions & participation - Quantitative Research in Information Science 12-13 April 2012, University of Wolverhampton, UK > >Website: http://www.asis.org/Chapters/europe/ >We are very pleased to announce a Doctoral Forum, specialising in quantitative research in Information Science, to be held on 12-13 April 2012 in England at the University of Wolverhampton. >This Doctoral Forum will begin with short introductory presentations by Prof Mike Thelwall and Dr Jonathan Levitt on the application of quantitative methods to Information Science. However, its main emphasis is on the students presenting and receiving feedback on their research and on meeting other doctoral students. Doctoral students who are either using, or considering using, quantitative methods in at least part of their doctoral research are encouraged to apply. >The venue, at the University of Wolverhampton, is located in Wolverhampton City Centre and is very close to Wolverhampton train station.
Wolverhampton is a thirty-minute train journey from Birmingham International airport. The event finishes on Friday afternoon, enabling participants to see something of England over the weekend; from Wolverhampton, Wales can be reached by train in thirty minutes and London can be reached by train in two hours. >This event, endorsed by the European Chapter of the American Society for Information Science and Technology (ASIST), is the first of two events that the European Chapter is endorsing, in celebration of the 75th anniversary of ASIST. The European Chapter is also endorsing Libraries In The Digital Age (LIDA), to be held on 18-22 June 2012 in Zadar, Croatia. >This Doctoral Forum is open to all interested parties, including those who don't present. If you wish to present your research at the Doctoral Forum please submit: a description of your research project and how you are using or are proposing to use quantitative methods (totalling a maximum of 150 words) and key related references. If you wish to attend the event and not present, please submit a description of your relevant experiences and reasons for wanting to attend the conference (totalling a maximum of 150 words). >Submissions need to be emailed to the Conference organiser >Dr. Jonathan Levitt (j.levitt at lboro.ac.uk) >and the Program manager (Prof. Mike Thelwall, m.thelwall at wlv.ac.uk) by midnight UK time on Sunday, February 12, 2012. >If you have any queries, please email Dr. Levitt (j.levitt at lboro.ac.uk) and Prof. Thelwall (m.thelwall at wlv.ac.uk). > >The cost of the event is £50 per person, to cover the cost of four coffee/tea refreshment breaks, two sandwich lunch breaks and one buffet dinner. >Three low-price hotels are located within easy walking distance of the venue, MC building, City Campus, University of Wolverhampton (http://wiki.worldflicks.org/mc_building.html).
>LateRooms (http://www.laterooms.com) and Travelodge (http://www2.travelodge.co.uk) provide descriptions of nearby hotels and the option of reserving accommodation. > Website: http://www.asis.org/Chapters/europe/ > From eugene.garfield at THOMSONREUTERS.COM Tue Dec 27 14:10:25 2011 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Tue, 27 Dec 2011 19:10:25 +0000 Subject: papers related to SIG metrics interests Message-ID: TITLE: Scientific Output, Social Infrastructure and Science Policies: the Case of Small and Medium-sized Countries (Article, English) AUTHOR: Horta, H; Veloso, FM SOURCE: VALUE-ADDED PARTNERING AND INNOVATION IN A CHANGING WORLD. 2009. p.42-67 PURDUE UNIV PRESS, W LAFAYETTE SEARCH TERM(S): PRICE DJD rauth; ZUCKERMAN H rauth; SMALL HG SOC STUD SCI 8:327 1978; ZUCKERMA.H MINERVA 9:66 1971 KEYWORDS+: PUBLICATIONS; IMPACT; COLLABORATION; INDICATORS; PATTERNS; NATIONS; SYSTEM AUTHOR ADDRESS: H Horta, Univ Tecn Lisbon, Ctr Innovat Technol & Policy Res IN, Inst Super Tecn, P-1100 Lisbon, Portugal ------------------------------------------------------------------------------ TITLE: Gender Trends of Urology Manuscript Authors in the United States: A 35-Year Progression (Article, English) AUTHOR: Weiss, DA; Kovshilovskaya, B; Breyer, BN SOURCE: JOURNAL OF UROLOGY 187 (1). JAN 2012. p.253-258 ELSEVIER SCIENCE INC, NEW YORK SEARCH TERM(S): HIRSCH JE P NATL ACAD SCI USA 102:16569 2005 KEYWORDS: urology; authorship; women; manuscripts; medical; periodicals as topic KEYWORDS+: FEMALE AUTHORSHIP; 3 DECADES; GAP ABSTRACT: Purpose: The presence of women in urology has gradually increased in the last 35 years with an accelerated rate in the last decade. We evaluated manuscript authorship trends by gender. Manuscript authorship is a metric that has been used as a marker of academic productivity.
We hypothesized that the number of first and last author publications by women has increased proportionately to the number of women in the field during the last 35 years. Materials and Methods: We performed a bibliometric study to examine authorship gender in The Journal of Urology (R) and Urology (R). We reviewed all original articles published from American institutions in 1974, 1979, 1984, 1989, 1994, 1999, 2004 and 2009. Results: Of the 8,313 articles reviewed, 5,461 were from American institutions, including 97.5% for which we determined author gender. There were 767 articles with female authors, including 440 first and 327 last authors. First and last female authorship increased from 2.7% of all authors in 1979 to 26.5% in 2009 (test for trend p < 0.001). This authorship rate surpasses the rate of growth of women in urology, which increased from 0.24% in 1975 to 6.2% in 2008. Conclusions: Based on authorship gender analysis, women urologists produce manuscripts at a rate that exceeds their number in the field. Findings show that women in urology are productive, active members of the academic community. AUTHOR ADDRESS: BN Breyer, Univ Calif San Francisco, Dept Urol, 400 Parnassus Ave,A610, San Francisco, CA 94143 USA ------------------------------------------------------------------------------------ Eugene Garfield, PhD. email: garfield at codex.cis.upenn.edu home page: www.eugenegarfield.org Tel: 610-525-8729 Fax: 610-560-4749 Chairman Emeritus, ThomsonReuters Scientific (formerly ISI) 1500 Spring Garden Street, Philadelphia, PA 19130-4067 Editor Emeritus, The Scientist LLC. www.the-scientist.com 121 W 27th Street, Suite 604, New York, NY 10001 Past President, American Society for Information Science and Technology (ASIS&T) www.asist.org
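The CENT-based i-index circulated earlier in this digest (Szymanski, de la Rosa & Krishnamoorthy, Information Sciences 185) is defined concretely enough to sketch: it is the largest integer i such that the i-th publication, on the list sorted by value in CENTs, is worth more than i CENTs. The sketch below is a reading of the abstract only, not the authors' code; the function name is mine, and the strict inequality ("worth more than i CENTs") is taken literally from the abstract's wording.

```python
def i_index(cent_values):
    """i-index per the abstract's wording: the largest integer i such
    that the i-th publication (publications sorted by their value in
    CENTs, descending) is worth more than i CENTs."""
    values = sorted(cent_values, reverse=True)
    best = 0
    for rank, value in enumerate(values, start=1):
        if value > rank:   # strictly "worth more than i CENTs"
            best = rank
        else:
            break          # values are sorted, so no later rank qualifies
    return best
```

For comparison, the ordinary h-index counts citations with a non-strict threshold; the strict comparison here is an assumption drawn from the abstract's phrasing, and the paper's exact tie handling may differ.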
From priem at EMAIL.UNC.EDU Wed Dec 28 18:00:25 2011 From: priem at EMAIL.UNC.EDU (Jason Priem) Date: Wed, 28 Dec 2011 18:00:25 -0500 Subject: Twitter predicts citation In-Reply-To: <1654640A36FE964C936514B2FD0B2CB407CC55@EAGF-ERFPMBX42.ERF.thomson.com> Message-ID: Highly-tweeted articles in JMIR are 11x more likely to later be highly-cited articles. Of course, this is a very specific sample, but it's a provocative finding. Eysenbach, G. (2011). Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact. Journal of Medical Internet Research, 13(4). doi:10.2196/jmir.2012 -- Jason Priem UNC Royster Scholar School of Information and Library Science University of North Carolina at Chapel Hill From loet at LEYDESDORFF.NET Fri Dec 30 02:50:02 2011 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 30 Dec 2011 08:50:02 +0100 Subject: recent papers on visualization and impact/excellence measurement Message-ID: Visualization and Analysis of Frames in Collections of Messages: Content Analysis and the Measurement of Meaning Esther Vlieger & Loet Leydesdorff A step-by-step introduction is provided on how to generate a semantic map from a collection of messages (full texts, paragraphs or statements) using freely available software and/or SPSS for the relevant statistics and the visualization. The techniques are discussed in the various theoretical contexts of (i) linguistics (e.g., Latent Semantic Analysis), (ii) sociocybernetics and social systems theory (e.g., the communication of meaning), and (iii) communication studies (e.g., framing and agenda-setting). We distinguish between the communication of information in the network space (social network analysis) and the communication of meaning in the vector space. The vector space can be considered an architecture generated by the network of relations in the network space; words are then not only related, but also positioned.
These positions are expected rather than observed and therefore one can communicate meaning. Knowledge can be generated when these meanings can recursively be communicated and therefore also further codified. Forthcoming in: Manuel Mora, Ovsei Gelman, Annette Steenkamp, and Maresh S. Raisinghani (Eds.), Research Methodologies, Innovations and Philosophies in Systems Engineering and Information Systems, Hershey PA: Information Science Reference, 2012, pp. 322-340, doi: 10.4018/978-1-4666-0179-6.ch16. _____ Percentile Ranks and the Integrated Impact Indicator (I3) Loet Leydesdorff & Lutz Bornmann We tested Rousseau's (in press) recent proposal to define percentile classes in the case of the Integrated Impact Indicator (I3) so that the largest number in a set always belongs to the highest (100th) percentile rank class. In the case of a set of nine uncited papers and one cited paper, however, the uncited papers would all be placed in the 90th percentile rank. A lowly-cited document set would thus be advantaged when compared with a highly-cited one. Notwithstanding our reservations, we extended the program for computing I3 in Web-of-Science data (at this http URL) with this option; the quantiles without a correction are now the default. As Rousseau mentions, excellence indicators (e.g., the top-10%) can be considered as special cases of I3: only two percentile rank classes are distinguished for the evaluation. Both excellence and impact indicators can be tested statistically using the z-test for independent proportions. A shorter version of this paper is forthcoming as a Letter to the Editor of the Journal of the American Society for Information Science and Technology (in press). ** apologies for cross-postings _____ Loet Leydesdorff Professor, University of Amsterdam Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.
+31-20-525 6598; fax: +31-842239111 loet at leydesdorff.net ; http://www.leydesdorff.net/ Visiting Professor, ISTIC, Beijing; Honorary Fellow, SPRU, University of Sussex; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
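The percentile behaviour that Leydesdorff & Bornmann describe above, and the z-test for independent proportions they suggest for comparing excellence/impact indicators, can both be sketched in a few lines. This is a reading of the abstract, not their published program: the counting rule below (an item's percentile is the share of items with a value less than or equal to its own, so the maximum always lands in the 100th class) reproduces their nine-uncited-papers example, but Rousseau's exact formula may differ, and the z-test is the standard pooled-variance formulation rather than anything taken from the paper.

```python
from math import sqrt

def percentile_ranks(citations):
    """Percentile rank of each item: the percentage of items in the set
    whose value is less than or equal to the item's own value. Under this
    counting rule the largest value always falls in the 100th class."""
    n = len(citations)
    return [100.0 * sum(1 for c in citations if c <= x) / n for x in citations]

def z_independent_proportions(x1, n1, x2, n2):
    """Standard two-sample z-test for independent proportions with a
    pooled variance estimate, e.g. x = papers in the top-10% class and
    n = size of each document set being compared."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    return (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
```

With nine uncited papers and one cited paper, percentile_ranks([0]*9 + [1]) places the nine uncited papers at 90.0 and the cited one at 100.0, which is exactly the situation the abstract raises reservations about.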