FW: [SIGMETRICS] Scientific Certainty

Loet Leydesdorff loet at LEYDESDORFF.NET
Fri Dec 2 13:56:36 EST 2011


Dear Stephen, 

 

As you know, I took my first degree in biochemistry. The University of Amsterdam had a leading laboratory at the time. Informally, we were told that it was impossible to replicate others' results without contacting them first, because most researchers deliberately misstated details in the legends. People wanted others to contact them first in order to establish the hierarchy/trust.

 

Best, 

Loet

 

  _____  

Loet Leydesdorff 

Professor, University of Amsterdam
Amsterdam School of Communications Research (ASCoR), 
Kloveniersburgwal 48, 1012 CX Amsterdam. 
Tel.: +31-20- 525 6598; fax: +31-842239111
loet at leydesdorff.net ; http://www.leydesdorff.net/ ; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

 

From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
Sent: Friday, December 02, 2011 6:33 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: [SIGMETRICS] Scientific Certainty

 

Two Wall Street Journal articles on scientific certainty.  The first discusses it with respect to its economics.

 

Stephen J Bensman, Ph.D.

LSU Libraries

Louisiana State University

Baton Rouge, LA 70803

USA

 

The Wall Street Journal

 

Scientists' Elusive Goal: Reproducing Study Results 

 

 

By GAUTAM NAIK 

Two years ago, a group of Boston researchers published a study describing how they had destroyed cancer tumors by targeting a protein called STK33. Scientists at biotechnology firm Amgen Inc. quickly pounced on the idea and assigned two dozen researchers to try to repeat the experiment with a goal of turning the findings into a drug. 


WSJ's Gautam Naik has details of challenges scientists face in reproducing claims made by medical journals. Photo: Sandy Huffaker/The New York Times

It proved to be a waste of time and money. After six months of intensive lab work, Amgen found it couldn't replicate the results and scrapped the project.

"I was disappointed but not surprised," says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. "More often than not, we are unable to reproduce findings" published by researchers in journals.

This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced. 


Bayer 

Researchers at Bayer's labs often find their experiments fail to match claims made in the scientific literature. 

"It's a very serious and disturbing issue because it obviously misleads people" who implicitly trust findings published in a respected peer-reviewed journal, says Bruce Alberts, editor of Science. On Friday, the U.S. journal is devoting a large chunk of its Dec. 2 issue to the problem of scientific replication.

Reproducibility is the foundation of all modern research, the standard by which scientific claims are evaluated. In the U.S. alone, biomedical research is a $100 billion-a-year enterprise. So when published medical findings can't be validated by others, there are major consequences.

Drug manufacturers rely heavily on early-stage academic research and can waste millions of dollars on products if the original results are later shown to be unreliable. Patients may enroll in clinical trials based on conflicting data, and sometimes see no benefits or suffer harmful side effects. 

There is also a more insidious and pervasive problem: a preference for positive results.


Unlike pharmaceutical companies, academic researchers rarely conduct experiments in a "blinded" manner. This makes it easier to cherry-pick statistical findings that support a positive result. In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists needs more successful experiments to its name, not failed ones. An explosion of scientific and academic journals has added to the pressure. 

When it comes to results that can't be replicated, Dr. Alberts says the increasing intricacy of experiments may be largely to blame. "It has to do with the complexity of biology and the fact that methods [used in labs] are getting more sophisticated," he says.

It is hard to assess whether the reproducibility problem has been getting worse over the years; there are some signs suggesting it could be. For example, the success rate of Phase 2 human trials—where a drug's efficacy is measured—fell to 18% in 2008-2010 from 28% in 2006-2007, according to a global analysis published in the journal Nature Reviews in May. 

"Lack of reproducibility is one element in the decline in Phase 2 success," says Khusru Asadullah, a Bayer AG research executive. 

In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature. 

The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet, even the data in the most prestigious journals couldn't be confirmed, Bayer said. 


In 2008, Pfizer Inc. made a high-profile bet, potentially worth more than $725 million, that it could turn a 25-year-old Russian cold medicine into an effective drug for Alzheimer's disease. 

The idea was promising. Published by the journal Lancet, data from researchers at Baylor College of Medicine and elsewhere suggested that the drug, an antihistamine called Dimebon, could improve symptoms in Alzheimer's patients. Later findings, presented by researchers at the University of California Los Angeles at a Chicago conference, showed that the drug appeared to prevent symptoms from worsening for up to 18 months.

"Statistically, the studies were very robust," says David Hung, chief executive officer of Medivation Inc., a San Francisco biotech firm that sponsored both studies.

In 2010, Medivation along with Pfizer released data from their own clinical trial for Dimebon, involving nearly 600 patients with mild to moderate Alzheimer's disease symptoms. The companies said they were unable to reproduce the Lancet results. They also indicated they had found no statistically significant difference between patients on the drug versus the inactive placebo.

Pfizer and Medivation have just completed a one-year study of Dimebon in over 1,000 patients, another effort to see if the drug could be a potential treatment for Alzheimer's. They expect to announce the results in coming months. 

Scientists offer a few theories as to why duplicative results may be so elusive. Two different labs can use slightly different equipment or materials, leading to divergent results. The more variables there are in an experiment, the more likely it is that small, unintended errors will pile up and swing a lab's conclusions one way or the other. And, of course, data that have been rigged, invented or fraudulently altered won't stand up to future scrutiny.

According to a report published by the U.K.'s Royal Society, there were 7.1 million researchers working globally across all scientific fields—academic and corporate—in 2007, a 25% increase from five years earlier. 

From the Archives

Mistakes in Scientific Studies Surge, 8/10/2011

"Among the more obvious yet unquantifiable reasons, there is immense competition among laboratories and a pressure to publish," wrote Dr. Asadullah and others from Bayer, in their September paper. "There is also a bias toward publishing positive results, as it is easier to get positive results accepted in good journals." 

Science publications are under pressure, too. The number of research journals has jumped 23% between 2001 and 2010, according to Elsevier, which has analyzed the data. Their proliferation has ratcheted up competitive pressure on even elite journals, which can generate buzz by publishing splashy papers, typically containing positive findings, to meet the demands of a 24-hour news cycle. 

Dr. Alberts of Science acknowledges that journals increasingly have to strike a balance between publishing studies "with broad appeal," while making sure they aren't hyped. 

Drugmakers also have a penchant for positive results. A 2008 study published in the journal PLoS Medicine by researchers at the University of California San Francisco looked at data from 33 new drug applications submitted between 2001 and 2002 to the U.S. Food and Drug Administration. The agency requires drug companies to provide all data from clinical trials. However, the authors found that a quarter of the trial data—most of it unfavorable—never got published because the companies never submitted it to journals. 

The upshot: doctors who end up prescribing the FDA-approved drugs often don't get to see the unfavorable data.

"I would say that selectively publishing data is unethical because there are human subjects involved," says Lisa Bero of UCSF and co-author of the PLoS Medicine study.

In an email statement, a spokeswoman for the FDA said the agency considers all data it is given when reviewing a drug but "does not have the authority to control what a company chooses to publish."

Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite it as a key reason why they are less willing to finance early-stage projects. Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can't be reproduced, says Bruce Booth, a partner in Atlas' Life Sciences group. 

There have been several prominent cases of nonreproducibility in recent months. For example, in September, the journal Science partially retracted a 2009 paper linking a virus to chronic fatigue syndrome because several labs couldn't replicate the published results. The partial retraction came after two of the 13 study authors went back to the blood samples they analyzed from chronic-fatigue patients and found they were contaminated. 

Some studies can't be redone for a more prosaic reason: the authors won't make all their raw data available to rival scientists.

John Ioannidis of Stanford University recently attempted to reproduce the findings of 18 papers published in the respected journal Nature Genetics. He noted that 16 of these papers stated that the underlying "gene expression" data for the studies were publicly available. 

But the supplied data apparently weren't detailed enough, and results from 16 of the 18 major papers couldn't fully be reproduced by Dr. Ioannidis and his colleagues. "We have to take it [on faith] that the findings are OK," said Dr. Ioannidis, an epidemiologist who studies the credibility of medical research. 

Veronique Kiermer, an editor at Nature, says she agrees with Dr. Ioannidis' conclusions, noting that the findings have prompted the journal to be more cautious when publishing large-scale genome analyses. 

When companies trying to find new drugs come up against the nonreproducibility problem, the repercussions can be significant.

A few years ago, several groups of scientists began to seek out new cancer drugs by targeting a protein called KRAS. The KRAS protein transmits signals received on the outside of a cell to its interior and is therefore crucial for regulating cell growth. But when certain mutations occur, the signaling can become continuous. That triggers excess growth such as tumors. 

The mutated form of KRAS is believed to be responsible for more than 60% of pancreatic cancers and half of colorectal cancers. It has also been implicated in the growth of tumors in many other organs, such as the lung. 

So scientists have been especially keen to impede KRAS and, thus, stop the constant signaling that leads to tumor growth.

In 2008, researchers at Harvard Medical School used cell-culture experiments to show that by inhibiting another protein, STK33, they could prevent the growth of tumor cell lines driven by the malfunctioning KRAS.

The finding galvanized researchers at Amgen, who first heard about the experiments at a scientific conference. "Everyone was trying to do this," recalls Dr. Begley of Amgen, which derives nearly half of its revenues from cancer drugs and related treatments. "It was a really big deal."

When the Harvard researchers published their results in the prestigious journal Cell, in May 2009, Amgen moved swiftly to capitalize on the findings. 

At a meeting in the company's offices in Thousand Oaks, Calif., Dr. Begley assigned a group of Amgen researchers the task of identifying small molecules that might inhibit STK33. Another team got a more basic job: reproduce the Harvard data.

"We're talking about hundreds of millions of dollars in downstream investments" if the approach works," says Dr. Begley. "So we need to be sure we're standing on something firm and solid."

But over the next few months, Dr. Begley and his team got increasingly disheartened. Amgen scientists, it turned out, couldn't reproduce any of the key findings published in Cell.

For example, there was no difference in the growth of cells where STK33 was largely blocked, compared with a control group of cells where STK33 wasn't blocked.

What could account for the irreproducibility of the results?

"In our opinion there were methodological issues" in Amgen's approach that could have led to the different findings, says Claudia Scholl, one of the lead authors of the original Cell paper.

Dr. Scholl points out, for example, that Amgen used a different reagent to suppress STK33 than the one reported in Cell. Yet, she acknowledges that even when slightly different reagents are used, "you should be able to reproduce the results." 

Now a cancer researcher at the University Hospital of Ulm in Germany, Dr. Scholl says her team has reproduced the original Cell results multiple times, and continues to have faith in STK33 as a cancer target.

Amgen, however, killed its STK33 program. In September, two dozen of the firm's scientists published a paper in the journal Cancer Research describing their failure to reproduce the main Cell findings.

Dr. Begley suggests that academic scientists, like drug companies, should perform more experiments in a "blinded" manner to reduce any bias toward positive findings. Otherwise, he says, "there is a human desire to get the results your boss wants you to get."

Adds Atlas' Mr. Booth: "Nobody gets a promotion from publishing a negative study." 

Write to Gautam Naik at gautam.naik at wsj.com 

Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved

 

 

 

 

The Wall Street Journal

OPINION

DECEMBER 2, 2011

Absolute Certainty Is Not Scientific 

Global warming alarmists betray their cause when they declare that it is irresponsible to question them.

By DANIEL B. BOTKIN

One of the changes among scientists in this century is the increasing number who believe that one can have complete and certain knowledge. For example, Michael J. Mumma, a NASA senior scientist who has led teams searching for evidence of life on Mars, was quoted in the New York Times as saying, "Based on evidence, what we do have is, unequivocally, the conditions for the emergence of life were present on Mars—period, end of story."

This belief in absolute certainty is fundamentally what has bothered me about the scientific debate over global warming in the 21st century, and I am hoping it will not characterize the discussions at the United Nations Climate Change Conference in Durban, South Africa, currently under way. 


Bjorn Lomborg on the ClimateGate 2.0 e-mail scandal and World AIDS Day.

Reading Mr. Mumma's statement, I thought immediately of physicist Niels Bohr, a Nobel laureate, who said, "Anyone who is not shocked by quantum theory has not understood it." To which Richard Feynman, another famous physicist and Nobel laureate, quipped, "Nobody understands quantum mechanics." 

I felt nostalgic for those times when even the greatest scientific minds admitted limits to what they knew. And when they recognized well that the key to the scientific method is that it is a way of knowing in which you can never completely prove that something is absolutely true. Instead, the important idea about the method is that any statement, to be scientific, must be open to disproof, and a way of knowing how to disprove it exists. 

Therefore, "Period, end of story" is something a scientist can say—but it isn't science. 


I was one of many scientists on several panels in the 1970s who reviewed the results from the Viking Landers on Mars, the ones that were supposed to conduct experiments that would help determine whether there was or wasn't life on that planet. I don't remember anybody on those panels talking in terms of absolute certainty. Instead, the discussions were about what the evidence did and did not suggest, and what might be disprovable from them and from future landers.

I was also one of a small number of scientists—mainly ecologists, climatologists and meteorologists—who in the 1970s became concerned about the possibility of a human-induced global warming, based on then-new measurements. It seemed to be an important scientific problem, both as part of the beginning of a new science of global ecology and as a potentially major practical problem that nations would have to deal with. It did not seem to be something that should or would rise above standard science and become something that one had to choose sides in. But that's what has happened.

Some scientists make "period, end of story" claims that human-induced global warming definitely, absolutely either is or isn't happening. For me, the extreme limit of this attitude was expressed by economist Paul Krugman, also a Nobel laureate, who wrote in his New York Times column in June, "Betraying the Planet" that "as I watched the deniers make their arguments, I couldn't help thinking that I was watching a form of treason—treason against the planet." What had begun as a true scientific question with possibly major practical implications had become accepted as an infallible belief (or if you're on the other side, an infallible disbelief), and any further questions were met, Joe-McCarthy style, "with me or agin me." 

Not only is it poor science to claim absolute truth, but it also leads to the kind of destructive and distrustful debate we've had in the last decade about global warming. The history of science and technology suggests that such absolutism on both sides of a scientific debate doesn't often lead to practical solutions. 

It is helpful to go back to the work of the Wright brothers, whose invention of a true heavier-than-air flying machine was one kind of precursor to the Mars Landers. They basically invented aeronautical science and engineering, developed methods to test their hypotheses, and carefully worked their way through a combination of theory and experimentation. The plane that flew at Kill Devil Hill, a North Carolina dune, did not come out of true believers or absolute assertions, but out of good science and technological development. 

Let us hope that discussions about global warming can be more like the debates between those two brothers than between those who absolutely, completely agree with Paul Krugman and those who absolutely, completely disagree with him. How about a little agnosticism in our scientific assertions—and even, as with Richard Feynman, a little sense of humor so that we can laugh at our errors and move on? We should all remember that Feynman also said, "If you think that science is certain—well that's just an error on your part."

Mr. Botkin, president of the Center for the Study of the Environment and professor emeritus at the University of California, Santa Barbara, is the author of the forthcoming "Discordant Harmonies: Ecology in a Changing World" (Oxford University Press). 

Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved


More information about the SIGMETRICS mailing list