Peer Review Scandals

Stephen J Bensman notsjb at LSU.EDU
Tue Jul 15 09:14:33 EDT 2014


So much for peer review.

Stephen J Bensman, Ph.D.
LSU Libraries
Louisiana State University
Baton Rouge, LA 70803
USA


WALL STREET JOURNAL OPINION PIECE

The Corruption of Peer Review Is Harming Scientific Credibility
Dubious studies on the danger of hurricane names may be laughable. But bad science can cause bad policy.
By Hank Campbell
July 13, 2014 6:32 p.m. ET
Academic publishing was rocked by the news on July 8 that a company called Sage Publications is retracting 60 papers from its Journal of Vibration and Control, about the science of acoustics. The company said a researcher in Taiwan and others had exploited peer review so that certain papers were sure to get a positive review for placement in the journal. In one case, a paper's author gave glowing reviews to his own work using phony names.
Acoustics is an important field. But in biomedicine faulty research and a dubious peer-review process can have life-or-death consequences. In June, Dr. Francis Collins, director of the National Institutes of Health and responsible for $30 billion in annual government-funded research, held a meeting to discuss ways to ensure that more published scientific studies and results are accurate. According to a 2011 report in the monthly journal Nature Reviews Drug Discovery, the results of two-thirds of 67 key studies analyzed by Bayer researchers from 2008-2010 couldn't be reproduced.
That finding was a bombshell. Replication is a fundamental tenet of science, and the hallmark of peer review is that other researchers can look at data and methodology and determine the work's validity. Dr. Collins and co-author Dr. Lawrence Tabak highlighted the problem in a January 2014 article in Nature. "What hope is there that other scientists will be able to build on such work to further biomedical progress," if no one can check and replicate the research, they wrote.
The authors pointed to several reasons for flawed studies, including "poor training of researchers in experimental design," an "emphasis on making provocative statements," and publications that don't "report basic elements of experimental design." They also said that "some scientists reputedly use a 'secret sauce' to make their experiments work, and withhold details from publication or describe them only vaguely to retain a competitive edge."
Papers with such problems or omissions would never see the light of day if sound peer-review practices were in place; their absence at many journals is the root of the problem. Peer review involves an anonymous panel of objective experts critiquing a paper on its merits. Obviously, a panel should not contain anyone who agrees in advance to give the paper favorable attention and help it get published. Yet a variety of journals have allowed or overlooked such practices.
Absent rigorous peer review, we get the paper published in June in the Proceedings of the National Academy of Sciences. Titled "Female hurricanes are deadlier than male hurricanes," it concluded that hurricanes with female names cause more deaths than male-named hurricanes, ostensibly because implicit sexism makes people take the storms with a woman's name less seriously. The work was debunked once its methods were examined, but not before it got attention nationwide.
Such a dubious paper made its way into national media outlets because of the imprimatur of the prestigious National Academy of Sciences.
Yet a look at the organization's own submission guidelines makes clear that if you are a National Academy member today, you can edit a research paper that you wrote yourself and only have to answer a few questions before an editorial board; you can even arrange to be the official reviewer for people you know. The result of such laxity isn't just the publication of a dubious finding like the hurricane gender-bias claim. Some errors can have serious consequences if bad science leads to bad policy.
In 2002 and 2010, papers published in the Proceedings of the National Academy of Sciences claimed that a pesticide called atrazine was causing sex changes in frogs. As a result, the Environmental Protection Agency set up special panels to re-examine the product's safety. Both papers had the same editor, David Wake of the University of California, Berkeley, who is a colleague of the papers' lead author, Tyrone Hayes, also of Berkeley.
In keeping with National Academy of Sciences policy, Prof. Hayes preselected Prof. Wake as his editor. Both studies were published without a review of the data used to reach the finding. No one has been able to reproduce the results of either paper, including the EPA, which conducted expensive, time-consuming reviews of the pesticide prompted by the published claims. As the agency investigated, it couldn't even use those papers about atrazine's alleged effects, because the research they were based on didn't meet the criteria for legitimate scientific work. The authors refused to hand over the data that led them to their claimed results, which meant no one could run the same computer program and match their results.
Earlier this month, Nature retracted two studies it had published in January in which researchers from the RIKEN Center for Developmental Biology in Japan asserted that they had found a way to turn some cells into embryonic stem cells by a simple stress process. The studies had passed peer review, the magazine said, despite flaws that included misrepresented information.
Fixing peer review won't be easy, although exposing its weaknesses is a good place to start. Michael Eisen, a biologist at UC Berkeley, is a co-founder of the Public Library of Science, one of the world's largest nonprofit science publishers. He told me in an email: "We need to get away from the notion, proven wrong on a daily basis, that peer review of any kind at any journal means that a work of science is correct. What it means is that a few (1-4) people read it over and didn't see any major problems. That's a very low bar in even the best of circumstances."
But even the most rigorous peer review can be effective only if authors provide the data they used to reach their results, something that many still won't do and that few journals require for publication. Some publishers have begun to mandate open data. In March the Public Library of Science began requiring that study data be publicly available. That means anyone with the ability to check should be able to reproduce, validate and understand the findings in a published paper. This should also ensure that there is much better scrutiny of flawed claims about sexist weather events and hermaphroditic frogs before they appear on every news station in America.
Mr. Campbell is the founder of Science 2.0 and co-author of "Science Left Behind" (PublicAffairs, 2012).