Science, Nature, Peer Review and Open Access Metrics

Stevan Harnad harnad at ECS.SOTON.AC.UK
Tue Feb 2 08:28:18 EST 2010

On 2-Feb-10, at 7:41 AM, Christopher Gutteridge wrote:

> Thanks Stevan, but there's one thing you didn't address. Which is  
> that the medical community, I believe, started the rules about not  
> accepting previously self-published works, as a measure against  
> quackery. The idea being to discourage reputable scientists from  
> publishing outside reviewed journals.
> I've seen a few of the "crackpot" papers that get submitted to  
> repositories, but bad medical information is far more immediately  
> dangerous to society than bad physics information.
> OA gives a risk of having overlay repositories, for example, which  
> could list both genuine quality work and "quackery", the genuine  
> articles lending their respectability to the others....
> ps. You are welcome to quote me by name :)

Chris Gutteridge is quite right.

(Let me first introduce Chris for those who don't already know who he  
is: Chris is the award-winning developer of EPrints; he has for years  
now been the successor of EPrints' original designer, Rob Tansley, who  
was then likewise at U. Southampton and has since gone on to design  
DSpace at MIT and is now at Google!)

Yes, there is a danger, especially in health-related research, in  
openly publicizing unrefereed reports that could endanger public  
health. This is another of the many reasons why the self-archiving of  
refereed final drafts can and should be mandated by institutions and  
funders, but the self-archiving of unrefereed preprints cannot and  
should not be mandated.
Open Access will help in two ways: It will raise the potential profile  
of published papers that have been unfairly relegated to a lower level  
of the peer-reviewed journal hierarchy than they deserved. It will  
also help to catch errors (in both published and unpublished  
postings), through broader peer feedback.

But users can and will learn to weight the credibility of a report by  
the track record for quality of the journal in which it appeared. OA  
metrics will help.

I've just posted the following comment where Kent Anderson raises  
another valid and related worry: that online postings and their tags  
and comments (including journals and their citations) could be  
inflating the impact of biased and bogus content through "dynamic  
filtering" and "amplification":

The solution is open, multiple metrics. Citation alone has inflated  
power right now, but with Open Access, it will have many potential  
competitors and complements. Multiple joint “weights” on the metrics  
can also be controlled by the user. And abuses can be detected as  
departures from the joint pattern — and named and shamed where needed.  
It’s far easier to abuse one metric, like citations, than to  
manipulate the whole lot. (As with spamming and spam-filtering, and  
other online abuses, it is more like the old “Spy vs. Spy” series in  
Mad Magazine, where each spy was always getting one step ahead of the  
other.)
Harnad, S. (1979) Creative disagreement. The Sciences 19: 18 - 20.

Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study  
in scientific quality control, New York: Cambridge University Press.

Harnad, S. (1984) Commentaries, opinions and the growth of scientific  
knowledge. American Psychologist 39: 1497 - 1498.

Harnad, S. (1985) Rational disagreement in peer review. Science,  
Technology and Human Values 10: 55-62.

Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A  
difficult balance: Peer review in biomedical publication.) Nature 322:  
24-25.

Harnad, S. (1995) Interactive Cognition: Exploring the Potential of  
Electronic Quote/Commenting. In: B. Gorayska & J.L. Mey (Eds.)  
Cognitive Technology: In Search of a Humane Interface. Elsevier.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific  
Quality Control in Scholarly Electronic Journals. In: Peek, R. &  
Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier.  
Cambridge MA: MIT Press. Pp 103-118.

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer  
Review, Peer Commentary and Copyright. Learned Publishing 11(4)  

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature  
[online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, D.  
(2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield.  
Pp. 235-242.

Harnad, S. (2003/2004) Back to the Oral Tradition Through Skywriting  
at the Speed of Thought. Interdisciplines. In: Salaun, Jean-Michel &  
Vandendorpe, Christian (dir.) Les défis de la publication sur le Web:  
hyperlectures, cybertextes et méta-éditions. Presses de l'enssib.

Harnad, S. (2008) Validating Research Performance Metrics Against Peer  
Rankings. Ethics in Science and Environmental Politics 8 (11). doi:  
10.3354/esep00088. (Theme section: The Use And Misuse Of Bibliometric  
Indices In Evaluating Scholarly Performance.)

Harnad, S. (2009) Open Access Scientometrics and the UK Research  
Assessment Exercise. Scientometrics 79 (1). Also in Torres-Salinas, D.  
and Moed, H. F. (Eds.) (2007) Proceedings of 11th Annual Meeting of  
the International Society for Scientometrics and Informetrics 11(1),  
pp. 27-33, Madrid, Spain.

> Stevan Harnad wrote:
>>> anon: "This article demonstrates some of the obvious issues with  
>>> peer review."
>> (1) Yes, peer review is imperfect, because human judgment is  
>> imperfect.
>> But there is no alternative system that is as good as or better for
>> checking, improving and tagging the quality of specialized research  
>> than
>> qualified specialist review, answerable to a qualified specialist
>> editor or board.
>> (2) If there is a weak link in peer review, it is not the peer review
>> itself, but the editor not doing a conscientious enough job. (The
>> solution is to make editors more answerable. It would also be good to
>> publish the name of the accepting reviewers -- but not of the  
>> rejecting
>> ones, if a paper is rejected, to protect the anonymity and hence the
>> honesty of reviewers who may be criticizing the work of someone who  
>> can
>> pay them back. Justice is the editor's responsibility.)
>> (3) Nature and Science are vastly over-rated, and OA will change  
>> this.
>> They are not just journals with high quality standards but also
>> especially highly desired "brands" because they can only accept a  
>> tiny
>> percentage of submissions, hence giving the impression of being the  
>> best
>> of the best. (Most Nature/Science rejections still go on to appear in
>> the top specialized journals in their fields. They just don't get the
>> big extra boost in visibility and impact that the Nature and Science
>> "brand" and publicity machine adds.) In reality, their actual choices
>> are often extremely arbitrary or stilted.
>> (4) So it is not true that peer review in general blocks good work,  
>> or
>> favors some work over others. Referees sometimes do that, but there  
>> are
>> always other journals to submit to. (Just about everything is  
>> published
>> somewhere, eventually.) Again, OA will help level this playing field.
>> (5) Mark Wolpert is right that authors tend to get "paranoid" about
>> this, when their work is rejected, especially by Nature and Science.
>> Sometimes they are right. Mostly they are not, and Nature and Science
>> are less biased than they are arbitrary, often going for what looks
>> like more lustre rather than more substance.
>> (6) Yes, in some sub-areas it is almost certain that Nature and  
>> Science
>> have clique sub-editors and referees, and that their choices are  
>> hence
>> sometimes biased and driven by competition rather than quality. These
>> should be exposed wherever possible, and the editor in chief should
>> frankly face up to it.  But this is a flaw in Nature and Science's  
>> vastly
>> inflated brand effect, not in peer review. And again, once OA becomes
>> universal, it will help to counteract this. (OA will also help to  
>> catch
>> errors, before and after peer review.)
>> Stevan Harnad
> -- 
> Christopher Gutteridge --
> Lead Developer, EPrints Project,
> Web Projects Manager, School of Electronics and Computer Science,
> University of Southampton.

More information about the SIGMETRICS mailing list