Wikiscience

Stephen J Bensman notsjb at LSU.EDU
Mon Oct 31 13:49:50 EDT 2011


I have been using the Publish or Perish software, created by Anne-Wil
Harzing, to study the h-index publications of the winners of the Nobel
Prize in chemistry.  These publications fulfilled the stipulation of
Garfield's law of concentration by all being articles published in the
few elite journals highest in total cites; the median rank of these
journals by total cites was 22.  What struck me most about these
publications was the amount of co-authorship and the fact that the
prize winners most often were not the primary authors but ranked far
down the authorship list.  Breakthrough chemical research is evidently
highly collaborative, and authorship position is not indicative of an
author's importance.  One of these papers had 22 co-authors, and the
prize winner was listed last.  Attributing citations to one author or
another in certain fields seems archaic, since we are dealing with
collectives, or what I call "wikiscience."  For this reason, I found
the Wall Street Journal article below of extreme interest.  It seems
that, to evaluate a scientist's true importance, you must use something
like Google Scholar, which can retrieve the scientist's works no matter
what her/his authorship position.  Harzing's Publish or Perish software
can be downloaded for free from the following Web site:
http://www.harzing.com/.
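
As a quick illustration of how an h-index is derived from per-paper
citation counts (the kind of data Publish or Perish retrieves from
Google Scholar), here is a minimal Python sketch.  The citation counts
and co-author counts in it are hypothetical examples, not figures from
the Nobel study above, and the fractional-counting function simply
shows one common bibliometric convention for splitting credit among
co-authors.

# Minimal illustrative sketch.  The numbers below are hypothetical and
# are not taken from the Nobel Prize study described above.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def fractional_credit(citations, coauthor_counts):
    """Fractional counting: split each paper's citations equally among
    its co-authors instead of crediting any single author in full."""
    return sum(c / n for c, n in zip(citations, coauthor_counts))

# Hypothetical per-paper citation counts for one author, as a tool such
# as Publish or Perish might retrieve, plus the number of co-authors on
# each paper.
cites = [412, 198, 57, 44, 30, 21, 9, 4, 1, 0]
coauthors = [22, 5, 3, 8, 2, 4, 6, 3, 2, 1]

print(h_index(cites))                                 # 7
print(round(fractional_credit(cites, coauthors), 1))  # about 106.4

The contrast between the whole-counted total (776 citations) and the
fractional total (about 106) is one crude way to see how strongly an
individual's apparent record depends on the counting convention chosen.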

Stephen J Bensman

LSU Libraries

Louisiana State University

Baton Rouge, LA 70803

LIFE & CULTURE

OCTOBER 29, 2011

The New Einsteins Will Be Scientists Who Share 

From cancer to cosmology, researchers could race ahead by working
together, online and in the open

 

By MICHAEL NIELSEN

In January 2009, a mathematician at Cambridge University named Tim
Gowers decided to use his blog to run an unusual social experiment. He
picked out a difficult mathematical problem and tried to solve it
completely in the open, using his blog to post ideas and partial
progress. He issued an open invitation for others to contribute their
own ideas, hoping that many minds would be more powerful than one. He
dubbed the experiment the Polymath Project.

[Illustration: Alex Nabaum] On an experimental blog, a far-flung group
of mathematicians cracked a tough problem in weeks.

Several hours after Mr. Gowers opened up his blog for discussion, a
Canadian-Hungarian mathematician posted a comment. Fifteen minutes
later, an Arizona high-school math teacher chimed in. Three minutes
after that, the UCLA mathematician Terence Tao commented. The discussion
ignited, and in just six weeks, the mathematical problem had been
solved.

Other challenges have followed, and though the polymaths haven't found
solutions every time, they have pioneered a new approach to
problem-solving. Their work is an example of the experiments in
networked science that are now being done to study everything from
galaxies to dinosaurs.

These projects use online tools as cognitive tools to amplify our
collective intelligence. The tools are a way of connecting the right
people to the right problems at the right time, activating what would
otherwise be latent expertise. 

Networked science has the potential to speed up dramatically the rate of
discovery across all of science. We may well see the day-to-day process
of scientific research change more fundamentally over the next few
decades than over the past three centuries. 

But there are major obstacles to realizing this goal. Though you might
think that scientists would aggressively adopt new tools for discovery,
they have been surprisingly inhibited. Ventures such as the Polymath
Project remain the exception, not the rule.

Consider the idea of sharing scientific data online. The best-known
example of this is the human genome project, whose data may be
downloaded by anyone. When you read in the news that a certain gene is
associated with a particular disease, you're almost certainly seeing a
discovery made possible by the project's open-data policy. 

Despite the value of open data, most labs make no systematic effort to
share data with other scientists. As one biologist told me, he had been
"sitting on [the] genome" for an entire species of life for more than a
year. A whole species of life! Just imagine the vital discoveries that
other scientists could have made if that genome had been uploaded to an
online database.

Why don't scientists share?

If you're a scientist applying for a job or a grant, the biggest factor
determining your success will be your record of scientific publications.
If that record is stellar, you'll do well. If not, you'll have a
problem. So you devote your working hours to tasks that will lead to
papers in scientific journals.

Even if you personally think it would be far better for science as a
whole if you carefully curated and shared your data online, that is time
away from your "real" work of writing papers. Except in a few fields,
sharing data is not something your peers will give you credit for doing.


There are other ways in which scientists are still backward in using
online tools. Consider, for example, the open scientific wikis launched
by a few brave pioneers in fields like quantum computing, string theory
and genetics (a wiki allows the sharing and collaborative editing of an
interlinked body of information, the best-known example being
Wikipedia). 

Specialized wikis could serve as up-to-date reference works on the
latest research in a field, like rapidly evolving super-textbooks. They
could include descriptions of major unsolved scientific problems and
serve as a tool to find solutions.

But most such wikis have failed. They have the same problem as data
sharing: Even if scientists believe in the value of contributing, they
know that writing a single mediocre paper will do far more for their
careers. The incentives are all wrong.

If networked science is to reach its potential, scientists will have to
embrace and reward the open sharing of all forms of scientific
knowledge, not just traditional journal publication. Networked science
must be open science. But how to get there?

A good start would be for government grant agencies (like the National
Institutes of Health and the National Science Foundation) to work with
scientists to develop requirements for the open sharing of knowledge
that is discovered with public support. Such policies have already
helped to create open data sets like the one for the human genome. But
they should be extended to require earlier and broader sharing. Grant
agencies also should do more to encourage scientists to submit new kinds
of evidence of their impact in their fields (not just papers!) as part of
their applications for funding. 

The scientific community itself needs to have an energetic, ongoing
conversation about the value of these new tools. We have to overthrow
the idea that it's a diversion from "real" work when scientists conduct
high-quality research in the open. Publicly funded science should be
open science.

Improving the way that science is done means speeding us along in curing
cancer, solving the problem of climate change and launching humanity
permanently into space. It means fundamental insights into the human
condition, into how the universe works and what it's made of. It means
discoveries not yet dreamt of. 

In the years ahead, we have an astonishing opportunity to reinvent
discovery itself. But to do so, we must first choose to create a
scientific culture that embraces the open sharing of knowledge.

Mr. Nielsen is a pioneer in the field of quantum computing and the
author of "Reinventing Discovery: The New Era of Networked Science,"
from which this is adapted. 

Copyright 2011 Dow Jones & Company, Inc. All Rights Reserved
