FW: Re: New ways of measuring research by Stevan Harnad
Eugene Garfield
eugene.garfield at THOMSONREUTERS.COM
Wed Oct 8 14:18:08 EDT 2008
Clearly a message of interest to the subscribers to Sig Metrics of
ASIST. Gene Garfield
-----Original Message-----
From: American Scientist Open Access Forum
[mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On
Behalf Of Stevan Harnad
Sent: Wednesday, October 08, 2008 11:03 AM
To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG
Subject: Re: New ways of measuring research
On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill
<Bill.Valdez at science.doe.gov> wrote:
> the primary reason that I believe bibliometrics, innovation
> indices, patent analysis and econometric modeling are flawed is that
> they rely upon the counting of things (paper, money, people, etc.)
> without understanding the underlying motivations of the actors within
> the scientific ecosystem.
There are two ways to evaluate:
Subjectively (expert judgment, peer review, opinion polls)
or
Objectively (counting things)
The same is true of motives: you can assess them subjectively or
objectively. If objectively, you have to count things.
That's metrics.
Philosophers say "Show me someone who wishes to discard metaphysics,
and I'll show you a metaphysician with a rival (metaphysical) system."
The metric equivalent is "Show me someone who wishes to discard
metrics (counting things), and I'll show you a metrician with a rival
(metric) system."
Objective metrics, however, must be *validated*, and that usually
begins by initializing their weights based on their correlation with
existing (already validated, or face-valid) metrics and/or peer review
(expert judgment).
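To make that concrete, here is a minimal sketch (in Python, with invented numbers rather than real data) of initializing the weight of a new candidate metric, say download counts, from its correlation with an already-validated metric, say citation counts:

    # Minimal sketch: initialize the weight of a candidate metric (downloads)
    # from its correlation with an already-validated metric (citations).
    # All numbers are invented, purely for illustration.
    import numpy as np

    citations = np.array([12, 45, 3, 88, 20, 7, 150, 33])        # validated metric
    downloads = np.array([90, 400, 25, 700, 160, 60, 1200, 300])  # candidate metric

    r = np.corrcoef(citations, downloads)[0, 1]   # Pearson correlation
    print(f"correlation r = {r:.2f}")             # use r (or r**2) as an initial weight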
Note also that there are a-priori evaluations (research funding
proposals, research findings submitted for publication) and
a-posteriori evaluations (research performance assessment).
> what... motivates scientists to collaborate?
You can ask them (subjective), or you can count things
(co-authorships, co-citations, etc.) to infer what factors underlie
collaboration (objective).
> Second, what science policy makers want is a set of decision support
> tools that supplement the existing gold standard (expert judgment) and
> provide options for the future.
New metrics need to be validated against existing, already validated
(or face-valid) metrics which in turn have to be validated against the
"gold standard" (expert judgment. Once shown to be reliable and valid,
metrics can then predict on their own, especially jointly, with
suitable weights:
The UK RAE 2008 offers an ideal opportunity to validate a wide
spectrum of old and new metrics, jointly, field by field, against
expert judgment:
Harnad, S. (2007) Open Access Scientometrics and the UK Research
Assessment Exercise. In Proceedings of 11th Annual Meeting of the
International Society for Scientometrics and Informetrics 11(1), pp.
27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
http://eprints.ecs.soton.ac.uk/13804/
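To illustrate what such joint validation might look like (a sketch only, with invented data rather than the actual RAE returns), one can fit weights for a whole battery of metrics against expert rankings within a single field by least squares:

    # Illustrative sketch (invented data): fit joint weights for a battery
    # of candidate metrics against expert rankings within one field.
    import numpy as np

    # rows = departments in one field; columns = [citations, downloads, h-index]
    metrics = np.array([
        [120,  900, 14],
        [ 60,  400,  9],
        [200, 1500, 21],
        [ 30,  250,  5],
        [ 90,  700, 12],
    ], dtype=float)
    expert_score = np.array([3.1, 2.2, 3.8, 1.6, 2.9])  # e.g. panel grades

    # Standardize columns so the fitted weights are comparable across metrics.
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    X = np.column_stack([np.ones(len(z)), z])            # add an intercept
    weights, *_ = np.linalg.lstsq(X, expert_score, rcond=None)
    print("initial weights (intercept, citations, downloads, h-index):",
          weights.round(2))

Repeating such a fit field by field, and then testing the fitted weights on departments held out of the fit, is one way the validation exercise could proceed.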
Sample of candidate OA-era metrics:
Citations (C)
CiteRank
Co-citations
Downloads (D)
C/D Correlations
Hub/Authority index
Chronometrics: Latency/Longevity
Endogamy/Exogamy
Book citation index
Research funding
Students
Prizes
h-index (see the sketch after this list)
Co-authorships
Number of articles
Number of publishing years
Semiometrics (latent semantic indexing, text overlap, etc.)
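As a concrete instance of one list entry, the h-index is simple to compute from an author's citation counts (a minimal sketch; the counts are invented):

    # Minimal sketch: h-index = the largest h such that the author has at
    # least h papers with >= h citations each. Counts are invented.
    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with >= 3 citations)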
> policy makers need to understand the benefits and effectiveness of their
> investment decisions in R&D. Currently, policy makers rely on big
> committee reviews, peer review, and their own best judgment to make
> those decisions. The current set of tools available don't provide
> policy makers with rigorous answers to the benefits/effectiveness
> questions... and they are too difficult to use and/or
> inexplicable to the normal policy maker. The result is the laundry list
> of "metrics" or "indicators" that are contained in the "Gathering Storm"
> or any of the innovation indices that I have seen to date.
The difference between unvalidated and validated metrics is the
difference between night and day.
The role of expert judgment will obviously remain primary in the case
of a-priori evaluations (specific research proposals and submissions
for publication) and a-posteriori evaluations (research performance
evaluation, impact studies).
> Finally, I don't think we know enough about the functioning of the
> innovation system to begin making judgments about which
> metrics/indicators are reliable enough to provide guidance to policy
> makers. I believe that we must move to an ecosystem model of innovation
> and that if you do that, then non-obvious indicators (relative
> competitiveness/openness of the system, embedded infrastructure, etc.)
> become much more important than the traditional metrics used by NSF,
> OECD, EU and others. In addition, the decision support tools will
> gravitate away from the static (econometric modeling,
> patent/bibliometric citations) and toward the dynamic (systems modeling,
> visual analytics).
I'm not sure what all these measures are, but assuming they are
countable metrics, they all need prior validation against validated or
face-valid criteria, field by field, and preferably as a large battery
of candidate metrics, validated jointly, initializing the weights of
each.
OA will help provide us with a rich new spectrum of candidate metrics
and an open means of monitoring, validating, and fine-tuning them.
Stevan Harnad