New ways of measuring the economic and other impacts of research

Eugene Garfield eugene.garfield at THOMSONREUTERS.COM
Thu Oct 9 15:29:00 EDT 2008


Interest in methods of measuring the return on investment in research is
not new. Ed Mansfield (now deceased) at Penn was active in this area, as
were Zvi Griliches and others at Harvard. About six years ago we
established an annual award at Research!America, which will again be
presented next week in Washington. You can find information at
http://www.researchamerica.org/economicimpact_award

And 

http://www.researchamerica.org/event_detail/id:32

The awards committee of R!A welcomes nominations of papers which
contribute to demonstrating the economic impact of basic research,
prevention, etc. 

David Wojick and others are correct to point out the difficulties in
measuring these impacts, but that should not prevent us from seeking
solutions. 

-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David E. Wojick
Sent: Thursday, October 09, 2008 2:47 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring research by
Stevan Harnad


I can't speak for Valdez, but I know him and his work and share some of
his interests and concerns. The basic issue to me is one that we find
throughout science. On the one hand we find lots of statistical
analysis. But on the other we find the development of theoretical
processes and mechanisms that explain the numbers. It is the discovery
of these processes and mechanisms that the science of science presently
lacks.

Most of the people on this list are familiar with some of these issues.
One, which Valdez alludes to, is the calculation of return on
investment. We are pretty sure that science is valuable, but how do we
measure that value? We have many programs on which we have spent over $1
billion over the last 10 years. What has been the return to society?
What is it likely to be in the future? Has one program returned more
than another? Why is this so hard to figure out?

Another is the quality of research. Surely some research is better than
other research, and some papers better than others, in several different
ways. For that matter, what is the goal of science, or is there more
than one? Which fields are achieving which goals, and to what degree?
Are some fields more productive than others? Are some speeding up while
others slow down? Economics has resolved many of these issues with
models of rational behavior. Why can't the science of science do
likewise? (It is okay if it can't, as long as we know why it can't.)

The point is that we know we are measuring something important but we
don't know what it is. Most of the terms we use to talk about science
lack an operational definition. In this sense the measurement of
scientific activity is ahead of our understanding of this activity. We
do not have a fundamental theory of the nature of science. We are like
geology before plate tectonics, or epidemiology before the germ theory
of disease, measuring what we do not understand.

David Wojick

>
> Clearly a message of interest to the subscribers to Sig Metrics of
>ASIST. Gene Garfield
>
>-----Original Message-----
>From: American Scientist Open Access Forum
>[mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On
>Behalf Of Stevan Harnad
>Sent: Wednesday, October 08, 2008 11:03 AM
>To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG
>Subject: Re: New ways of measuring research
>
>On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill
><Bill.Valdez at science.doe.gov> wrote:
>
>> the primary reason that I believe bibliometrics, innovation
>> indices, patent analysis and econometric modeling are flawed is that
>> they rely upon the counting of things (paper, money, people, etc.)
>> without understanding the underlying motivations of the actors within
>> the scientific ecosystem.
>
>There are two ways to evaluate:
>
>Subjectively (expert judgement, peer review, opinion polls)
>or
>Objectively: counting things
>
>The same is true of motives: you can assess them subjectively or
>objectively. If objectively, you have to count things.
>
>That's metrics.
>
>Philosophers say "Show me someone who wishes to discard metaphysics,
>and I'll show you a metaphysician with a rival (metaphysical) system."
>
>The metric equivalent is "Show me someone who wishes to discard
>metrics (counting things), and I'll show you a metrician with a rival
>(metric) system."
>
>Objective metrics, however, must be *validated*, and that usually
>begins by initializing their weights based on their correlation with
>existing (already validated, or face-valid) metrics and/or peer review
>(expert judgment).
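>
>For illustration, here is a minimal sketch of that kind of weight
>initialization in Python. The numbers are purely hypothetical (the
>candidate metric, face-valid metric, and peer scores below are invented
>for the example), and scipy is assumed to be available:
>
>    # Sketch: correlate a candidate metric (downloads) with an already
>    # face-valid metric (citations) and with peer-review scores.
>    # All numbers are hypothetical, for illustration only.
>    from scipy.stats import spearmanr
>
>    downloads  = [120, 45, 300, 8, 75, 210]   # candidate metric, per paper
>    citations  = [14, 3, 40, 1, 9, 22]        # face-valid metric, per paper
>    peer_score = [4, 2, 5, 1, 3, 4]           # expert judgment, 1-5 scale
>
>    rho_cit, _  = spearmanr(downloads, citations)
>    rho_peer, _ = spearmanr(downloads, peer_score)
>
>    # One simple way to initialize the candidate's weight is by its
>    # correlation with the validated criterion; it can be tuned later.
>    print(f"rho(downloads, citations)   = {rho_cit:.2f}")
>    print(f"rho(downloads, peer review) = {rho_peer:.2f}")
>    print(f"initial weight for downloads: {rho_peer:.2f}")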
>
>Note also that there are a-priori evaluations (research funding
>proposals, research findings submitted for publication) and
>a-posteriori evaluations (research performance assessment).
>
>> what... motivates scientists to collaborate?
>
>You can ask them (subjective), or you can count things
>(co-authorships, co-citations, etc.) to infer what factors underlie
>collaboration (objective).
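>
>As a concrete (and deliberately toy) example of such counting, the
>following Python sketch tallies co-authorship pairs from a hypothetical
>list of papers, each given as its author list; the author names are
>placeholders, not real data:
>
>    # Sketch: count co-authorship pairs across a set of papers.
>    from collections import Counter
>    from itertools import combinations
>
>    papers = [
>        ["Author A", "Author B"],
>        ["Author B", "Author C", "Author D"],
>        ["Author C", "Author B"],
>    ]
>
>    pair_counts = Counter()
>    for authors in papers:
>        # Each unordered pair of distinct co-authors counts once per paper.
>        for pair in combinations(sorted(set(authors)), 2):
>            pair_counts[pair] += 1
>
>    for (a, b), n in pair_counts.most_common():
>        print(f"{a} -- {b}: {n} joint paper(s)")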
>
>> Second, what science policy makers want is a set of decision support
>> tools that supplement the existing gold standard (expert judgment)
>> and provide options for the future.
>
>New metrics need to be validated against existing, already validated
>(or face-valid) metrics, which in turn have to be validated against the
>"gold standard" (expert judgment). Once shown to be reliable and valid,
>metrics can then predict on their own, especially jointly, with
>suitable weights (a worked sketch follows the metric list below):
>
>The UK RAE 2008 offers an ideal opportunity to validate a wide
>spectrum of old and new metrics, jointly, field by field, against
>expert judgment:
>
>Harnad, S. (2007) Open Access Scientometrics and the UK Research
>Assessment Exercise. In Proceedings of 11th Annual Meeting of the
>International Society for Scientometrics and Informetrics 11(1), pp.
>27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
>http://eprints.ecs.soton.ac.uk/13804/
>
>Sample of candidate OA-era metrics:
>
>Citations (C)
>CiteRank
>Co-citations
>Downloads (D)
>C/D Correlations
>Hub/Authority index
>Chronometrics: Latency/Longevity
>Endogamy/Exogamy
>Book citation index
>Research funding
>Students
>Prizes
>h-index
>Co-authorships
>Number of articles
>Number of publishing years
>Semiometrics (latent semantic indexing, text overlap, etc.)
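>
>As a sketch of what "validated jointly, with suitable weights" could
>look like in practice, the following Python fragment regresses expert
>rankings on a small battery of the metrics listed above. The data are
>hypothetical toy values and numpy is assumed; in practice the exercise
>would be run field by field (e.g., against RAE 2008 panel judgments):
>
>    # Sketch: initialize per-metric weights by regressing expert
>    # judgment on a standardized battery of candidate metrics.
>    import numpy as np
>
>    # Columns: citations, downloads, h-index, co-authorships (toy data).
>    metrics = np.array([
>        [14, 120, 5, 3],
>        [ 3,  45, 2, 1],
>        [40, 300, 9, 6],
>        [ 1,   8, 1, 2],
>        [ 9,  75, 4, 4],
>        [22, 210, 7, 5],
>    ], dtype=float)
>    expert_rank = np.array([4, 2, 5, 1, 3, 4], dtype=float)
>
>    # Standardize so the fitted weights are comparable across metrics.
>    X = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
>    X = np.column_stack([np.ones(len(X)), X])   # add intercept term
>
>    weights, *_ = np.linalg.lstsq(X, expert_rank, rcond=None)
>    names = ["intercept", "citations", "downloads", "h-index", "co-authorships"]
>    for name, w in zip(names, weights):
>        print(f"{name:14s} {w:+.3f}")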
>
>> policy makers need to understand the benefits and effectiveness of
>> their investment decisions in R&D.  Currently, policy makers rely on
>> big committee reviews, peer review, and their own best judgment to
>> make those decisions.  The current set of tools available don't
>> provide policy makers with rigorous answers to the
>> benefits/effectiveness questions... and they are too difficult to use
>> and/or inexplicable to the normal policy maker.  The result is the
>> laundry list of "metrics" or "indicators" that are contained in the
>> "Gathering Storm" or any of the innovation indices that I have seen
>> to date.
>
>The difference between unvalidated and validated metrics is the
>difference between night and day.
>
>The role of expert judgment will obviously remain primary in the case
>of a-priori evaluations (specific research proposals and submissions
>for publication) and a-posteriori evaluations (research performance
>evaluation, impact studies).
>
>> Finally, I don't think we know enough about the functioning of the
>> innovation system to begin making judgments about which
>> metrics/indicators are reliable enough to provide guidance to policy
>> makers.  I believe that we must move to an ecosystem model of
>> innovation and that if you do that, then non-obvious indicators
>> (relative competitiveness/openness of the system, embedded
>> infrastructure, etc.) become much more important than the traditional
>> metrics used by NSF, OECD, EU and others.  In addition, the decision
>> support tools will gravitate away from the static (econometric
>> modeling, patent/bibliometric citations) and toward the dynamic
>> (systems modeling, visual analytics).
>
>I'm not sure what all these measures are, but assuming they are
>countable metrics, they all need prior validation against validated or
>face-valid criteria, field by field, and preferably as a large battery
>of candidate metrics, validated jointly, initializing the weights of
>each.
>
>OA will help provide us with a rich new spectrum of candidate metrics
>and an open means of monitoring, validating, and fine-tuning them.
>
>Stevan Harnad

-- 

"David E. Wojick, PhD" <WojickD at osti.gov>
Senior Consultant for Innovation
Office of Scientific and Technical Information
US Department of Energy
http://www.osti.gov/innovation/
391 Flickertail Lane, Star Tannery, VA 22654 USA
540-858-3136

http://www.bydesign.com/powervision/resume.html provides my bio and past
client list. 
http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/
presents some of my own research on information structure and dynamics. 


