SV: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad

Jeppe Nicolaisen jni at DB.DK
Fri Oct 10 11:39:04 EDT 2008


Dear Loet,

 

Neither did I mean this personally to anybody. The anti-theoretical stance is widespread in our field, and I believe the example I gave is just one instance of it.

 

I think you are right that the atmosphere on the qualitative side is often more problematic than in our community, if by "problematic" you mean something like less polite or even less friendly. Perhaps that is actually a sign of good health! Perhaps it is because they discuss their theories or absolute presuppositions much more than we do. As R.G. Collingwood writes in his famous classic "An Essay on Metaphysics" (Oxford, UK: The Clarendon Press, 1940), "people are apt to be ticklish in their absolute presuppositions":

 

"If you were talking to a pathologist about a certain disease and asked him 'What is the cause of the event E which you say sometimes happens in this disease?' he will reply 'The cause of E is C'; and if he were in a communicative mood he might go on to say 'That was established by So-and-so, in a piece of research that is now regarded as classical.' You might go on to ask: 'I suppose before So-and-so found out what the cause of E was, he was quite sure it had a cause?' The answer would be 'Quite sure, of course.' If you now say 'Why?' he will probably answer 'Because everything that happens has a cause.' If you are importunate enough to ask 'But how do you know that everything that happens has a cause?' he will probably blow right up in your face, because you have put your finger on one of his absolute presuppositions, and people are apt to be ticklish in their absolute presuppositions. But if he keeps his temper and gives you a civil and candid answer, it will be to the following effect. 'That is a thing we take for granted in my job. We don't question it. We don't try to verify it. It isn't a thing anybody has discovered, like microbes or the circulation of the blood. It is a thing we just take for granted'" (Collingwood, 1940, p. 31).

 

Best wishes,

Jeppe


________________________________

From: ASIS&T Special Interest Group on Metrics, on behalf of Loet Leydesdorff
Sent: Friday 10-10-2008 14:11
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad




Dear Jeppe,

I did not mean this personally to anybody, and I doubt that more
organization of interfacing (by ISSI or others) would really help,
because the problems are structural and encoded in our (quantitative
and qualitative) literatures.

The atmosphere on the qualitative side is often more problematic than
in our community, as far as I can see. I just thought that we should
be reflexive about these problems.

Best, Loet

________________________________

Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/



> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jeppe Nicolaisen
> Sent: Friday, October 10, 2008 1:22 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of
> measuring research by Stevan Harnad
>
> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Dear Loet,
>
> You wrote: "I go to scientometrics conferences and the
> atmosphere is almost anti-theoretical. (There are good
> exceptions [...]!)"
>
> I completely agree with you. Although there are good
> exceptions from this tendency, it is nevertheless common to
> find this anti-theoretical stance even in our most esteemed
> journals and books. The best example is probably from the
> special issue of Scientometrics on theories of citing:
>
> "I think the current state of our field calls for more
> empirical and practical work, and all this theorising should
> wait till a very large body - beyond a threshold - of
> empirical knowledge is built" (Arunachalam, 1998, p. 142).
>
> One has to respect Arunachalam for being so honest about it.
> Most of his "fellow anti-theorists" are not that outspoken.
> But it is my impression that the vast majority of colleagues
> in our field tend to agree with Arunachalam on
> this. They want to get on with the counting and measuring
> business and leave all this "theorizing" for later (or for
> the theorists).
>
> I know of course that you instantly recognized the
> theoretical and philosophical problems with the quote above.
> Others are referred to my article in ARIST, 41 (2007).
> However, the point of my letter is not just to point fingers
> :-) , but to start a discussion about what to do about it.
> The hostility against Scientometrics from other fields that
> you write about is definitely not something we want. A
> stronger theoretically anchored Scientometrics is definitely
> worth working for. But how? Sometimes I can't help wondering
> whether we have the right people working for us as editors,
> conference chairs, program chairs, etc. For instance, will
> the upcoming ISSI conference help to strengthen the
> theoretical foundation of our field? According to the
> objectives of the conference, "The ISSI 2009 Conference will
> provide an international open forum for scientists, research
> managers and authorities, information and communication
> related professionals to debate the current status and
> advancements of Scientometrics theory and applications ...".
> This leaves some hope, of course. But then again, merely
> providing such a forum for Scientometrics theory doesn't
> necessarily take us anywhere. It depends, of course, on the
> people attending and their willingness to engage in
> theoretical discussions. The editors, conference chairs,
> program chairs, etc. of our field play an important role as
> gatekeepers. They are the ones who can set the agenda and
> turn the field (gradually) toward a more theoretical
> orientation. It's a huge responsibility. Are they able to
> live up to it?
>
> Have a nice weekend!
>
> Best wishes,
>
> Jeppe Nicolaisen
> Associate Professor, PhD
> Royal School of Library and Information Science
> Copenhagen, Denmark
>
> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
> Sent: 10 October 2008 09:37
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring research
> by Stevan Harnad
>
> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Dear David,
>
> There is theorizing in science & technology studies, the
> (evolutionary) economics of innovation, etc., but it has drifted so
> far apart from the scientometrics enterprise that neither side can
> recognize the relevance of the other's work. The
> Science-of-Science-Policy program of the NSF may aim at bringing the
> two sides together, but processes of codification and
> self-organization at the field level cannot be organized away; they
> tend to be reproduced.
>
> I go to 4S conferences, etc., and there is hostility against
> scientometrics; and I go to scientometrics conferences and the
> atmosphere is almost anti-theoretical. (There are good exceptions on
> both sides!) Similarly with the referee comments one receives when
> submitting on either side, including the advice not to try to bridge
> the gap but instead to submit on the other side. See also my paper
> (with Peter van den Besselaar): "Scientometrics and Communication
> Theory: Towards Theoretically Informed Indicators," Scientometrics
> 38(1) (1997) 155-174.
>
> Since then, new developments in the theoretical domain, like
> Luhmann's sociology of communication, have provided opportunities to
> conceptualize processes like codification in terms which are both
> socially relevant and perhaps amenable to measurement. I have tried
> to work on this relation in: "Scientific Communication and Cognitive
> Codification: Social Systems Theory and the Sociology of Scientific
> Knowledge," European Journal of Social Theory 10(3), 375-388, 2007
> (available from my website).
>
> The problem, in my opinion, is that we have only indicators for
> information processing, and not for the processing of meaning or,
> beyond that, of knowledge. The OECD's program of indicators for the
> knowledge-based economy has not been so successful (Godin, 2006),
> but has led mainly to a rearrangement of existing indicators.
>
> With best wishes,
>
>
> Loet
>
> ________________________________
>
> Loet Leydesdorff
> Amsterdam School of Communications Research (ASCoR),
> Kloveniersburgwal 48, 1012 CX Amsterdam.
> Tel.: +31-20-525 6598; fax: +31-20-525 3681
> loet at leydesdorff.net ; http://www.leydesdorff.net/
>
> 
>
> > -----Original Message-----
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David E. Wojick
> > Sent: Thursday, October 09, 2008 8:46 PM
> > To: SIGMETRICS at LISTSERV.UTK.EDU
> > Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring
> > research by Stevan Harnad
> >
> > Administrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > I can't speak for Valdez but I know him and his work and
> > share some of his interests and concerns. The basic issue to
> > me is one that we find throughout science. On the one hand
> > we find lots of statistical analysis. On the other, we find
> > the development of theoretical processes and mechanisms
> > that explain the numbers. It is the discovery of these
> > processes and mechanisms that the science of science
> > presently lacks.
> >
> > Most of the people on this list are familiar with some of
> > these issues. One, which Valdez alludes to, is the
> > calculation of return on investment. We are pretty sure that
> > science is valuable but how do we measure that value? We have
> > many programs on which we have spent over $1 billion over the
> > last 10 years. What has been the return to society? What is
> > it likely to be in future? Has one program returned more than
> > another? Why is this so hard to figure out?
> >
> > Another is the quality of research. Surely some research is
> > better than other research, some papers better than others, in
> > several different ways. For that matter, what is the goal of
> > science, or is there more than one? Which fields are achieving
> > which goals, and to what degree? Are some fields more productive
> > than others? Are some speeding up while others slow down?
> > Economics has resolved many of these issues with models of
> > rational behavior. Why can't the science of science do likewise?
> > (It is okay if it can't, as long as we know why it can't.)
> >
> > The point is that we know we are measuring something
> > important but we don't know what it is. Most of the terms we
> > use to talk about science lack an operational definition. In
> > this sense the measurement of scientific activity is ahead of
> > our understanding of this activity. We do not have a
> > fundamental theory of the nature of science. We are like
> > geology before plate tectonics, or epidemiology before the
> > germ theory of disease, measuring what we do not understand.
> >
> > David Wojick
> >
> > >
> > > Clearly a message of interest to the subscribers to Sig Metrics of
> > >ASIST. Gene Garfield
> > >
> > >-----Original Message-----
> > >From: American Scientist Open Access Forum
> > >[mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG]
> > >On Behalf Of Stevan Harnad
> > >Sent: Wednesday, October 08, 2008 11:03 AM
> > >To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG
> > >Subject: Re: New ways of measuring research
> > >
> > >On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill
> > ><Bill.Valdez at science.doe.gov> wrote:
> > >
> > >> the primary reason that I believe bibliometrics, innovation
> > >> indices, patent analysis and econometric modeling are flawed is
> > >> that they rely upon the counting of things (paper, money, people,
> > >> etc.) without understanding the underlying motivations of the
> > >> actors within the scientific ecosystem.
> > >
> > >There are two ways to evaluate:
> > >
> > >Subjectively (expert judgement, peer review, opinion polls)
> > >or
> > >Objectively: counting things
> > >
> > >The same is true of motives: you can assess them subjectively or
> > >objectively. If objectively, you have to count things.
> > >
> > >That's metrics.
> > >
> > >Philosophers say "Show me someone who wishes to discard
> > >metaphysics, and I'll show you a metaphysician with a rival
> > >(metaphysical) system."
> > >
> > >The metric equivalent is "Show me someone who wishes to discard
> > >metrics (counting things), and I'll show you a metrician with a
> > >rival (metric) system."
> > >
> > >Objective metrics, however, must be *validated*, and that usually
> > >begins by initializing their weights based on their correlation
> > >with existing (already validated, or face-valid) metrics and/or
> > >peer review (expert judgment).
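> > >
> > >(As a toy sketch of such weight initialization, not taken from any
> > >of the papers cited here: the data below are made up, "citations"
> > >and "downloads" stand in for any candidate metrics, and ordinary
> > >least squares stands in for whatever estimator one prefers.)
> > >
> > >  import numpy as np
> > >
> > >  rng = np.random.default_rng(0)
> > >  n = 200  # hypothetical units of assessment, e.g. departments
> > >  citations = rng.poisson(50, n).astype(float)
> > >  downloads = rng.poisson(500, n).astype(float)
> > >  # stand-in for the "gold standard": simulated peer-review scores
> > >  peer = (0.6 * citations / citations.std()
> > >          + 0.2 * downloads / downloads.std()
> > >          + rng.normal(0, 1, n))
> > >
> > >  # correlation of each candidate metric with peer review
> > >  for name, m in [("citations", citations), ("downloads", downloads)]:
> > >      print(name, round(float(np.corrcoef(m, peer)[0, 1]), 2))
> > >
> > >  # joint initialization of the weights by least squares
> > >  X = np.column_stack([citations, downloads, np.ones(n)])
> > >  w, *_ = np.linalg.lstsq(X, peer, rcond=None)
> > >  print("initial weights (citations, downloads, intercept):", w)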
> > >
> > >Note also that there are a-priori evaluations (research funding
> > >proposals, research findings submitted for publication) and
> > >a-posteriori evaluations (research performance assessment).
> > >
> > >> what... motivates scientists to collaborate?
> > >
> > >You can ask them (subjective), or you can count things
> > >(co-authorships, co-citations, etc.) to infer what factors underlie
> > >collaboration (objective).
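> > >
> > >(A minimal sketch of such counting, over an invented toy corpus;
> > >real author lists would come from a bibliographic database.)
> > >
> > >  from collections import Counter
> > >  from itertools import combinations
> > >
> > >  papers = [  # toy author lists
> > >      ["A. Smith", "B. Jones"],
> > >      ["A. Smith", "B. Jones", "C. Lee"],
> > >      ["C. Lee", "D. Kim"],
> > >  ]
> > >  pairs = Counter()
> > >  for authors in papers:
> > >      for a, b in combinations(sorted(set(authors)), 2):
> > >          pairs[(a, b)] += 1  # one co-authorship event per pair
> > >  print(pairs.most_common())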
> > >
> > >> Second, what science policy makers want is a set of decision
> > >> support tools that supplement the existing gold standard (expert
> > >> judgment) and provide options for the future.
> > >
> > >New metrics need to be validated against existing, already
> > >validated (or face-valid) metrics, which in turn have to be
> > >validated against the "gold standard" (expert judgment). Once
> > >shown to be reliable and valid, metrics can then predict on their
> > >own, especially jointly, with suitable weights:
> > >
> > >The UK RAE 2008 offers an ideal opportunity to validate a wide
> > >spectrum of old and new metrics, jointly, field by field, against
> > >expert judgment:
> > >
> > >Harnad, S. (2007) Open Access Scientometrics and the UK Research
> > >Assessment Exercise. In Proceedings of 11th Annual Meeting of the
> > >International Society for Scientometrics and Informetrics 11(1),
> > >pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
> > >http://eprints.ecs.soton.ac.uk/13804/
> > >
> > >Sample of candidate OA-era metrics:
> > >
> > >Citations (C)
> > >CiteRank
> > >Co-citations
> > >Downloads (D)
> > >C/D Correlations
> > >Hub/Authority index
> > >Chronometrics: Latency/Longevity
> > >Endogamy/Exogamy
> > >Book citation index
> > >Research funding
> > >Students
> > >Prizes
> > >h-index
> > >Co-authorships
> > >Number of articles
> > >Number of publishing years
> > >Semiometrics (latent semantic indexing, text overlap, etc.)
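> > >
> > >(As a minimal sketch of one metric from the list above, here is
> > >the h-index: the largest h such that h papers have at least h
> > >citations each. The citation counts are invented.)
> > >
> > >  def h_index(citation_counts):
> > >      # sort descending; h is the largest rank i with count >= i
> > >      h = 0
> > >      for i, c in enumerate(sorted(citation_counts, reverse=True), 1):
> > >          if c >= i:
> > >              h = i
> > >          else:
> > >              break
> > >      return h
> > >
> > >  print(h_index([10, 8, 5, 4, 3]))  # -> 4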
> > >
> > >> policy makers need to understand the benefits and effectiveness
> > >> of their investment decisions in R&D. Currently, policy makers
> > >> rely on big committee reviews, peer review, and their own best
> > >> judgment to make those decisions. The current set of tools
> > >> available don't provide policy makers with rigorous answers to
> > >> the benefits/effectiveness questions... and they are too
> > >> difficult to use and/or inexplicable to the normal policy maker.
> > >> The result is the laundry list of "metrics" or "indicators" that
> > >> are contained in the "Gathering Storm" or any of the innovation
> > >> indices that I have seen to date.
> > >
> > >The difference between unvalidated and validated metrics is the
> > >difference between night and day.
> > >
> > >The role of expert judgment will obviously remain primary in the
> > >case of a-priori evaluations (specific research proposals and
> > >submissions for publication) and a-posteriori evaluations
> > >(research performance evaluation, impact studies).
> > >
> > >> Finally, I don't think we know enough about the functioning of
> > >> the innovation system to begin making judgments about which
> > >> metrics/indicators are reliable enough to provide guidance to
> > >> policy makers. I believe that we must move to an ecosystem model
> > >> of innovation, and that if you do that, then non-obvious
> > >> indicators (relative competitiveness/openness of the system,
> > >> embedded infrastructure, etc.) become much more important than
> > >> the traditional metrics used by NSF, OECD, EU and others. In
> > >> addition, the decision support tools will gravitate away from
> > >> the static (econometric modeling, patent/bibliometric citations)
> > >> and toward the dynamic (systems modeling, visual analytics).
> > >
> > >I'm not sure what all these measures are, but assuming they are
> > >countable metrics, they all need prior validation against
> > >validated or face-valid criteria, field by field, preferably with
> > >a large battery of candidate metrics, validated jointly,
> > >initializing the weights of each.
> > >
> > >OA will help provide us with a rich new spectrum of candidate
> > >metrics and an open means of monitoring, validating, and
> > >fine-tuning them.
> > >
> > >Stevan Harnad
> >
> > --
> >
> > "David E. Wojick, PhD" <WojickD at osti.gov>
> > Senior Consultant for Innovation
> > Office of Scientific and Technical Information
> > US Department of Energy
> > http://www.osti.gov/innovation/
> > 391 Flickertail Lane, Star Tannery, VA 22654 USA
> > 540-858-3136
> >
> > http://www.bydesign.com/powervision/resume.html provides my
> > bio and past client list.
> > http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/
> > presents some of my own research on information structure and
> > dynamics.
> >
>


