Paper on scientometrics

Loet Leydesdorff loet at LEYDESDORFF.NET
Tue Jul 30 02:38:09 EDT 2013


Dear Fil,

I agree. It is empirically difficult to distinguish the two because exogenous
developments can reflexively be endogenized. Furthermore, the sources of
change may vary along the time axis and may look different with the benefit
of hindsight (Henry Small's remark).

Best,
Loet



On Mon, Jul 29, 2013 at 6:55 PM, Fil Menczer <fil at indiana.edu> wrote:

> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Dear Loet et al.,
>
> On Sun, Jul 28, 2013 at 2:36 AM, Loet Leydesdorff <loet at leydesdorff.net>
> wrote:
> >
> > It seems to me that in your paper scientific developments are exogenous:
> > “and exogenous events, such as scientific discoveries.” You assume that
> > collaborations in social networks (e.g., coauthorships) are the drivers
> > of new developments. One could argue that this is the case in normal
> > science more than in periods of radical change.
>
> You are right that our contribution (in the paper I mentioned earlier:
> http://dx.doi.org/10.1038/srep01069) focused more on the distinction
> between endogenous and exogenous change than on that between normal and
> radical change; the latter is an output rather than an input of our
> model. For example, we observe some disciplines emerging and
> "exploding" in popularity, just as we find in the empirical data.
>
> Our point was to see how much one could predict or explain (in a
> quantitative sense) the empirical data about the evolution of
> disciplines (and their relationship to authors and papers) under the
> assumption that endogenous (social) interactions are the main (in our
> model, the only) drivers of the dynamics of science. The key
> contribution is the empirical validation of the model against data (in
> our case, three large-scale data sets); the results suggest that the
> model is quite successful, and therefore that the assumption holds, to
> the extent of the accuracy of our predictions.
>
> So, yes, one could definitely argue that exogenous changes exist (I
> believe it). But if one wants to argue that such changes are
> *necessary* to explain the evolution of science, one has to test such
> assumptions against empirical data, and show that they generate better
> quantitative predictions/explanations of the data, compared to a
> simpler model without exogenous events.
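>
> Purely to illustrate that comparison logic (this is not the model from
> our paper; the "observed" counts, growth rates, and shock parameters
> below are invented), a toy sketch in Python might look like this:
>
> import math
>
> # Hypothetical yearly paper counts for one emerging discipline (invented).
> observed = [10, 14, 19, 27, 38, 52, 120, 180, 260, 350]
>
> def simulate(growth, shock=0.0, shock_year=6, years=10, start=10):
>     """Endogenous growth proportional to current size, plus an optional
>     one-off exogenous boost in shock_year."""
>     counts, n = [], float(start)
>     for t in range(years):
>         counts.append(n)
>         n *= 1.0 + growth
>         if t + 1 == shock_year:
>             n *= 1.0 + shock
>     return counts
>
> def sse(params):
>     # squared error between a simulated trajectory and the observed counts
>     return sum((p - o) ** 2 for p, o in zip(simulate(**params), observed))
>
> # Brute-force fits over small parameter grids.
> endo_grid = [{"growth": g / 100} for g in range(5, 80)]
> exo_grid = [{"growth": g / 100, "shock": s / 10}
>             for g in range(5, 80) for s in range(0, 30)]
> endo_best = min(endo_grid, key=sse)
> exo_best = min(exo_grid, key=sse)
>
> # AIC-style penalty: the exogenous model has one extra parameter, so it
> # must predict substantially better, not just marginally, to be preferred.
> n = len(observed)
> aic_endo = n * math.log(sse(endo_best) / n) + 2 * 1
> aic_exo = n * math.log(sse(exo_best) / n) + 2 * 2
> print("endogenous only:", endo_best, "AIC:", round(aic_endo, 1))
> print("with exogenous shock:", exo_best, "AIC:", round(aic_exo, 1))
> print("exogenous events needed?", aic_exo < aic_endo)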
>
> Cheers,
> -Fil
>
> P.S. As a footnote, I find it exciting that we have people with
> diverse backgrounds contributing to this debate. I am a newcomer in
> this area; our group's background is a mix of physical, computing, and
> information sciences. We are particularly interested in quantitative
> models to be empirically validated against large scale data across
> disciplines, rather than against particular case studies or examples.
> But we're learning a lot from all the different contributions in this
> list. Thanks for the feedback!
>
> Filippo Menczer
> Professor of Informatics and Computer Science
> Director, Center for Complex Networks and Systems Research
> Indiana University, Bloomington
> http://cnets.indiana.edu/people/filippo-menczer
>
>
> On Mon, Jul 29, 2013 at 1:13 AM, Loet Leydesdorff <loet at leydesdorff.net>
> wrote:
> >
> > Dear David,
> >
> >
> >
> > “Understanding,” indeed, is always a first goal. When studying complex
> > systems, however, one risks focusing on the specificities of each case
> > and thus merely specifying variation. In a next step, the understanding
> > can be used to specify selection mechanisms that can be tested on other
> > case materials or against the whole database after upscaling.
> >
> >
> >
> > For example, is concurrency of competing research programs a necessary
> > condition? Does a paradigm change lead to auto-catalytic growth that
> > overshadows other research programs – let’s say after ten years? Or
> > does it more often lead to differentiation within specialties?
> >
> >
> >
> > Perhaps I should not have used the word “prediction” in this more
> > technical sense of statistical testing. Let’s say: the specification of
> > an expectation. It seems to me that many contributions to this
> > discussion went in this direction.
> >
> >
> >
> > Selection is deterministic (unlike variation) and can therefore be
> > tested. Preferential attachment, for example, can be considered a
> > possible selection mechanism.
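> >
> > As a minimal sketch of what such a testable mechanism looks like
> > (illustrative only, not taken from any of the studies discussed here),
> > one can simulate attachment proportional to degree and inspect the
> > resulting, highly skewed degree distribution:
> >
> > import random
> > from collections import Counter
> >
> > random.seed(42)
> > degrees = [1, 1]  # two initial, mutually linked nodes
> > for _ in range(5000):
> >     # selection: attach to an existing node with probability
> >     # proportional to its current degree ("rich get richer")
> >     target = random.choices(range(len(degrees)), weights=degrees)[0]
> >     degrees[target] += 1
> >     degrees.append(1)
> >
> > # the selection mechanism yields a testable prediction: a skewed,
> > # power-law-like degree distribution, unlike uniform random attachment
> > dist = Counter(degrees)
> > for k in sorted(dist)[:8]:
> >     print(f"degree {k}: {dist[k]} nodes")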
> >
> >
> >
> > Best,
> >
> > Loet
> >
> >
> >
> > ________________________________
> >
> > Loet Leydesdorff
> >
> > Professor, University of Amsterdam
> > Amsterdam School of Communications Research (ASCoR)
> >
> > Kloveniersburgwal 48, 1012 CX Amsterdam
> > loet at leydesdorff.net ; http://www.leydesdorff.net/
> > Honorary Professor, SPRU, University of Sussex; Visiting Professor,
> > ISTIC, Beijing;
> > http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
> >
> >
> >
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David Wojick
> > Sent: Sunday, July 28, 2013 8:29 PM
> >
> >
> > To: SIGMETRICS at LISTSERV.UTK.EDU
> > Subject: Re: [SIGMETRICS] Paper on scientometrics
> >
> >
> >
> >
> > Dear Loet,
> >
> > Yes, we often know a revolution when we see one, but that is not the
> > same as having an operational definition that lets us individuate them.
> > We cannot say, for example, how many revolutions occurred in discipline
> > d during period t. It is very hard to do meaningful empirical analyses
> > of things we cannot even count. Thus I think that talk of prediction
> > assumes a level of understanding that we do not have. Understanding is
> > the goal, in my view.
> >
> > David
> >
> > At 01:15 PM 7/28/2013, you wrote:
> >
> > Dear David and colleagues,
> >
> > One basic problem is that we do not have an agreed-upon operational
> > definition of revolution. So if we are measuring different things under
> > the same name, we may get differing results that do not actually
> > disagree.
> >
> > Although we don’t have such a definition, it is not so difficult to
> > point ex post to instances that have provided breakthroughs and led to
> > the development of new specialties. For example, “oncogene” in 1988,
> > “RNA interference” in 1998, high-temperature superconductivity in
> > 1987(?), etc.
> >
> > It seems to me that there are two main questions that should not be
> > confused:
> >
> > 1. Is it possible to predict such breakthroughs in terms of a specific
> > set of conditions? The notion of a void (as Chaomei named it) seems
> > relevant here: structural holes; synergies among redundant research
> > programs, etc.
> >
> > 2. Ex post: early-warning indicators, upscaling conditions, etc. For
> > example, in the case of RNA interference we hypothesized that
> > preferential attachment is first with the initial inventors, but that
> > the system then globalizes and one preferentially attaches with world
> > centers of excellence (in Boston, London, or Seoul) (Leydesdorff &
> > Rafols, 2011).
> >
> > In my opinion, the problem is that one can study these cases and derive
> > hypotheses, but that during the upscaling one fails to develop
> > predictors from them. For example, we found an entropy measure for new
> > developments (Leydesdorff et al., 1994), but it did not work for
> > prediction at the level of the file of aggregated journal-journal
> > citations. Ron Kostoff’s tomography was another idea that eventually
> > did not lead us to the prediction of emerging fields (Leydesdorff,
> > 2002).
> >
> > I mean to say: if one finds, for example, that an important new
> > development leads to a new citation structure, is it then also possible
> > to scan the database for such structures in order to find new
> > developments?
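> >
> > As a rough sketch of this scanning idea (not the procedure of the 1994
> > or 2002 papers; the citation counts and the threshold are hypothetical),
> > one could compute the entropy of each journal's citation distribution
> > per year and flag large jumps:
> >
> > import math
> >
> > def shannon_entropy(counts):
> >     total = sum(counts)
> >     return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)
> >
> > # hypothetical citations from one journal to other journals, per year
> > citations_by_year = {
> >     2000: [50, 30, 10, 5],
> >     2001: [48, 33, 12, 6],
> >     2002: [20, 15, 10, 60, 40, 30],  # new cited journals; distribution flattens
> > }
> >
> > previous = None
> > for year in sorted(citations_by_year):
> >     h = shannon_entropy(citations_by_year[year])
> >     flag = previous is not None and abs(h - previous) > 0.5  # arbitrary cutoff
> >     note = "  <- possible structural change" if flag else ""
> >     print(f"{year}: entropy {h:.2f}{note}")
> >     previous = h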
> >
> > Best,
> > Loet
> >
> > References:
> > - Loet Leydesdorff, Susan E. Cozzens, and Peter Van den Besselaar,
> > Tracking Areas of Strategic Importance using Scientometric Journal
> > Mappings, Research Policy 23 (1994) 217-229.
> > - Loet Leydesdorff, Indicators of Structural Change in the Dynamics of
> > Science: Entropy Statistics of the SCI Journal Citation Reports,
> > Scientometrics 53(1) (2002) 131-159.
> > - Loet Leydesdorff & Ismael Rafols, How do emerging technologies
> > conquer the world? An exploration of patterns of diffusion and network
> > formation, Journal of the American Society for Information Science and
> > Technology 62(5) (2011) 846-860.
> >
>
>


-- 
Professor, University of Amsterdam
Amsterdam School of Communications Research (ASCoR)
Honorary Professor, SPRU, University of Sussex (http://www.sussex.ac.uk/spru/);
Visiting Professor, ISTIC, Beijing (http://www.istic.ac.cn/Eng/brief_en.html);
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en