From loet at LEYDESDORFF.NET Wed Oct 1 07:23:31 2008
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Wed, 1 Oct 2008 13:23:31 +0200
Subject: FW: Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors
Message-ID:

Fyi. Loet

-----Original Message-----
From: H-NET List on the History of Science, Medicine, and Technology [mailto:H-SCI-MED-TECH at H-NET.MSU.EDU] On Behalf Of Christophe Lecuyer (h-sci-med-tech)
Sent: 30 September 2008 11:33 PM

From: Finn Arne Jørgensen
Date: Tue, September 30, 2008 2:31 pm

Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors

We live in an age of metrics. All around us, things are being standardized, quantified, measured. Scholars concerned with the work of science and technology must regard this as a fascinating and crucial practical, cultural and intellectual phenomenon. Analysis of the roots and meaning of metrics and metrology has been a preoccupation of much of the best work in our field for the past quarter century at least. As practitioners of the interconnected disciplines that make up the field of science studies we understand how significant, contingent and uncertain can be the process of rendering nature and society in grades, classes and numbers. We now confront a situation in which our own research work is being subjected to putatively precise accountancy by arbitrary and unaccountable agencies. Some may already be aware of the proposed European Reference Index for the Humanities (ERIH), an initiative originating with the European Science Foundation. The ERIH is an attempt to grade journals in the humanities - including "history and philosophy of science". The initiative proposes a league table of academic journals, with premier, second and third divisions.
According to the European Science Foundation, ERIH "aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages". It is hoped "that ERIH will form the backbone of a fully-fledged research information system for the Humanities". What is meant, however, is that ERIH will provide funding bodies and other agencies in Europe and elsewhere with an allegedly exact measure of research quality. In short, if research is published in a premier league journal it will be recognized as first rate; if it appears somewhere in the lower divisions, it will be rated (and not funded) accordingly.

This initiative is entirely defective in conception and execution. Consider the major issues of accountability and transparency. The process of producing the graded list of journals in science studies was overseen by a committee of four (the membership is currently listed at http://www.esf.org/research-areas/humanities/research-infrastructures-including-erih/erih-governance-and-panels/erih-expert-panels.html). This committee cannot be considered representative. It was not selected in consultation with any of the various disciplinary organizations that currently represent our field such as the European Association for the History of Medicine and Health, the Society for the Social History of Medicine, the British Society for the History of Science, the History of Science Society, the Philosophy of Science Association, the Society for the History of Technology or the Society for Social Studies of Science. Journal editors were only belatedly informed of the process and its relevant criteria or asked to provide any information regarding their publications. No indication was given of the means through which the list was compiled; nor how it might be maintained in the future.
The ERIH depends on a fundamental misunderstanding of conduct and publication of research in our field, and in the humanities in general. Journals' quality cannot be separated from their contents and their review processes. Great research may be published anywhere and in any language. Truly ground-breaking work may be more likely to appear from marginal, dissident or unexpected sources, rather than from a well-established and entrenched mainstream journal. Our journals are various, heterogeneous and distinct. Some are aimed at a broad, general and international readership, others are more specialized in their content and implied audience. Their scope and readership say nothing about the quality of their intellectual content. The ERIH, on the other hand, confuses internationality with quality in a way that is particularly prejudicial to specialist and non-English language journals.

In a recent report, the British Academy, with judicious understatement, concludes that "the European Reference Index for the Humanities as presently conceived does not represent a reliable way in which metrics of peer-reviewed publications can be constructed" (Peer Review: the Challenges for the Humanities and Social Sciences, September 2007: http://www.britac.ac.uk/reports/peer-review). Such exercises as ERIH can become self-fulfilling prophecies. If such measures as ERIH are adopted as metrics by funding and other agencies, then many in our field will conclude that they have little choice other than to limit their publications to journals in the premier division. We will sustain fewer journals, much less diversity and impoverish our discipline. Along with many others in our field, this Journal has concluded that we want no part of this dangerous and misguided exercise.
This joint Editorial is being published in journals across the fields of history of science and science studies as an expression of our collective dissent and our refusal to allow our field to be managed and appraised in this fashion. We have asked the compilers of the ERIH to remove our journals' titles from their lists.

Hanne Andersen (Centaurus)
Roger Ariew & Moti Feingold (Perspectives on Science)
A. K. Bag (Indian Journal of History of Science)
June Barrow-Green & Benno van Dalen (Historia Mathematica)
Keith Benson (History and Philosophy of the Life Sciences)
Marco Beretta (Nuncius)
Michel Blay (Revue d'Histoire des Sciences)
Cornelius Borck (Berichte zur Wissenschaftsgeschichte)
Geof Bowker and Susan Leigh Star (Science, Technology and Human Values)
Massimo Bucciantini & Michele Camerota (Galilaeana: Journal of Galilean Studies)
Jed Buchwald and Jeremy Gray (Archive for History of Exact Sciences)
Vincenzo Cappelletti & Guido Cimino (Physis)
Roger Cline (International Journal for the History of Engineering & Technology)
Stephen Clucas & Stephen Gaukroger (Intellectual History Review)
Hal Cook & Anne Hardy (Medical History)
Leo Corry, Alexandre Métraux & Jürgen Renn (Science in Context)
D. Dieks & J. Uffink (Studies in History and Philosophy of Modern Physics)
Brian Dolan & Bill Luckin (Social History of Medicine)
Hilmar Duerbeck & Wayne Orchiston (Journal of Astronomical History & Heritage)
Moritz Epple, Mikael Härd, Hans-Jörg Rheinberger & Volker Roelcke (NTM: Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin)
Steven French (Metascience)
Willem Hackmann (Bulletin of the Scientific Instrument Society)
Bosse Holmqvist (Lychnos)
Paul Farber (Journal of the History of Biology)
Mary Fissell & Randall Packard (Bulletin of the History of Medicine)
Robert Fox (Notes & Records of the Royal Society)
Jim Good (History of the Human Sciences)
Michael Hoskin (Journal for the History of Astronomy)
Ian Inkster (History of Technology)
Marina Frasca Spada (Studies in History and Philosophy of Science)
Nick Jardine (Studies in History and Philosophy of Biological and Biomedical Sciences)
Trevor Levere (Annals of Science)
Bernard Lightman (Isis)
Christoph Lüthy (Early Science and Medicine)
Michael Lynch (Social Studies of Science)
Stephen McCluskey & Clive Ruggles (Archaeoastronomy: the Journal of Astronomy in Culture)
Peter Morris (Ambix)
E. Charles Nelson (Archives of Natural History)
Ian Nicholson (Journal of the History of the Behavioural Sciences)
Iwan Rhys Morus (History of Science)
John Rigden & Roger H. Stuewer (Physics in Perspective)
Simon Schaffer (British Journal for the History of Science)
Paul Unschuld (Sudhoffs Archiv)
Peter Weingart (Minerva)
Stefan Zamecki (Kwartalnik Historii Nauki i Techniki)

--
H-SCI-MED-TECH
The H-Net list for the History of Science, Medicine and Technology
Email address for postings: h-sci-med-tech at h-net.msu.edu
Homepage: http://www.h-net.org/~smt/
To unsubscribe or change your subscription options, please use the Web Interface: http://www.h-net.org/lists/manage.cgi

From dwojick at HUGHES.NET Wed Oct 1 14:08:05 2008
From: dwojick at HUGHES.NET (David E. Wojick)
Date: Wed, 1 Oct 2008 14:08:05 -0400
Subject: FW: Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors
In-Reply-To: <001f01c923b8$230a9320$6402a8c0@loet>
Message-ID:

An interesting fight. They start off by suggesting that the new grading system is based on some sort of metric but never say what it is. Rather, they attack the committee as though the rankings are subjective. They also claim, without evidence, that funding of research will be based on these journal rankings. But if the rankings are well founded, then perhaps funding decisions should be influenced by them.

All in all it sounds like they just do not want to be measured. No one does, but it is often important to do so.
David Wojick

> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> [quoted editorial snipped]

--
"David E. Wojick, PhD" Senior Consultant for Innovation
Office of Scientific and Technical Information, US Department of Energy
http://www.osti.gov/innovation/
391 Flickertail Lane, Star Tannery, VA 22654 USA 540-858-3136
http://www.bydesign.com/powervision/resume.html provides my bio and past client list.
http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information structure and dynamics.

From Chris.Armbruster at EUI.EU Wed Oct 1 15:47:15 2008
From: Chris.Armbruster at EUI.EU (Armbruster, Chris)
Date: Wed, 1 Oct 2008 21:47:15 +0200
Subject: FW: Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors
Message-ID:

If I am not mistaken, then ERIH is based on peer judgement, not metrics. National research councils were asked to nominate experts. Because it was a European project, some form of "national proportional representation" was maintained. Furthermore, ERIH seems a response to the rise of research evaluation and the 'feeling' that the Humanities must also offer something.
Initially, the ERIH A, B and C classification was not meant as a ranking, but as a differentiation that was meant to value category C as a collection of important regional and national journals. But this is not how it turned out: A, B and C is understood as a ranking. Interestingly, ISI has shown that there is a 66% overlap between the ISI list and category A, but little overlap with categories B and C. On the one hand, ERIH hands power to editors and publishers (one would expect steep price rises for category A). On the other hand, ERIH offers some sort of guidance in terms of the quality of journals...

Maybe the main problem with ERIH is that it is so rudimentary.

Chris Armbruster

-----Original Message-----
From: ASIS&T Special Interest Group on Metrics on behalf of David E. Wojick
Sent: Wed 10/1/2008 20:08
To: SIGMETRICS at listserv.utk.edu
Subject: Re: [SIGMETRICS] FW: Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors

[quoted reply and editorial snipped]

From loet at LEYDESDORFF.NET Wed Oct 1 17:04:28 2008
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Wed, 1 Oct 2008 23:04:28 +0200
Subject: FW: Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors
In-Reply-To: <454C4EFF24E347449521ABDC1B63025D0367A977@MAILSRV1.iue.private>
Message-ID:

Is 66% reliability convincing? It is not Russian roulette, but not nice in terms of the odds if tenure decisions are based on it.

Best, Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Armbruster, Chris > Sent: Wednesday, October 01, 2008 9:47 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] FW: Journals under Threat: A Joint > Response from History of Science, Technology and Medicine Editors > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > If I am not mistaken, then ERIH is based on peer judgement, > not metrics. National research councils were asked to > nominate experts. Because it was a European project, some > form of "national proportional representation" was > maintained. Furthermore, ERIH seems a response to the rise of > research evaluation and the 'feeling' that the Humanities > must also offer something. Initially, the ERIH A, B and C > classification was not meant as a ranking, but as a > differentiation that was meant to value category C as a > collection of important regional and national journals. But > this is not how it turned out. A, B and C is understood as > ranking. Interestingly, ISI has shown that there is a 66% > overlap between the ISI list and category A, but little > overlap with category B and C. On the one hand, ERIH hands > power to editors and publishers (one would expect steep price > rises for category A). On the other hand, ERIH offers some > sort of guidance in terms of the quality of journals... > > Maybe the main problem with ERIH is that it so rudimentary..... > > Chris Armbruster > > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics on behalf of > David E. 
Wojick > Sent: Wed 10/1/2008 20:08 > To: SIGMETRICS at listserv.utk.edu > Subject: Re: [SIGMETRICS] FW: Journals under Threat: A Joint > Response from History of Science, Technology and Medicine Editors > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > An interesting fight. They start off by suggesting that the > new grading system is based on some sort of metiric but never > say what it is. Rather they attack the committee as though > the rankings are subjective. They also claim that funding of > research will be based on these journal rankings, without > evidence. But if the rankings are well founded then perhaps > funding decisions should be influenced by them. > > All in all it sounds like they just do not want to be > measured. No one does but it is often important to do so. > > David Wojick > > >Adminstrative info for SIGMETRICS (for example unsubscribe): > >http://web.utk.edu/~gwhitney/sigmetrics.html > > > >Fyi. Loet > > > >-----Original Message----- > >From: H-NET List on the History of Science, Medicine, and Technology > >[mailto:H-SCI-MED-TECH at H-NET.MSU.EDU] On Behalf Of Christophe Lecuyer > >(h-sci-med-tech) > >Sent: 30 September 2008 11:33 PM > > > >From: Finn Arne J?rgensen > >Date: Tue, September 30, 2008 2:31 pm > > > >Journals under Threat: A Joint Response from History of Science, > >Technology and Medicine Editors > > > >We live in an age of metrics. All around us, things are being > >standardized, quantified, measured. Scholars concerned with > the work of > >science and technology must regard this as a fascinating and crucial > >practical, cultural and intellectual phenomenon. Analysis > of the roots > >and meaning of metrics and metrology has been a > preoccupation of much of > >the best work in our field for the past quarter century at least. 
As > >practitioners of the interconnected disciplines that make up > the field of > >science studies we understand how significant, contingent > and uncertain > >can be the process of rendering nature and society in > grades, classes and > >numbers. We now confront a situation in which our own > research work is > >being subjected to putatively precise accountancy by arbitrary and > >unaccountable agencies. Some may already be aware of the > proposed European > >Reference Index for the Humanities (ERIH), an initiative > originating with > >the European Science Foundation. The ERIH is an attempt to > grade journals > >in the humanities - including "history and philosophy of > science". The > >initiative proposes a league table of academic journals, > with premier, > >second and third divisions. According to the European > Science Foundation, > >ERIH "aims initially to identify, and gain more visibility for, > >top-quality European Humanities research published in > academic journals > >in, potentially, all European languages". It is hoped "that > ERIH will form > >the backbone of a fully-fledged research information system for the > >Humanities". What is meant, however, is that ERIH will > provide funding > >bodies and other agencies in Europe and elsewhere with an > allegedly exact > >measure of research quality. In short, if research is published in a > >premier league journal it will be recognized as first rate; > if it appears > >somewhere in the lower divisions, it will be rated (and not funded) > >accordingly. This initiative is entirely defective in > conception and > >execution. Consider the major issues of accountability and > transparency. > >The process of producing the graded list of journals in > science studies > >was overseen by a committee of four (the membership is > >currently listed at > http://www.esf.org/research-areas/humanities/research-infrastructures-including-erih/erih-governance-and-panels/erih-expert-panels.html).
This committee cannot be considered > representative. It was > >not selected in consultation with any of the various disciplinary > >organizations that currently represent our field such as the European > >Association for the History of Medicine and Health, the > Society for the > >Social History of Medicine, the British Society for the History of > >Science, the History of Science Society, the Philosophy of Science > >Association, the Society for the History of Technology or > the Society for > >Social Studies of Science. Journal editors were only > belatedly informed of > >the process and its relevant criteria or asked to provide > any information > >regarding their publications. No indication was given of the > means through > >which the list was compiled; nor how it might be maintained in the > >future. The ERIH depends on a fundamental misunderstanding > of conduct and > >publication of research in our field, and in the humanities > in general. > >Journals' quality cannot be separated from their contents > and their review > >processes. Great research may be published anywhere and in > any language. > >Truly ground-breaking work may be more likely to appear from > marginal, > >dissident or unexpected sources, rather than from a > well-established and > >entrenched mainstream journal. Our journals are various, > heterogeneous and > >distinct. Some are aimed at a broad, general and > international readership, > >others are more specialized in their content and implied > audience. Their > >scope and readership say nothing about the quality of their > intellectual > >content. The ERIH, on the other hand, confuses internationality with > >quality in a way that is particularly prejudicial to specialist and > >non-English language journals. 
In a recent report, the > British Academy, > >with judicious understatement, concludes that "the European Reference > >Index for the Humanities as presently conceived does not represent a > >reliable way in which metrics of peer-reviewed publications can be > >constructed" (Peer Review: the Challenges for the Humanities > and Social > >Sciences, September 2007: > http://www.britac.ac.uk/reports/peer-review). > >Such exercises as ERIH can become self-fulfilling > prophecies. If such > >measures as ERIH are adopted as metrics by funding and other > agencies, > >then many in our field will conclude that they have little > choice other > >than to limit their publications to journals in the premier > division. We > >will sustain fewer journals, much less diversity, and impoverish our > >discipline. Along with many others in our field, this Journal has > >concluded that we want no part of this dangerous and > misguided exercise. > >This joint Editorial is being published in journals across > the fields of > >history of science and science studies as an expression of > our collective > >dissent and our refusal to allow our field to be managed and > appraised in > >this fashion. We have asked the compilers of the ERIH to remove our > >journals' titles from their lists. > > > >Hanne Andersen (Centaurus) > >Roger Ariew & Moti Feingold (Perspectives on Science) > >A. K.
Bag (Indian Journal of History of Science) > >June Barrow-Green & Benno van Dalen (Historia Mathematica) > >Keith Benson (History and Philosophy of the Life Sciences) > >Marco Beretta (Nuncius) > >Michel Blay (Revue d'Histoire des Sciences) > >Cornelius Borck (Berichte zur Wissenschaftsgeschichte) > >Geof Bowker and Susan Leigh Star (Science, Technology and > Human Values) > >Massimo Bucciantini & Michele Camerota (Galilaeana: Journal > of Galilean > >Studies) > >Jed Buchwald and Jeremy Gray (Archive for History of Exact Sciences) > >Vincenzo Cappelletti & Guido Cimino (Physis) > >Roger Cline (International Journal for the History of Engineering & > >Technology) > >Stephen Clucas & Stephen Gaukroger (Intellectual History Review) > >Hal Cook & Anne Hardy (Medical History) > >Leo Corry, Alexandre Métraux & Jürgen Renn (Science in Context) > >D. Dieks & J. Uffink (Studies in History and Philosophy of Modern > >Physics) > >Brian Dolan & Bill Luckin (Social History of Medicine) > >Hilmar Duerbeck & Wayne Orchiston (Journal of Astronomical History & > >Heritage) > >Moritz Epple, Mikael Hård, Hans-Jörg Rheinberger & Volker > Roelcke (NTM: > >Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin) > >Steven French (Metascience) > >Willem Hackmann (Bulletin of the Scientific Instrument Society) > >Bosse Holmqvist (Lychnos) > >Paul Farber (Journal of the History of Biology) > >Mary Fissell & Randall Packard (Bulletin of the History of Medicine) > >Robert Fox (Notes & Records of the Royal Society) > >Jim Good (History of the Human Sciences) > >Michael Hoskin (Journal for the History of Astronomy) > >Ian Inkster (History of Technology) > >Marina Frasca Spada (Studies in History and Philosophy of Science) > >Nick Jardine (Studies in History and Philosophy of Biological and > >Biomedical Sciences) > >Trevor Levere (Annals of Science) > >Bernard Lightman (Isis) > >Christoph Lüthy (Early Science and Medicine) > >Michael Lynch (Social Studies of Science) > >Stephen
McCluskey & Clive Ruggles (Archaeoastronomy: the Journal of > >Astronomy in Culture) > >Peter Morris (Ambix) > >E. Charles Nelson (Archives of Natural History) > >Ian Nicholson (Journal of the History of the Behavioural Sciences) > >Iwan Rhys Morus (History of Science) > >John Rigden & Roger H Stuewer (Physics in Perspective) > >Simon Schaffer (British Journal for the History of Science) > >Paul Unschuld (Sudhoffs Archiv) > >Peter Weingart (Minerva) > >Stefan Zamecki (Kwartalnik Historii Nauki i Techniki) > > > > > > > > > > > >-- > >H-SCI-MED-TECH > >The H-Net list for the History of Science, Medicine and Technology > >Email address for postings: h-sci-med-tech at h-net.msu.edu > >Homepage: http://www.h-net.org/~smt/ > >To unsubscribe or change your subscription options, please use the > >Web Interface: http://www.h-net.org/lists/manage.cgi > > -- > > "David E. Wojick, PhD" > Senior Consultant for Innovation > Office of Scientific and Technical Information > US Department of Energy > http://www.osti.gov/innovation/ > 391 Flickertail Lane, Star Tannery, VA 22654 USA > 540-858-3136 > > http://www.bydesign.com/powervision/resume.html provides my > bio and past client list. > http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information > structure and dynamics. > From ibis.alozano at GMAIL.COM Thu Oct 2 09:51:31 2008 From: ibis.alozano at GMAIL.COM (Ibis Lozano) Date: Thu, 2 Oct 2008 09:51:31 -0400 Subject: No subject Message-ID: From garfield at CODEX.CIS.UPENN.EDU Fri Oct 3 12:21:05 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 3 Oct 2008 12:21:05 -0400 Subject: Smith, DR (Smith, Derek R.) Bibliometrics, dermatology and contact dermatitis CONTACT DERMATITIS, 59 (3): 133-136 2008 Message-ID: Author(s): Smith, DR (Smith, Derek R.)
Title: Bibliometrics, dermatology and contact dermatitis Source: CONTACT DERMATITIS, 59 (3): 133-136 2008 Language: English Document Type: Review Author Keywords: bibliometrics; citation analysis; contact dermatitis; dermatology; impact factor Keywords Plus: OF-INVESTIGATIVE-DERMATOLOGY; IMPACT FACTOR; JOURNAL IMPACT; REFERENCE ACCURACY; CITATION-INDEX; ARTICLES; AUTHORS; PUBLICATIONS; RELEVANCE; HISTORY Abstract: Although the fields of bibliometrics and citation analysis have existed for many years, relatively few studies have specifically focused on the dermatological literature. This article reviews citation-based research in the dermatology journals, with a particular interest in manuscripts that have included Contact Dermatitis as part of their analysis. Overall, it can be seen that the rise of bibliometrics during the mid-20th century and its subsequent application to dermatology has provided an interesting insight into the progression of research within our discipline. Further investigation of citation trends and top-cited papers in skin research periodicals would certainly help complement the current body of knowledge. Addresses: Univ Newcastle, Fac Hlth, Res Ctr Excellence Sch Hlth Sci, Ourimbah, NSW 2258, Australia Reprint Address: Smith, DR, Univ Newcastle, Fac Hlth, Res Ctr Excellence Sch Hlth Sci, Ourimbah, NSW 2258, Australia. 
E-mail Address: derek.smith at newcastle.edu.au Cited Reference Count: 41 Times Cited: 0 Publisher: BLACKWELL PUBLISHING Publisher Address: 9600 GARSINGTON RD, OXFORD OX4 2DQ, OXON, ENGLAND ISSN: 0105-1873 29-char Source Abbrev.: CONTACT DERMATITIS ISO Source Abbrev.: Contact Dermatitis Source Item Page Count: 4 Subject Category: Allergy; Dermatology ISI Document Delivery No.: 341QA CONTACT DERMATITIS E : ALABOUD FM Dermatological publications in the Gulf Cooperation Council countries - An analysis of 1966-2004 Medline papers SAUDI MEDICAL JOURNAL 25 : 1652 2004 ANDERSON PC CONSIDERING EXCELLENCE ARCHIVES OF DERMATOLOGY 129 : 1188 1993 ARNDT KA MY JOURNAL IS SMALLER THAN YOUR JOURNAL AMERICAN JOURNAL OF DERMATOPATHOLOGY 16 : 231 1994 ARNDT KA PEERING AT THE DERMATOLOGY LITERATURE ARCHIVES OF DERMATOLOGY 131 : 602 1995 ARNDT KA INFORMATION EXCESS IN MEDICINE - OVERVIEW, RELEVANCE TO DERMATOLOGY, AND STRATEGIES FOR COPING ARCHIVES OF DERMATOLOGY 128 : 1249 1992 CALNAN CD CONTACT DERMATITIS 1 : 1 1975 DELLAVALLE RP Refining dermatology journal impact factors using PageRank JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY 57 : 116 DOI 10.1016/j.jaad.2007.03.005 2007 DIDIERJEAN X "Editors! - Check your impact factor data!" 
DERMATOLOGY 205 : 327 DOI 10.1159/000067003 2002 DIODATO V DICT BIBLIOMETRICS : 1994 DUBIN D CITATION-CLASSICS IN CLINICAL DERMATOLOGICAL JOURNALS - CITATION ANALYSIS, BIOMEDICAL JOURNALS, AND LANDMARK ARTICLES, 1945-1990 ARCHIVES OF DERMATOLOGY 129 : 1121 1993 DUBIN DB The homelands of top-cited articles ARCHIVES OF DERMATOLOGY 133 : 21 1997 DUBIN DB Organizational impact in the dermatologic literature ARCHIVES OF DERMATOLOGY 132 : 1293 1996 DUBIN DB THE IMPACT OF DERMATOLOGY JOURNALS ARCHIVES OF DERMATOLOGY 131 : 1059 1995 DUNST KM Analysis of original contributions in three dermatology journals JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY 52 : 355 2005 ENK CD Achievements of dermatological research in Denmark and Israel: a comparative 10-year study INTERNATIONAL JOURNAL OF DERMATOLOGY 42 : 398 2003 GARFIELD E The evolution of the Science Citation Index INTERNATIONAL MICROBIOLOGY 10 : 65 DOI 10.2436/20.1501.01.10 2007 GARFIELD E The history and meaning of the journal impact factor JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 295 : 90 2006 GEORGE PM REFERENCE ACCURACY IN THE DERMATOLOGICAL LITERATURE JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY 31 : 61 1994 JELLINEK NJ The clinical influence of the JAAD JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY 50 : 470 DOI 10.1016/j.jaad.2003.08.012 2004 JEMEC GB BMC DERMATOL 1 : 7 2001 JEMEC GBE A bibliometric study of dermatology in central Europe 1991-2002 INTERNATIONAL JOURNAL OF DERMATOLOGY 45 : 922 2006 LEE SY A survey of reference accuracy in two Asian dermatologic journals (the Journal of Dermatology and the Korean Journal of Dermatology) INTERNATIONAL JOURNAL OF DERMATOLOGY 38 : 357 1999 MARKS R The art, the science, and the practice of dermatology in the next millennium INTERNATIONAL JOURNAL OF DERMATOLOGY 38 : 343 1999 MENNE T Statistics and impact factor for Contact Dermatitis 2005 CONTACT DERMATITIS 55 : 129 2006 NGUYEN NQ Authors in dermatologic surgery DERMATOLOGIC SURGERY 26 : 1092 2000 NORRIS DA THE 
SCIENTIFIC CITATION INDEX AND THE-JOURNAL-OF-INVESTIGATIVE-DERMATOLOGY JOURNAL OF INVESTIGATIVE DERMATOLOGY 92 : S147 1989 POTTER BS Bibliographic landmarks in the history of dermatology JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY 48 : 919 DOI 10.1067/mjd.2003.291 2003 REES JL Recent record of the UK to publication in top dermatology journals BRITISH JOURNAL OF DERMATOLOGY 154 : 1016 DOI 10.1111/j.1365- 2133.2006.07221.x 2006 SAURAT JH THE IMPACT FACTOR OF DERMATOLOGY DERMATOLOGY 191 : 362 1995 SAURAT JH 50 YEARS OF CLINICAL RESEARCH IN THE-JOURNAL-OF-INVESTIGATIVE-DERMATOLOGY JOURNAL OF INVESTIGATIVE DERMATOLOGY 92 : S132 1989 SMITH DR Impact factors and contact dermatitis CONTACT DERMATITIS 58 : 191 2008 SMITH DR Historical development of the journal impact and its relevance for occupational health INDUSTRIAL HEALTH 45 : 730 2007 SMOLLER BR Impact factor: certainly a factor, but just whom does it impact? Important lessons from another discipline JOURNAL OF CUTANEOUS PATHOLOGY 33 : 458 2006 STERN RS Top-cited dermatology authors publishing in 5 "high-impact" general medical journals ARCHIVES OF DERMATOLOGY 136 : 357 2000 STERN RS Top cited authors in dermatology - A citation study from 24 journals: 1982- 1996 ARCHIVES OF DERMATOLOGY 135 : 299 1999 STERN RS Classic and near-classic articles in the dermatologic literature ARCHIVES OF DERMATOLOGY 135 : 948 1999 STERN RS Growth of international contributors to dermatologic literature ARCHIVES OF DERMATOLOGY 135 : 1074 1999 SULZBERGER MB J INVEST DERMATOL 92 : S142 1989 VANHOOYDONK G A BIBLIOTHECONOMIC ANALYSIS OF THE IMPACT FACTORS OF SCIENTIFIC DISCIPLINES SCIENTOMETRICS 30 : 65 1994 WILGUS ML Volume, trend and citation analyses of skin related publications from 1966 to 2003 JOURNAL OF DERMATOLOGICAL SCIENCE 37 : 125 DOI 10.1016/j.jdermsci.2004.10.010 2005 From garfield at CODEX.CIS.UPENN.EDU Fri Oct 3 13:33:12 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Fri, 3 Oct 2008 
13:33:12 -0400 Subject: Bravo-Vinaja, A (Bravo-Vinaja, Angel); Sanz-Casado, E (Sanz-Casado, Elias) Bibliometric analysis of the Mexican scientific production in agricultural sciences during the years 1983-2002 REVISTA FITOTECNIA MEXICANA, 31 (3): 187-194 JUL-SEP 2008 Message-ID: E-mail Address: abravo at colpos.mx Author(s): Bravo-Vinaja, A (Bravo-Vinaja, Angel); Sanz-Casado, E (Sanz-Casado, Elias) Title: Bibliometric analysis of the Mexican scientific production in agricultural sciences during the years 1983-2002 Source: REVISTA FITOTECNIA MEXICANA, 31 (3): 187-194 JUL-SEP 2008 Language: Spanish Document Type: Article Author Keywords: Mexico; agricultural sciences; bibliometric analysis; databases; scientific literature Keywords Plus: TECHNOLOGY; REPRODUCTION; RESEARCHERS; IMPACT Abstract: In this research the Mexican scientific production in agricultural sciences is characterized, by means of bibliometric indicators, from the production included in several databases: Agricola, Agris, Cab Abstracts, Tropag & Rural, Science Citation Index (SCI) and Social Science Citation Index (SSCI). In the first four databases, only the address of the corresponding author was available, who usually is the first author. The indicators used here allow us to characterize the evolution of articles published in journals, the national contribution by states and by institutions and sectors, the languages in which the articles are published, the collaboration between authors and the coauthorship index. The scientific production during the evaluated period amounted to 15 736 journal articles. Two states, the Federal District and the State of Mexico, published more than half of the articles, thus showing a high concentration of the national scientific research. The institutions that published the most articles were public universities and institutes or research centers. The languages in which most journal articles were published were English and Spanish.
The average rate of articles signed in coauthorship was 87.62%. The coauthorship index increased from 2.47 authors per article in 1983 to 4.08 in 2002. Addresses: Ctr Documentac & Biblioteca, Montecillo 56230, Texcoco, Mexico; Univ Carlos III Madrid, Dept Bibliotecon & Documentac, Madrid, Spain Reprint Address: Bravo-Vinaja, A, Ctr Documentac & Biblioteca, Carretera Mexico Texcoco, Km 36-5, Montecillo 56230, Texcoco, Mexico. E-mail Address: abravo at colpos.mx Cited Reference Count: 30 Times Cited: 0 Publisher: SOC MEXICANA FITOGENETICA Publisher Address: APARTADO POSTAL NO 21, CHAPINGO, ESTADO MEXICO 56 230, MEXICO ISSN: 0187-7380 29-char Source Abbrev.: REV FITOTEC MEX ISO Source Abbrev.: Rev. Fitotec. Mex. Source Item Page Count: 8 Subject Category: Agronomy; Horticulture ISI Document Delivery No.: 341FK *CONACYT IND ACT CIENT TECN : 1997 *CONACYT INF GEN EST CIENC TE : 2004 *CONACYT INF GEN EST CIENC TE : 2003 *THOM REUT SCI J CITATION R 071 : 2007 ALFARAZ PH Bibliometric study on food science and technology: Scientific production in Iberian-American countries (1991-2000) SCIENTOMETRICS 61 : 89 2004 ANTA E VET MEXICO 20 : 3 1989 ARECHIGA U INVESTIGACION CIENTI : 1995 ARENAS M AN DOCUM 7 : 29 2004 DALESSANDRO E VET MEXICO 31 : 261 2000 DELGADO H IMPACT OF STUDIES PUBLISHED IN THE INTERNATIONAL LITERATURE BY SCIENTISTS AT THE NATIONAL-UNIVERSITY-OF-MEXICO SCIENTOMETRICS 23 : 75 1992 DELICEA AJ AN DOCUM 6 : 145 2003 EKBOIR J ANALISIS SISTEMA MEX : 2003 GAILLARD J IFS IMPACT MEXICO 25 : 2001 GALINA CS The impact of the International Foundation for Science (IFS) funding on Latin American research in animal health and production INTERCIENCIA 25 : 30 2000 GALINA CS WORLD ANIM REV 80 : 3 1994 GONZALEZ UL REV GRAL INF DOCUM 7 : 2001 1997 LOMNITZ LA VETERINARY-MEDICINE AND ANIMAL HUSBANDRY IN MEXICO - FROM EMPIRICISM TO SCIENCE AND TECHNOLOGY MINERVA 32 : 144 1994 LOPEZ JM MED CLIN-BARCELONA 98 : 142 1992 MIRANDE A RESEARCH IN ANIMAL REPRODUCTION - AN ANALYSIS OF THE
CONTRIBUTION MADE BY LATIN-AMERICA THERIOGENOLOGY 28 : 121 1987 NASIR AM BIBLIOMETRIC EVALUATION OF AGRICULTURAL LITERATURE PUBLISHED IN MALAYSIA SCIENTOMETRICS 29 : 191 1994 RUSSELL JM ANIM BREED ABSTR 55 : 819 1987 RUSSELL JM PRODUCTIVITY OF AUTHORS PUBLISHING ON TROPICAL BOVINE REPRODUCTION INTERCIENCIA 13 : 311 1988 RUSSELL JM RESEARCH AND PUBLICATION TRENDS OF A LATIN-AMERICAN VETERINARY FACULTY INTERCIENCIA 12 : 243 1987 RUSSELL JM LIVSTOCK REPROD LATI : 285 1990 RUSSELL JM THE INCREASING ROLE OF INTERNATIONAL-COOPERATION IN SCIENCE AND TECHNOLOGY RESEARCH IN MEXICO SCIENTOMETRICS 34 : 45 1995 RUSSELL JM PATTERNS OF LITERATURE CITATION BY UNDERGRADUATE STUDENTS AND RESEARCHERS IN THE VETERINARY FIELD SCIENTOMETRICS 12 : 73 1987 SAAVEDRA FO REV ESP DOCUM CIENT 25 : 151 2002 SANCHO R REV ESPANOLA DOCUMEN 13 : 842 1990 SANZ CE REV GRAL INF DOCUM 7 : 41 1997 SPINAK E DICCIONARIO ENCICLOP : 1996 From nouruzi at GMAIL.COM Mon Oct 6 08:56:42 2008 From: nouruzi at GMAIL.COM (Alireza Noruzi) Date: Mon, 6 Oct 2008 17:26:42 +0430 Subject: Webology: Volume 5, Number 2, 2008 Message-ID: Dear All, apologies for cross-posting. We are pleased to inform you that Vol. 5, No. 2 of Webology, an OPEN ACCESS journal, is published and available ONLINE now. 
------------------ Webology: Volume 5, Number 2, 2008 TOC: http://www.webology.ir/2008/v5n2/toc.html This issue contains: ----------------------------------------- Editorial - Citation-Linking between Open Access Journals -- Alireza Noruzi -- Keywords: Open access journals; Hyperlink; Citation-linking; Reference-linking -- http://www.webology.ir/2008/v5n2/editorial16.html ----------------------------------------- Articles - Search Engines and Power: A Politics of Online (Mis-) Information -- Elad Segev -- Keywords: Search engines; Information inequalities; Misinformation; Structural biases; Politics of online information; Google Earth -- http://www.webology.ir/2008/v5n2/a54.html - Information retrieval and machine learning: Supporting technologies for web mining research and practice -- MPS Bhatia & Akshi Kumar Khalid -- Keywords: Web Information Retrieval; Machine Learning Paradigms; Web mining -- http://www.webology.ir/2008/v5n2/a55.html - Marketing of library and information services in global era: A current approach -- Basanta Kumar Das & Sanjay Kumar Karn -- Keywords: Library services; Information services; Marketing; Market; Management -- http://www.webology.ir/2008/v5n2/a56.html - Web citation behaviour in scholarly electronic journals in the field of library and information science -- Smt. Veena R. Bhat & B. T. Sampath Kumar -- Keywords: Internet; Electronic Journals; Web-based sources; Web references; Citation analysis; Library and Information Science -- http://www.webology.ir/2008/v5n2/a57.html ----------------------------------------- Book Reviews - American libraries and the Internet: The social construction of Web appropriation and use --- Bin Li --- Book reviewer: Hamid R. Jamali --- http://www.webology.ir/2008/v5n2/bookreview14.html - The human side of reference and information services in academic libraries: Adding value in the digital world --- Lesley S.J. Farmer --- Book reviewer: Hamid R.
Jamali --- http://www.webology.ir/2008/v5n2/bookreview15.html - Evidence-based librarianship: Case studies and active learning exercises --- Elisabeth Connor --- Book reviewer: N. K. Swain --- http://www.webology.ir/2008/v5n2/bookreview16.html ----------------------------------------- Call for Papers -- http://www.webology.ir/callforpapers.html ========================================= Best regards, Alireza Noruzi, Ph.D. Editor-in-Chief of Webology Website: www.webology.ir ~ The great aim of Open Access journals is knowledge sharing.~ From ibis.alozano at GMAIL.COM Mon Oct 6 13:00:06 2008 From: ibis.alozano at GMAIL.COM (Ibis Lozano) Date: Mon, 6 Oct 2008 12:00:06 -0500 Subject: No subject Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From loet at LEYDESDORFF.NET Tue Oct 7 00:47:53 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Tue, 7 Oct 2008 06:47:53 +0200 Subject: The Triple Helix Model: Configurational Information as Potentially Negative Entropy Message-ID: Configurational Information as Potentially Negative Entropy: The Triple Helix Model Entropy 10(4) (2008), 391-420 Abstract: Configurational information is generated when three or more sources of variance interact. The variations not only disturb each other relationally, but by selecting upon each other, they are also positioned in a configuration. A configuration can be stabilized and/or globalized. Different stabilizations can be considered as second-order variation, and globalization as a second-order selection. The positive manifestations and the negative selections operate upon one another by adding and reducing uncertainty, respectively. Reduction of uncertainty in a configuration can be measured in bits of information. The variables can also be considered as dimensions of the probabilistic entropy in the system(s) under study. The configurational information then provides us with a measure of synergy within a complex system. 
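The configurational information described in the abstract above can be made concrete. A minimal sketch in Python (not the paper's own code; the function names and the toy XOR distribution are purely illustrative) of the standard three-way interaction measure T = H_x + H_y + H_z - H_xy - H_xz - H_yz + H_xyz, which is expressed in bits and can be negative:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def config_info(p_xyz):
    """Configurational (interaction) information in bits for a joint
    distribution given as {(x, y, z): probability}.  Under this sign
    convention a negative value indicates a reduction of uncertainty
    (synergy) in the configuration."""
    def h(axes):
        # Entropy of the marginal distribution over the given axes.
        marginal = {}
        for key, p in p_xyz.items():
            k = tuple(key[a] for a in axes)
            marginal[k] = marginal.get(k, 0.0) + p
        return entropy(marginal.values())
    return (h((0,)) + h((1,)) + h((2,))
            - h((0, 1)) - h((0, 2)) - h((1, 2))
            + entropy(p_xyz.values()))

# XOR-like configuration: z = x XOR y with independent uniform inputs.
# Every pair of variables is independent, yet the three together are
# fully constrained, so the measure comes out at -1 bit.
p = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
print(config_info(p))  # -> -1.0
```

The negative value in the XOR case is the "potentially negative entropy" of the title: no pair of variables reduces uncertainty about each other, yet the three-way configuration does.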
For example, the knowledge base of an economy can be considered as such a synergy in the otherwise virtual (that is, fourth) dimension of a regime. Keywords: Information theory; probabilistic entropy; anticipation; triple helix; transmission; configuration; university-industry-government relations; scientometrics; emergence _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam loet at leydesdorff.net ; http://www.leydesdorff.net/ Visiting Professor 2007-2010, ISTIC, Beijing; Honorary Fellow 2007-2010, SPRU, University of Sussex Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated, 385 pp.; US$ 18.95; The Self-Organization of the Knowledge-Based Society ; The Challenge of Scientometrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From leo.egghe at UHASSELT.BE Tue Oct 7 08:20:12 2008 From: leo.egghe at UHASSELT.BE (Leo Egghe) Date: Tue, 7 Oct 2008 14:20:12 +0200 Subject: Journal of Informetrics Wins 2008 ALPSP Award for Best New Journal Message-ID: Journal of Informetrics Wins 2008 ALPSP Award for Best New Journal Cutting Edge Journal of Informetrics Wins Accolades and "Expands Boundaries" Amsterdam, September 17th, 2008 - Elsevier is delighted to announce that the Journal of Informetrics (JOI) is the winner of the 2008 Association of Learned and Professional Society Publishers (ALPSP) award for Best New Journal. The distinguished ALPSP award is open to journals launched within the past 3 years and considers a range of attributes including high quality peer reviewed articles. Edited by Leo Egghe, JOI is in its second year of publication and has published a wide range of high quality articles from leading authors across all fields of information science, several of which have policy implications for research evaluation.
JOI has been accepted by ISI/Thomson for inclusion in the Social Science Citation Index and will receive its first Impact Factor in 2009. Over 3,700 institutions worldwide have online access to the full text of the journal and it is included within the UN HINARI, AGORA and OARE initiatives, providing free and low cost access to the journal in developing countries. JOI is also currently experimenting with new e-tools including links to the 2Collab social network (http://www.2collab.com/), the ability to publish supplementary data sets and models, and engaging with ontologies for the advancement of understanding in the fields the journal serves. According to Eugene Garfield, Chairman Emeritus ISI, President The Scientist LLC, Past President ASIS&T, "By lowering the barriers between these fields, expanding the boundaries of bibliometric research, and contributing to the increased degree of 'exactness' of informetrics, this journal will make a valuable contribution to the research community." Diane Cogan, Elsevier Publishing Director, remarked, "Given the increased focus on measurement, the creation of the H and G indices, and sustained growth in articles, JOI represents an invaluable home for leading research in this field." Commenting on the honor, Tony Roche, Elsevier Publisher of JOI noted, "This award is especially pleasing as Elsevier has worked hard to develop a journal which meets a real need within the research community." # # # About Journal of Informetrics The mission of JOI is to focus upon fundamental quantitative aspects of information science, "hardening" the field and lowering barriers between neighboring disciplines. To date, the journal has published a stream of high quality articles, several of which may have policy implications for research evaluation. The journal, although limited to metrics aspects, has a broad scope: in principle, all quantitative analyses of original problems in information science are within the scope of JOI.
For more information visit: www.elsevier.com/locate/joi About Elsevier Elsevier is a world-leading publisher of scientific, technical and medical information products and services. Working in partnership with the global science and health communities, Elsevier's 7,000 employees in over 70 offices worldwide publish more than 2,000 journals and 1,900 new books per year, in addition to offering a suite of innovative electronic products, such as ScienceDirect, MD Consult, Scopus, bibliographic databases, and online reference works. Elsevier is a global business headquartered in Amsterdam, The Netherlands and has offices worldwide. Elsevier is part of Reed Elsevier Group plc, a world-leading publisher and information provider. Operating in the science and medical, legal, education and business-to-business sectors, Reed Elsevier provides high-quality and flexible information solutions to users, with increasing emphasis on the Internet as a means of delivery. Reed Elsevier's ticker symbols are REN (Euronext Amsterdam), REL (London Stock Exchange), RUK and ENL (New York Stock Exchange). Press release: Date Sept 19th 2008 Contact Tony Roche, Publisher Phone +44 1865 84 3471 E-mail t.roche at elsevier.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3660 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Journal of Informetrics PR_final.doc Type: application/msword Size: 62976 bytes Desc: not available URL: From geisler at STUART.IIT.EDU Tue Oct 7 09:25:56 2008 From: geisler at STUART.IIT.EDU (Eliezer Geisler) Date: Tue, 7 Oct 2008 08:25:56 -0500 Subject: Journal of Informetrics Wins 2008 ALPSP Award for Best New Journal Message-ID: Dear Leo: Hearty congratulations on this great achievement!
With warm regards, Elie _______________________________ Elie Geisler Distinguished Professor Director, CMMT Stuart School of Business Illinois Institute of Technology 565 W. Adams Street Chicago, IL 60661 Tel: (312) 906-6532 ---------- Original Message ---------------------------------- From: Leo Egghe Reply-To: ASIS&T Special Interest Group on Metrics Date: Tue, 7 Oct 2008 14:20:12 +0200 >Administrative info for SIGMETRICS (for example unsubscribe): >http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > >Journal of Informetrics Wins 2008 ALPSP Award for Best New Journal > > > > >Cutting Edge Journal of Informetrics Wins Accolades and > > >"Expands Boundaries" > > > > > > >Amsterdam, September 17th, 2008 - Elsevier is delighted to announce that the >Journal of Informetrics (JOI) is the winner of the 2008 Association of >Learned and Professional Society Publishers (ALPSP) award for Best New >Journal. The distinguished ALPSP award is open to journals launched within >the past 3 years and considers a range of attributes including high quality >peer reviewed articles. > > > >Edited by Leo Egghe, JOI is in its second year of publication and has >published a wide range of high quality articles from leading authors across >all fields of information science, several of which have policy implications >for research evaluation. JOI has been accepted by ISI/Thomson for inclusion >in the Social Science Citation Index and will receive its first Impact >Factor in 2009. > > > >Over 3,700 institutions worldwide have online access to the full text of the >journal and it is included within the UN HINARI, AGORA and OARE initiatives, >providing free and low cost access to the journal in developing countries.
>JOI is also currently experimenting with new e-tools including links to the >2Collab social network (http://www.2collab.com/), the ability to publish >supplementary data sets and models, and engaging with ontologies for the >advancement of understanding in the fields the journal serves. > >According to Eugene Garfield, Chairman Emeritus ISI, President The Scientist >LLC, Past President ASIS&T, "By lowering the barriers between these fields, >expanding the boundaries of bibliometric research, and contributing to the >increased degree of 'exactness' of informetrics, this journal will make a >valuable contribution to the research community." > >Diane Cogan, Elsevier Publishing Director, remarked, "Given the increased >focus on measurement, the creation of the H and G indices, and sustained >growth in articles, JOI represents an invaluable home for leading research >in this field." > >Commenting on the honor, Tony Roche, Elsevier Publisher of JOI noted, "This >award is especially pleasing as Elsevier has worked hard to develop a >journal which meets a real need within the research community." > ># # # > >About Journal of Informetrics > >The mission of JOI is to focus upon fundamental quantitative aspects of >information science, "hardening" the field and lowering barriers between >neighboring disciplines. To date, the journal has published a stream of high >quality articles, several of which may have policy implications for research >evaluation. The journal, although limited to metrics aspects, has a broad >scope: in principle, all quantitative analyses of original problems in >information science are within the scope of JOI. For more information visit: www.elsevier.com/locate/joi > >About Elsevier > >Elsevier is a world-leading publisher of scientific, technical and medical >information products and services.
Working in partnership with the global >science and health communities, Elsevier's 7,000 employees in over 70 >offices worldwide publish more than 2,000 journals and 1,900 new books per >year, in addition to offering a suite of innovative electronic products, >such as ScienceDirect, MD Consult, Scopus, bibliographic databases, >and online reference works. > >Elsevier is a global business headquartered in Amsterdam, The Netherlands >and has offices worldwide. Elsevier is part of Reed Elsevier Group plc, a >world-leading publisher and information provider. Operating in the science >and medical, legal, education and business-to-business sectors, Reed >Elsevier provides high-quality and flexible information solutions to users, >with increasing emphasis on the Internet as a means of delivery. Reed >Elsevier's ticker symbols are REN (Euronext Amsterdam), REL (London Stock >Exchange), RUK and ENL (New York Stock Exchange). > >Press release: > >Date Sept 19th 2008 > >Contact Tony Roche, Publisher > >Phone +44 1865 84 3471 > >E-mail t.roche at elsevier.com From garfield at CODEX.CIS.UPENN.EDU Tue Oct 7 13:40:10 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 7 Oct 2008 13:40:10 -0400 Subject: Eysenbach, G (Eysenbach, Gunther) Citation advantage of open access articles PLOS BIOLOGY, 4 (5): 692-698 MAY 2006 Message-ID: E-mail Address: geysenba at uhnres.utoronto.ca Author(s): Eysenbach, G (Eysenbach, Gunther) Title: Citation advantage of open access articles Source: PLOS BIOLOGY, 4 (5): 692-698 MAY 2006 Language: English Document Type: Article Keywords Plus: IMPACT; PNAS Abstract: Open access (OA) to the research literature has the potential to accelerate recognition and dissemination of research findings, but its actual effects are controversial.
This was a longitudinal bibliometric analysis of a cohort of OA and non-OA articles published between June 8, 2004, and December 20, 2004, in the same journal (PNAS: Proceedings of the National Academy of Sciences). Article characteristics were extracted, and citation data were compared between the two groups at three different points in time: at "quasi-baseline" (December 2004, 0-6 mo after publication), in April 2005 (4-10 mo after publication), and in October 2005 (10-16 mo after publication). Potentially confounding variables, including number of authors, authors' lifetime publication count and impact, submission track, country of corresponding author, funding organization, and discipline, were adjusted for in logistic and linear multiple regression models. A total of 1,492 original research articles were analyzed: 212 (14.2% of all articles) were OA articles paid by the author, and 1,280 (85.8%) were non-OA articles. In April 2005 (mean 206 d after publication), 627 (49.0%) of the non-OA articles versus 78 (36.8%) of the OA articles were not cited (relative risk = 1.3 [95% Confidence Interval: 1.1-1.6]; p = 0.001). Six months later (mean 288 d after publication), non-OA articles were still more likely to be uncited (non-OA: 172 [13.6%], OA: 11 [5.2%]; relative risk = 2.6 [1.4-4.7]; p < 0.001). The average number of citations of OA articles was higher compared to non-OA articles (April 2005: 1.5 [SD = 2.5] versus 1.2 [SD = 2.0]; Z = 3.123; p = 0.002; October 2005: 6.4 [SD = 10.4] versus 4.5 [SD = 4.9]; Z = 4.058; p < 0.001). In a logistic regression model, controlling for potential confounders, OA articles compared to non-OA articles remained twice as likely to be cited (odds ratio = 2.1 [1.5-2.9]) in the first 4-10 mo after publication (April 2005), with the odds ratio increasing to 2.9 (1.5-5.5) 10-16 mo after publication (October 2005).
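[Editorial aside: the relative-risk figures in the abstract follow directly from the reported counts. A minimal sketch, not the authors' code; the function and the log-method confidence interval are illustrative only:]

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk that an event occurs in group 1 (a of n1)
    versus group 2 (b of n2), with a 95% CI via the log method."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(rr)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# April 2005 counts from the abstract:
# 627 of 1,280 non-OA articles uncited vs 78 of 212 OA articles uncited
rr, lo, hi = relative_risk(627, 1280, 78, 212)
```

Rounded to one decimal this reproduces the reported RR of 1.3 with CI 1.1-1.6.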
Articles published as an immediate OA article on the journal site have higher impact than self-archived or otherwise openly accessible OA articles. We found strong evidence that, even in a journal that is widely available in research libraries, OA articles are more immediately recognized and cited by peers than non-OA articles published in the same journal. OA is likely to benefit science by accelerating dissemination and uptake of research findings. Addresses: Univ Hlth Network, Ctr Global eHealth Innovat, Toronto, ON, Canada; Univ Toronto, Dept Hlth Policy Management & Evaluat, Toronto, ON, Canada Reprint Address: Eysenbach, G, Univ Hlth Network, Ctr Global eHealth Innovat, Toronto, ON, Canada. E-mail Address: geysenba at uhnres.utoronto.ca Cited Reference Count: 17 Times Cited: 35 Publisher: PUBLIC LIBRARY SCIENCE Publisher Address: 185 BERRY ST, STE 1300, SAN FRANCISCO, CA 94107 USA ISSN: 1544-9173 DOI: 10.1371/journal.pbio.0040157 29-char Source Abbrev.: PLOS BIOL ISO Source Abbrev.: PLoS. Biol. Source Item Page Count: 7 Subject Category: Biochemistry & Molecular Biology; Biology ISI Document Delivery No.: 048SN ANDERSON K J ELECTR PUBL 6 : 2001 ANTELMAN K Do open-access articles have a greater research impact? COLLEGE & RESEARCH LIBRARIES 65 : 372 2004 ARONSON JK Commentary: Open access publishing: too much oxygen?
BRITISH MEDICAL JOURNAL 330 : 759 2005 COZZARELLI NR An open access option for PNAS PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 101 : 8509 2004 COZZARELLI NR PNAS at volume 100 PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 100 : 3005 DOI 10.1073/pnas.0730842100 2003 DAVIS PM UNPUB DOES ARXIV HEA : 2006 EYSENBACH G The impact of preprint servers and electronic publishing on biomedical research CURRENT OPINION IN IMMUNOLOGY 12 : 499 2000 EYSENBACH G J MED INTERNET RES 7 : E60 2005 EYSENBACH G J MED INTERNET RES 1 : E9 DOI 10.2196/JMIR.1.2.E9 1999 HARNAD S DLIB MAGAZINE : 10 2004 HARNAD S SERIALS REV : 2004 HUNTER K Critical issues in the development of STM journal publishing LEARNED PUBLISHING 18 : 51 2005 KURTZ MJ EFFECT USE ACCESS CI : 2005 LAWRENCE S Free online availability substantially increases a paper's impact NATURE 411 : 521 2001 LAWRENCE S Accessibility of information on the web NATURE 400 : 107 1999 MCVEIGH ME OPEN ACCESS J ISI CI : 2004 WREN JD Open access and openly accessible: a study of scientific publications shared via the internet BRITISH MEDICAL JOURNAL 330 : 1128 2005 From garfield at CODEX.CIS.UPENN.EDU Tue Oct 7 16:02:53 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Tue, 7 Oct 2008 16:02:53 -0400 Subject: Estelle Brodman: Medical Historian-Librarian Extraordinaire - A Series of Papers in Her Honor, Journal of the Medical Library Association 96 (3) Message-ID: Estelle Brodman was one of my earliest mentors. She was not only an outstanding medical librarian and bibliographer but also an accomplished medical historian. I first met her when I joined the Welch Medical Library Research Project in 1951 and remained her friend and admirer until her retirement. 
She was a member of that remarkable group of librarians and indexers at the NLM consisting of Samuel Lazerow, David Kronick, Seymour Taine, Robert Hayne, Thelma Charen, Winnie Sewell and Frank B. Rogers who taught and inspired me during the early days of my career. It gives me great pleasure to call attention to this series of papers in Estelle's honor. Eugene Garfield Author(s): Messerle, J (Messerle, Judith) Title: Celebrating individual heroes: the continuing relevance of Estelle Brodman Source: JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION, 96 (3): 181-182 JUL 2008 E-mail Address: jmesserle at frontiernet.net Language: English Document Type: Editorial Material Addresses: Harvard Univ, Lib Med, Boston, MA 02115 USA Reprint Address: Messerle, J, 12815 Quarry Trail, Carlinville, IL 62626 USA. E-mail Address: jmesserle at frontiernet.net Cited Reference Count: 1 Times Cited: 1 Publisher: MEDICAL LIBRARY ASSOC Publisher Address: 65 EAST WACKER PLACE, STE 1900, CHICAGO, IL 60601-7298 USA ISSN: 1536-5050 DOI: 10.3163/1536-5050.96.3.001 29-char Source Abbrev.: J MED LIBR ASSOC ISO Source Abbrev.: J. Med. Libr. Assoc. Source Item Page Count: 2 Subject Category: Information Science & Library Science Cited references ZINN N HIST ASS ORAL HIST P : 1981 Personal Recollections of the Contributions of Estelle Brodman: An Enduring Legacy for Health Sciences Librarianship Lucretia W. McClure E-mail Address: Lucretia_McClure at hms.harvard.edu J Med Libr Assoc. 2008 July; 96(3): 239-241. doi: 10.3163/1536-5050.96.3.012. Becoming an Internationalist: Reflections on the International Activities of Estelle Brodman J. Michael Homan E-mail Address: homan at mayo.edu J Med Libr Assoc. 2008 July; 96(3): 242-248. doi: 10.3163/1536-5050.96.3.013. Estelle Brodman: Educator Extraordinaire Nancy M. Lorenzi E-mail Address: nancy.lorenzi at vanderbilt.edu J Med Libr Assoc. 2008 July; 96(3): 249-254. doi: 10.3163/1536-5050.96.3.014.
A Student of History: Perspectives on the Contributions of Estelle Brodman Lucretia W. McClure E-mail Address: Lucretia_McClure at hms.harvard.edu J Med Libr Assoc. 2008 July; 96(3): 255-261. doi: 10.3163/1536-5050.96.3.015. Estelle Brodman and the First Generation of Library Automation Wayne J. Peay and Paul Schoening E-mail Addresses: wayne at lib.med.utah.edu, paul.schoening at wustl.edu J Med Libr Assoc. 2008 July; 96(3): 262-267. doi: 10.3163/1536-5050.96.3.016. From amsciforum at GMAIL.COM Wed Oct 8 09:41:32 2008 From: amsciforum at GMAIL.COM (Stevan Harnad) Date: Wed, 8 Oct 2008 09:41:32 -0400 Subject: Are Online and Free Online Access Broadening or Narrowing Research? Message-ID: The decline in the concentration of citations, 1900-2007 Vincent Lariviere, Yves Gingras, Eric Archambault http://arxiv.org/abs/0809.5250 (Deposited on 30 Sep 2008) This paper challenges recent research (Evans, 2008) reporting that the concentration of cited scientific literature increases with the online availability of articles and journals. Using Thomson Reuters' Web of Science, the present paper analyses changes in the concentration of citations received (two- and five-year citation windows) by papers published between 1900 and 2005. Three measures of concentration are used: the percentage of papers that received at least one citation (cited papers); the percentage of papers needed to account for 20, 50 and 80 percent of the citations; and the Herfindahl-Hirschman index. These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.
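[Editorial aside: for readers unfamiliar with the concentration measures named in the abstract, a minimal sketch of two of them, my own illustration rather than the authors' code, on an invented toy distribution:]

```python
def herfindahl_hirschman(citations):
    """HHI of a citation distribution: the sum of squared shares of
    all citations received by each paper. Ranges from 1/N (perfectly
    even) up to 1 (one paper receives every citation)."""
    total = sum(citations)
    return sum((c / total) ** 2 for c in citations)

def papers_for_share(citations, share):
    """Fraction of papers (most-cited first) needed to account for
    `share` of all citations, e.g. share=0.80 for the 80% measure."""
    ranked = sorted(citations, reverse=True)
    target = share * sum(ranked)
    running = 0.0
    for k, c in enumerate(ranked, start=1):
        running += c
        if running >= target:
            return k / len(ranked)
    return 1.0

# Skewed toy data: one paper dominates, two are barely cited
cites = [10, 5, 3, 1, 1]
hhi = herfindahl_hirschman(cites)        # higher = more concentrated
frac80 = papers_for_share(cites, 0.80)   # fraction of papers for 80% of cites
```

Falling HHI and a rising 80% fraction over time are exactly the "increasing dispersion" pattern the paper reports.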
From eugene.garfield at THOMSONREUTERS.COM Wed Oct 8 14:18:08 2008 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Wed, 8 Oct 2008 14:18:08 -0400 Subject: FW: Re: New ways of measuring research by Stevan Harnad Message-ID: Clearly a message of interest to the subscribers to Sig Metrics of ASIST. Gene Garfield -----Original Message----- From: American Scientist Open Access Forum [mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On Behalf Of Stevan Harnad Sent: Wednesday, October 08, 2008 11:03 AM To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG Subject: Re: New ways of measuring research On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill wrote: > the primary reason that I believe bibliometrics, innovation > indices, patent analysis and econometric modeling are flawed is that > they rely upon the counting of things (paper, money, people, etc.) > without understanding the underlying motivations of the actors within > the scientific ecosystem. There are two ways to evaluate: Subjectively (expert judgement, peer review, opinion polls) or Objectively: counting things The same is true of motives: you can assess them subjectively or objectively. If objectively, you have to count things. That's metrics. Philosophers say "Show me someone who wishes to discard metaphysics, and I'll show you a metaphysician with a rival (metaphysical) system." The metric equivalent is "Show me someone who wishes to discard metrics (counting things), and I'll show you a metrician with a rival (metric) system." Objective metrics, however, must be *validated*, and that usually begins by initializing their weights based on their correlation with existing (already validated, or face-valid) metrics and/or peer review (expert judgment). Note also that there are a-priori evaluations (research funding proposals, research findings submitted for publication) and a-posteriori evaluations (research performance assessment).
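[Editorial aside: the validation step Harnad describes, initializing a metric's weight from its correlation with expert judgment, can be sketched as follows. A toy illustration with invented numbers; the metric and scores are hypothetical:]

```python
import math

def pearson(x, y):
    """Pearson correlation between a candidate metric and
    expert-judgment scores for the same set of items."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: a candidate metric (download counts) against
# peer-review ratings for the same five papers
downloads = [120, 340, 80, 560, 210]
peer_score = [2, 3, 1, 4, 2]
r = pearson(downloads, peer_score)
# a high r would support giving this metric a substantial initial weight
```

In practice one would do this field by field and jointly across a battery of metrics (e.g. via multiple regression), as the message goes on to argue.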
> what... motivates scientists to collaborate? You can ask them (subjective), or you can count things (co-authorships, co-citations, etc.) to infer what factors underlie collaboration (objective). > Second, what science policy makers want is a set of decision support > tools that supplement the existing gold standard (expert judgment) and > provide options for the future. New metrics need to be validated against existing, already validated (or face-valid) metrics which in turn have to be validated against the "gold standard" (expert judgment). Once shown to be reliable and valid, metrics can then predict on their own, especially jointly, with suitable weights: The UK RAE 2008 offers an ideal opportunity to validate a wide spectrum of old and new metrics, jointly, field by field, against expert judgment: Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. http://eprints.ecs.soton.ac.uk/13804/ Sample of candidate OA-era metrics: Citations (C) CiteRank Co-citations Downloads (D) C/D Correlations Hub/Authority index Chronometrics: Latency/Longevity Endogamy/Exogamy Book citation index Research funding Students Prizes h-index Co-authorships Number of articles Number of publishing years Semiometrics (latent semantic indexing, text overlap, etc.) > policy makers need to understand the benefits and effectiveness of their > investment decisions in R&D. Currently, policy makers rely on big > committee reviews, peer review, and their own best judgment to make > those decisions. The current set of tools available don't provide > policy makers with rigorous answers to the benefits/effectiveness > questions... and they are too difficult to use and/or > inexplicable to the normal policy maker.
The result is the laundry list > of "metrics" or "indicators" that are contained in the "Gathering Storm" > or any of the innovation indices that I have seen to date. The difference between unvalidated and validated metrics is the difference between night and day. The role of expert judgment will obviously remain primary in the case of a-priori evaluations (specific research proposals and submissions for publication) and a-posteriori evaluations (research performance evaluation, impact studies) > Finally, I don't think we know enough about the functioning of the > innovation system to begin making judgments about which > metrics/indicators are reliable enough to provide guidance to policy > makers. I believe that we must move to an ecosystem model of innovation > and that if you do that, then non-obvious indicators (relative > competitiveness/openness of the system, embedded infrastructure, etc.) > become much more important than the traditional metrics used by NSF, > OECD, EU and others. In addition, the decision support tools will > gravitate away from the static (econometric modeling, > patent/bibliometric citations) and toward the dynamic (systems modeling, > visual analytics). I'm not sure what all these measures are, but assuming they are countable metrics, they all need prior validation against validated or face-valid criteria, field by field, and preferably a large battery of candidate metrics, validated jointly, initializing the weights of each. OA will help provide us with a rich new spectrum of candidate metrics and an open means of monitoring, validating, and fine-tuning them. Stevan Harnad From eugene.garfield at THOMSONREUTERS.COM Wed Oct 8 14:57:00 2008 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Wed, 8 Oct 2008 14:57:00 -0400 Subject: FW: New ways of measuring research Message-ID: Clearly of potential interest to citation analysts.
EG -----Original Message----- From: American Scientist Open Access Forum [mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On Behalf Of Bill Hooker Sent: Wednesday, October 08, 2008 12:56 AM To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG Subject: Re: New ways of measuring research The following recent paper: Despina G. Contopoulos-Ioannidis, George A. Alexiou, Theodore C. Gouvias, and John P. A. Ioannidis, "Life Cycle of Translational Research for Medical Interventions," Science (5 September 2008) Vol. 321, no. 5894, 1298-1299. (link: http://www.sciencemag.org/cgi/content/summary/321/5894/1298) may be of interest to those following this thread. Alma mentions measurements of ROI, and the paper is an attempt to measure the time between an initial discovery and a highly-cited clinical trial showing effective intervention. One can imagine expanding this study to encompass clinical trials that did not have a positive outcome, or automating the process in order to cover a representative sample of *all* reported trials, so as to paint a better picture of actual ROI. In case the paper is of interest, Janet Stemwedel's blog post about it will likely be useful also: http://scienceblogs.com/ethicsandscience/2008/10/tracking_the_lag_between_promi.php Bill Hooker www.sennoma.net From garfield at CODEX.CIS.UPENN.EDU Thu Oct 9 11:47:45 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 9 Oct 2008 11:47:45 -0400 Subject: Woeginger, GJ (Woeginger, Gerhard J.) An axiomatic characterization of the Hirsch-index MATHEMATICAL SOCIAL SCIENCES, 56 (2): 224-232 SEP 2008 Message-ID: E-mail Address: gwoegi at win.tue.nl Author(s): Woeginger, GJ (Woeginger, Gerhard J.)
Title: An axiomatic characterization of the Hirsch-index Source: MATHEMATICAL SOCIAL SCIENCES, 56 (2): 224-232 SEP 2008 Language: English Document Type: Article Author Keywords: axioms; ranking; index; citations; scientific impact measures Keywords Plus: H-INDEX; SCIENTISTS; RANKING Abstract: The Hirsch-index is a well-known index for measuring and comparing the output of scientific researchers. The main contribution of this article is an axiomatic characterization of the Hirsch-index in terms of three natural axioms. Furthermore, two other scientific impact indices (called the w-index and the maximum-index) are defined and characterized in terms of similar axioms. (C) 2008 Elsevier B.V. All rights reserved. Addresses: TU Eindhoven, Dept Math & Comp Sci, NL-5600 MB Eindhoven, Netherlands Reprint Address: Woeginger, GJ, TU Eindhoven, Dept Math & Comp Sci, POB 513, NL-5600 MB Eindhoven, Netherlands. E-mail Address: gwoegi at win.tue.nl Cited Reference Count: 12 Times Cited: 0 Publisher: ELSEVIER SCIENCE BV Publisher Address: PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS ISSN: 0165-4896 DOI: 10.1016/j.mathsocsci.2008.03.001 29-char Source Abbrev.: MATH SOC SCI ISO Source Abbrev.: Math. Soc. Sci. Source Item Page Count: 9 Subject Category: Mathematics, Interdisciplinary Applications; Social Sciences, Mathematical Methods ISI Document Delivery No.: 342UK ARROW KJ J POLIT ECON 58 : 328 1950 ARROW KJ SOCIAL CHOICE INDIVI : 1951 BALL P Index aims for fair ranking of scientists NATURE 436 : 900 DOI 10.1038/436900a 2005 BORNMANN L What do we know about the h index? JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 1381 DOI 10.1002/asi.20609 2007 BORNMANN L Does the h-index for ranking of scientists really work? 
SCIENTOMETRICS 65 : 391 DOI 10.1007/s11192-005-0281-4 2005 CRONIN B Using the h-index to rank influential information scientists JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 57 : 1275 DOI 10.1002/asi.20354 2006 HIRSCH JE Does the h index have predictive power? PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 104 : 19193 DOI 10.1073/pnas.0707962104 2007 HIRSCH JE An index to quantify an individual's scientific research output PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 102 : 16569 DOI 10.1073/pnas.0507655102 2005 MAY KO ECONOMETRICA 20 : 680 1952 MOULIN H AXIOMS COOPERATIVE D : 1988 OPPENHEIM C Using the h-index to rank influential British researchers in information science and librarianship JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 297 DOI 10.1002/asi.20460 2007 VANRAAN AFJ Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups SCIENTOMETRICS 67 : 491 DOI 10.1556/Scient.67.2006.3.10 2006 From garfield at CODEX.CIS.UPENN.EDU Thu Oct 9 12:03:53 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Thu, 9 Oct 2008 12:03:53 -0400 Subject: Burrell, QL (Burrell, Quentin L.) Extending Lotkaian informetrics INFORMATION PROCESSING & MANAGEMENT, 44 (5): 1794-1807 SEP 2008 Message-ID: E-mail Address: q.burrell at ibs.ac.im Author(s): Burrell, QL (Burrell, Quentin L.) 
Title: Extending Lotkaian informetrics Source: INFORMATION PROCESSING & MANAGEMENT, 44 (5): 1794-1807 SEP 2008 Language: English Document Type: Article Author Keywords: Lotkaian informetrics; Pareto type II distribution; statistical estimation methods; concentration measures Keywords Plus: SCIENTOMETRIC MEASUREMENT; EGGHES CONSTRUCTION; LORENZ CURVES; GINI INDEX; LAW; DISTRIBUTIONS; AMBIGUITY Abstract: The continuous version of the Lotka distribution, more generally referred to outside of informetrics as the Pareto distribution, has long enjoyed a central position in the theoretical development of informetrics despite several reported drawbacks in modelling empirical data distributions, most particularly that the inverse power form seems mainly to be evident only in the upper tails. We give a number of published examples graphically illustrating this shortcoming. In seeking to overcome this, we here draw attention to an intuitively reasonable generalization of the Pareto distribution, namely the Pareto type II distribution, of which we consider two versions. We describe its basic properties and some statistical features together with concentration aspects and argue that, at least in qualitative terms, it is better able to describe many observed informetric phenomena over the full range of the distribution. Suggestions for further investigations, including truncated and time-dependent versions, are also given. (c) 2008 Elsevier Ltd. All rights reserved. Addresses: Nunnery, Isle Man Int Business Sch, Douglas IM2 1QB, Isle Of Man, England Reprint Address: Burrell, QL, Nunnery, Isle Man Int Business Sch, Old Castletown Rd, Douglas IM2 1QB, Isle Of Man, England. 
E-mail Address: q.burrell at ibs.ac.im Cited Reference Count: 60 Times Cited: 0 Publisher: PERGAMON-ELSEVIER SCIENCE LTD Publisher Address: THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND ISSN: 0306-4573 DOI: 10.1016/j.ipm.2008.03.002 29-char Source Abbrev.: INFORM PROCESS MANAGE ISO Source Abbrev.: Inf. Process. Manage. Source Item Page Count: 14 Subject Category: Computer Science, Information Systems; Information Science & Library Science ISI Document Delivery No.: 342UA ARNOLD BC ENCY STAT SCI : 568 1985 ARNOLD BC INT STUDIES EC 10 : 1977 ATKINSON AB MEASUREMENT OF INEQUALITY JOURNAL OF ECONOMIC THEORY 2 : 244 1970 BAIN LJ INTRO PROBABILITY MA : 1992 BOOKSTEIN A Implications of ambiguity for scientometric measurement JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 52 : 74 2001 BOOKSTEIN A Informetric distributions .3. Ambiguity and randomness JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 48 : 2 1997 BOOKSTEIN A INFORMETRIC DISTRIBUTIONS .1. UNIFIED OVERVIEW JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 41 : 368 1990 BOOKSTEIN A INFORMETRIC DISTRIBUTIONS .2. 
RESILIENCE TO AMBIGUITY JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 41 : 376 1990 BOYACK K P ISSI 2007 CSIC MAD 1 : 124 2007 BURRELL QL Symmetry and other transformation features of Lorenz/Leimkuhler representations of informetric data INFORMATION PROCESSING & MANAGEMENT 41 : 1317 DOI 10.1016/j.ipm.2005.03.016 2005 BURRELL QL THE GINI INDEX AND THE LEIMKUHLER CURVE FOR BIBLIOMETRIC PROCESSES INFORMATION PROCESSING & MANAGEMENT 28 : 19 1992 BURRELL QL Egghe's construction of Lorenz curves resolved JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 2157 DOI 10.1002/asi.20674 2007 BURRELL QL On Egghe's version of continuous concentration theory JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 57 : 1406 DOI 10.1002/asi.20402 2006 BURRELL QL "Ambiguity" and scientometric measurement: A dissenting view JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 52 : 1075 2001 BURRELL QL P ISSI 10 1 : 129 2005 BURRELL QL Measuring concentration within and co-concentration between informetric distributions: An empirical study SCIENTOMETRICS 68 : 441 DOI 10.1007/s11192-006-0122-0 2006 BURRELL QL Modelling citation age data: Simple graphical methods from reliability theory SCIENTOMETRICS 55 : 273 2002 BURRELL QL THE BRADFORD DISTRIBUTION AND THE GINI INDEX SCIENTOMETRICS 21 : 181 1991 COHEN AC PARAMETRIC ESTIMATIO : 1988 CORVELLA ME PRACTICAL GUIDE HEAV : 3 1998 COTHEY V P ISSI 2005 10 INT C 1 : 212 2005 DEGROOT MH PROBABILITY STAT : 1986 DORFMAN R FORMULA FOR THE GINI COEFFICIENT REVIEW OF ECONOMICS AND STATISTICS 61 : 146 1979 EGGHE L INTRO INFORMETRICS Q : 1990 EGGHE L Zipfian and Lotkaian continuous concentration theory JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 56 : 935 DOI 10.1002/asi.20186 2005 EGGHE L POWER LAWS INFORM PR : 2005 GASTWIRTH JL GENERAL DEFINITION OF LORENTZ CURVE ECONOMETRICA 39 : 1037 1971 GOLDBERG G PARETO LAW PYRAMID D : 1967 HARRIS CM PARETO 
DISTRIBUTION AS A QUEUE SERVICE DISCIPLINE OPERATIONS RESEARCH 16 : 307 1968 JOHNSON NL CONTINUOUS UNIVARIAT 1 : 1994 KLEIBER C STAT SIZE DISTRIBUTI : 2003 LAFOUGE T The source-item coverage of the exponential function JOURNAL OF INFORMETRICS 1 : 59 2007 LAMBERT PJ DISTRIBUTION REDISTR : 2001 LAVALETTE D U350 INSERM I CUR RE : 1996 LETA J P ISSI 2007 11 INT C 1 : 480 2007 LOMAX KS BUSINESS FAILURES - ANOTHER EXAMPLE OF THE ANALYSIS OF FAILURE DATA JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION 49 : 847 1954 LORENZ MO Methods of measuring the concentration of wealth. JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION 9 : 209 1905 LOTKA AJ J WASHINGTON ACADEMY 16 : 317 1926 MAGUIRE BA THE TIME INTERVALS BETWEEN INDUSTRIAL ACCIDENTS BIOMETRIKA 39 : 168 1952 NICHOLLS PT EMPIRICAL VALIDATION OF LOTKA LAW INFORMATION PROCESSING & MANAGEMENT 22 : 417 1986 NICHOLLS PT BIBLIOMETRIC MODELING PROCESSES AND THE EMPIRICAL VALIDITY OF LOTKA LAW JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 40 : 379 1989 NICHOLLS PT ESTIMATION OF ZIPF PARAMETERS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 38 : 443 1987 PAO ML LOTKA LAW - A TESTING PROCEDURE INFORMATION PROCESSING & MANAGEMENT 21 : 305 1985 PAO ML AN EMPIRICAL-EXAMINATION OF LOTTKA LAW JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 37 : 26 1986 PARETO V COURS EC POLITIQUE : 1897 PARETO V RIV POLITICA EC 87 : 647 1997 PARETO V RIV POLITICA EC 87 : 691 1997 POPESCU II LAVALETTES NONLINEAR : 2002 POPESCU II ROM REP PHYS 49 : 3 1997 PRIME C P 8 INT C SCI INF 2 : 529 2001 ROUSSEAU B CYBERMETRICS 4 : 2000 ROUSSEAU R On Egghe's construction of Lorenz curves JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 1551 DOI 10.1002/asi.20615 2007 ROUSSEAU R A TABLE FOR ESTIMATING THE EXPONENT IN LOTKA LAW JOURNAL OF DOCUMENTATION 49 : 409 1993 SINGH SK ASA P BUS EC STAT SE : 551 1975 SINGH SK FUNCTION FOR SIZE DISTRIBUTION OF INCOMES ECONOMETRICA 44 : 963 1976 VANRAAN AFJ Reference-based 
publication networks with episodic memories SCIENTOMETRICS 63 : 549 DOI 10.1007/s11192-005-0227-x 2005 VINCI F GIORNALE EC RIV STAT 61 : 365 1921 VOSS J P ISSI 2005 10 INT C 1 : 221 2005 YITZHAKI S RES EC INEQ 8 : 13 1998 ZIPF G HUMAN BEHAV PRINCIPL : 1949 From garfield at CODEX.CIS.UPENN.EDU Thu Oct 9 12:12:20 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Thu, 9 Oct 2008 12:12:20 -0400 Subject: Blumle, A; Antes, G; Schumacher, M; Just, H; von Elm, E Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others JOURNAL OF MEDICAL ETHICS, 34 (9): Art. No. e20 SEP 2008 Message-ID: E-mail Address: bluemle at cochrane.de Author(s): Blumle, A (Bluemle, A.); Antes, G (Antes, G.); Schumacher, M (Schumacher, M.); Just, H (Just, H.); von Elm, E (von Elm, E.) Title: Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others Source: JOURNAL OF MEDICAL ETHICS, 34 (9): Art. No. e20 SEP 2008 Language: English Document Type: Article Keywords Plus: INTERNATIONAL-COMMITTEE; TRIAL REGISTRATION; RESEARCH PROTOCOLS; JOURNAL EDITORS; BIAS; STATEMENT; COHORT Abstract: Background: Only data of published study results are available to the scientific community for further use such as informing future research and synthesis of available evidence. If study results are reported selectively, reporting bias and distortion of summarised estimates of effect or harm of treatments can occur. The publication and citation of results of clinical research conducted in Germany was studied. Methods: The protocols of clinical research projects submitted to the research ethics committee of the University of Freiburg (Germany) in 2000 were analysed. Published full articles in several databases were searched and investigators contacted. 
Data on study and publication characteristics were extracted from protocols and corresponding publications. Results: 299 study protocols were included. The most frequent study design was randomised controlled trial (141; 47%), followed by uncontrolled studies (61; 20%), laboratory studies (30; 10%) and non-randomised studies (29; 10%). 182 (61%) were multicentre studies including 97 (53%) international collaborations. 152 of 299 (51%) had commercial (co-)funding and 46 (15%) non-commercial funding. 109 of the 225 completed protocols corresponded to at least one full publication (total 210 articles); the publication rate was 48%. 168 of 210 identified publications (80%) were cited in articles indexed in the ISI Web of Science. The median was 11 citations per publication (range 0-1151). Conclusions: Results of German clinical research projects conducted are largely underreported. Barriers to successful publication need to be identified and appropriate measures taken. Close monitoring of projects until publication and adequate support provided to investigators may help remedy the prevailing underreporting of research. Addresses: Univ Med Ctr Freiburg, Dept Med Biometry & Stat, Inst Med Biometry & Med Informat, D-79104 Freiburg, Germany; Univ Med Ctr, Res Eth Comm, Freiburg, Germany; Univ Bern, Inst Social & Prevent Med, Bern, Switzerland Reprint Address: Blumle, A, Univ Med Ctr Freiburg, Dept Med Biometry & Stat, Inst Med Biometry & Med Informat, Stefan Meier Str 26, D-79104 Freiburg, Germany. E-mail Address: bluemle at cochrane.de Cited Reference Count: 25 Times Cited: 0 Publisher: B M J PUBLISHING GROUP Publisher Address: BRITISH MED ASSOC HOUSE, TAVISTOCK SQUARE, LONDON WC1H 9JR, ENGLAND ISSN: 0306-6800 DOI: 10.1136/jme.2008.024521 29-char Source Abbrev.: J MED ETHICS ISO Source Abbrev.: J. Med. 
Ethics Source Item Page Count: 6 Subject Category: Ethics; Medical Ethics; Social Issues; Social Sciences, Biomedical ISI Document Delivery No.: 343CE *COCHR COLL COCHR CENTR REG CONT : *WHO B WORLD HEALTH ORGAN 84 : 10 2006 *WORLD MED ASS DECL HELS ETH PRINC : BLUMLE A Handsearching for controlled clinical trials in health care journals published in Germany DEUTSCHE MEDIZINISCHE WOCHENSCHRIFT 133 : 230 DOI 10.1055/s-2008-1017501 2008 CHALMERS I UNDERREPORTING RESEARCH IS SCIENTIFIC MISCONDUCT JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 263 : 1405 1990 CHAN AW Research protocols - Waiving confidentiality for the greater good BRITISH MEDICAL JOURNAL 332 : 1086 2006 DEANGELIS C Clinical trial registration: A statement from the International Committee of Medical Journal editors ANNALS OF INTERNAL MEDICINE 141 : 477 2004 DEANGELIS CD Is this clinical trial fully registered? A statement from the international committee of medical journal editors CROATIAN MEDICAL JOURNAL 46 : 499 2005 DECULLIER E Fate of biomedical research protocols and publication bias in France: retrospective cohort study BRITISH MEDICAL JOURNAL 331 : 19 DOI 10.1136/bmj.38488.385995.8F 2005 DICKERSIN K How important is publication bias? 
A synthesis of available data AIDS EDUCATION AND PREVENTION 9 : 15 1997 DICKERSIN K Development of the Cochrane Collaboration's CENTRAL register of controlled clinical trials EVALUATION & THE HEALTH PROFESSIONS 25 : 38 2002 DICKERSIN K FACTORS INFLUENCING PUBLICATION OF RESEARCH RESULTS - FOLLOW-UP OF APPLICATIONS SUBMITTED TO 2 INSTITUTIONAL REVIEW BOARDS JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 267 : 374 1992 EASTERBROOK PJ PUBLICATION BIAS IN CLINICAL RESEARCH LANCET 337 : 867 1991 EGGER M Meta-analysis - Bias in location and selection of studies BRITISH MEDICAL JOURNAL 316 : 61 1998 EGGER M SYSTEMATIC REV HLTH : 43 2001 LAINE C Clinical trial registration: Looking back and moving ahead CROATIAN MEDICAL JOURNAL 48 : 289 2007 MENZEL S Evaluation of clinical trials following an approval from a research ethics committee DEUTSCHE MEDIZINISCHE WOCHENSCHRIFT 132 : 2313 DOI 10.1055/s-2007-991648 2007 PEARN J PUBLICATION - AN ETHICAL IMPERATIVE BRITISH MEDICAL JOURNAL 310 : 1313 1995 PICH J Role of a research ethics committee in follow-up and publication of results LANCET 361 : 1015 2003 ROSENTHAL R THE FILE DRAWER PROBLEM AND TOLERANCE FOR NULL RESULTS PSYCHOLOGICAL BULLETIN 86 : 638 1979 SCHERER RW COCHRANE DB SYST REV : UNSP MR000005 2007 STERN JM Publication bias: evidence of delayed publication in a cohort study of clinical research projects BRITISH MEDICAL JOURNAL 315 : 640 1997 VONELM E Publication and non-publication of clinical trials: longitudinal study of applications submitted to a research ethics committee SWISS MEDICAL WEEKLY 138 : 197 2008 WELTON AJ Is recruitment more difficult with a placebo arm in randomised controlled trials? 
A quasirandomised, interview based study BRITISH MEDICAL JOURNAL 318 : 1114 1999 WHITTINGTON CJ Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data LANCET 363 : 1341 2004 From Jessica.Shepherd at GUARDIAN.CO.UK Thu Oct 9 12:12:43 2008 From: Jessica.Shepherd at GUARDIAN.CO.UK (Jessica Shepherd) Date: Thu, 9 Oct 2008 17:12:43 +0100 Subject: Jessica Shepherd/Guardian/GNL is out of the office. Message-ID: I will be out of the office starting 09/10/2008 and will not return until 10/10/2008. I am out of the office today. If your message is urgent, please call me on 07957147308. Otherwise please contact Sharon Bainbridge on 020 72399943 or Stephanie Kerstein on 020 7239 9559. Many thanks. From dwojick at HUGHES.NET Thu Oct 9 14:46:30 2008 From: dwojick at HUGHES.NET (David E.
Wojick) Date: Thu, 9 Oct 2008 14:46:30 -0400 Subject: FW: Re: New ways of measuring research by Stevan Harnad In-Reply-To: <311174B69873F148881A743FCF1EE53703D4AA22@TSHUSPAPHIMBX02.ERF.THOMSON.COM> Message-ID: I can't speak for Valdez but I know him and his work and share some of his interests and concerns. The basic issue to me is one that we find throughout science. On the one hand we find lots of statistical analysis. But on the other we find the development of theoretical processes and mechanisms that explain the numbers. It is the discovery of these processes and mechanisms that the science of science presently lacks. Most of the people on this list are familiar with some of these issues. One, which Valdez alludes to, is the calculation of return on investment. We are pretty sure that science is valuable but how do we measure that value? We have many programs on which we have spent over $1 billion over the last 10 years. What has been the return to society? What is it likely to be in future? Has one program returned more than another? Why is this so hard to figure out? Another is the quality of research. Surely some research is better than others, some papers better than others, in several different ways. For that matter, what is the goal of science, or is there more than one? Which fields are achieving which goals, to what degree? Are some fields more productive than others? Are some speeding up, while others slow down? Economics has resolved many of these issues with models of rational behavior. Why can't the science of science do likewise? (It is okay if it can't as long as we know why it can't.) The point is that we know we are measuring something important but we don't know what it is. Most of the terms we use to talk about science lack an operational definition. In this sense the measurement of scientific activity is ahead of our understanding of this activity. We do not have a fundamental theory of the nature of science.
We are like geology before plate tectonics, or epidemiology before the germ theory of disease, measuring what we do not understand. David Wojick > > Clearly a message of interest to the subscribers to Sig Metrics of >ASIST. Gene Garfield > >-----Original Message----- >From: American Scientist Open Access Forum >[mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On >Behalf Of Stevan Harnad >Sent: Wednesday, October 08, 2008 11:03 AM >To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG >Subject: Re: New ways of measuring research > >On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill > wrote: > >> the primary reason that I believe bibliometrics, innovation >> indices, patent analysis and econometric modeling are flawed is that >> they rely upon the counting of things (paper, money, people, etc.) >> without understanding the underlying motivations of the actors within >> the scientific ecosystem. > >There are two ways to evaluate: > >Subjectively (expert judgement, peer review, opinion polls) >or >Objectively: counting things > >The same is true of motives: you can assess them subjectively or >objectively. If objectively, you have to count things. > >That's metrics. > >Philosophers say "Show me someone who wishes to discard metaphysics, >and I'll show you a metaphysician with a rival (metaphysical) system." > >The metric equivalent is "Show me someone who wishes to discard >metrics (counting things), and I'll show you a metrician with a rival >(metric) system." > >Objective metrics, however, must be *validated*, and that usually >begins by initializing their weights based on their correlation with >existing (already validated, or face-valid) metrics and/or peer review >(expert judgment). > >Note also that there are a-priori evaluations (research funding >proposals, research findings submitted for publication) and >a-posteriori evaluations (research performance assessment). > >> what ... motivates scientists to collaborate?
> >You can ask them (subjective), or you can count things >(co-authorships, co-citations, etc.) to infer what factors underlie >collaboration (objective). > >> Second, what science policy makers want is a set of decision support >> tools that supplement the existing gold standard (expert judgment) and >> provide options for the future. > >New metrics need to be validated against existing, already validated >(or face-valid) metrics which in turn have to be validated against the >"gold standard" (expert judgment). Once shown to be reliable and valid, >metrics can then predict on their own, especially jointly, with >suitable weights: > >The UK RAE 2008 offers an ideal opportunity to validate a wide >spectrum of old and new metrics, jointly, field by field, against >expert judgment: > >Harnad, S. (2007) Open Access Scientometrics and the UK Research >Assessment Exercise. In Proceedings of 11th Annual Meeting of the >International Society for Scientometrics and Informetrics 11(1), pp. >27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. >http://eprints.ecs.soton.ac.uk/13804/ > >Sample of candidate >OA-era metrics: > >Citations (C) >CiteRank >Co-citations >Downloads (D) >C/D Correlations >Hub/Authority index >Chronometrics: Latency/Longevity >Endogamy/Exogamy >Book citation index >Research funding >Students >Prizes >h-index >Co-authorships >Number of articles >Number of publishing years >Semiometrics (latent semantic indexing, text overlap, etc.) > >> policy makers need to understand the benefits and effectiveness of >their >> investment decisions in R&D. Currently, policy makers rely on big >> committee reviews, peer review, and their own best judgment to make >> those decisions. The current set of tools available don't provide >> policy makers with rigorous answers to the benefits/effectiveness >> questions... and they are too difficult to use and/or >> inexplicable to the normal policy maker.
The result is the laundry >list >> of "metrics" or "indicators" that are contained in the "Gathering >Storm" >> or any of the innovation indices that I have seen to date. > >The difference between unvalidated and validated metrics is the >difference between night and day. > >The role of expert judgment will obviously remain primary in the case >of a-priori evaluations (specific research proposals and submissions >for publication) and a-posteriori evaluations (research performance >evaluation, impact studies) > >> Finally, I don't think we know enough about the functioning of the >> innovation system to begin making judgments about which >> metrics/indicators are reliable enough to provide guidance to policy >> makers. I believe that we must move to an ecosystem model of >innovation >> and that if you do that, then non-obvious indicators (relative >> competitiveness/openness of the system, embedded infrastructure, etc.) >> become much more important than the traditional metrics used by NSF, >> OECD, EU and others. In addition, the decision support tools will >> gravitate away from the static (econometric modeling, >> patent/bibliometric citations) and toward the dynamic (systems >modeling, >> visual analytics). > >I'm not sure what all these measures are, but assuming they are >countable metrics, they all need prior validation against validated or >face-valid criteria, field by field, and preferably a large battery >of candidate metrics, validated jointly, initializing the weights of >each. > >OA will help provide us with a rich new spectrum of candidate metrics >and an open means of monitoring, validating, and fine-tuning them. > >Stevan Harnad -- "David E. Wojick, PhD" Senior Consultant for Innovation Office of Scientific and Technical Information US Department of Energy http://www.osti.gov/innovation/ 391 Flickertail Lane, Star Tannery, VA 22654 USA 540-858-3136 http://www.bydesign.com/powervision/resume.html provides my bio and past client list.
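Several items on Harnad's candidate list, and his point that new metrics are validated by correlating them with expert judgment, can be made concrete in a few lines. The sketch below is illustrative only: it computes the h-index (one metric from the list) over toy citation records and then checks the candidate metric's rank correlation (Spearman's rho) against hypothetical expert panel scores; the records and scores are invented, and a real exercise would use data such as RAE panel rankings.

```python
# Illustrative sketch (toy data, not RAE results): compute the h-index,
# then validate a candidate metric against expert judgment by rank correlation.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def rank(xs):
    """1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        for k in order[i : j + 1]:
            ranks[k] = (i + j) / 2 + 1  # mean rank of the tied run
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Toy validation: five researchers' citation records vs. expert panel scores.
records = [[10, 8, 5, 4, 3], [25, 2, 1], [6, 6, 6, 6], [1, 1], [40, 30, 20, 2]]
expert_scores = [7.0, 4.5, 6.0, 2.0, 9.0]   # hypothetical peer judgments
candidate = [h_index(r) for r in records]    # [4, 2, 4, 1, 3]
print(candidate)
print(round(spearman(candidate, expert_scores), 2))
```

A battery of candidate metrics would be validated jointly, e.g. by regressing the expert scores on all metrics at once to initialize their weights, rather than correlating each one separately.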
http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information structure and dynamics. From laylamichan at YAHOO.COM Thu Oct 9 15:02:37 2008 From: laylamichan at YAHOO.COM (Layla Michán) Date: Thu, 9 Oct 2008 12:02:37 -0700 Subject: ANALYSIS OF THE STATE OF SYSTEMATICS IN LATIN AMERICA Message-ID: Michán, L. et al. 2008. Análisis de la sistemática actual en Latinoamérica. Interciencia, 33(10): 754-761. Available at: http://www.interciencia.org/v33_10/index.html ANALYSIS OF THE STATE OF SYSTEMATICS IN LATIN AMERICA Layla Michán, Jane M. Russell, Antonio Sánchez Pereyra, Antonia Llorens Cruset and Carlos López Beltrán SUMMARY In order to have a regional vision of the development of systematics in Latin America during the last three decades, the results of a scientometric analysis based on 11185 documents on this theme published in 411 journals from 1976 to 2006 and obtained from the Periodica database are presented. The current state of the discipline in the region is described, a detailed analysis of the articles, countries, main lines of study, taxonomic groups, topics, format, type of document, content and language is carried out, and the information is contextualized. The specialized production on systematics produced and published in local journals was notable and remained stable after the '80s, mainly in Mexico, Brazil and Argentina. The contents have been published primarily in Spanish and mainly in the form of articles. They dealt mostly with descriptive taxonomy and were related to ecology, anatomy, histology and aquatic biology. The most represented groups were insects and angiosperms. A call is made for the urgent need to systematize the literature about Latin American taxa. KEY WORDS / Latin America / Bibliometry / Scientometrics / Recent history / Publications / Systematics / Taxonomy / Análisis de la sistemática actual en Latinoamérica Layla Michán, Jane M.
Russell, Antonio Sánchez Pereyra, Antonia Llorens Cruset y Carlos López Beltrán RESUMEN Para tener una visión regional del estado de desarrollo de la sistemática en América Latina durante las últimas tres décadas, se presentan los resultados del análisis cienciométrico de 11185 documentos publicados entre 1976 y 2006 en 411 revistas de la región, obtenidos de la base de datos Periódica. Se describe el estado actual de la disciplina en el área, se exponen análisis detallados sobre los artículos, países, principales líneas de estudios, grupos taxonómicos, temas, formato, tipo de documento, contenido, idioma y se contextualiza la información. La producción especializada sobre sistemática publicada en las revistas locales fue notable y se mantuvo estable a partir de los 80, centrándose principalmente en México, Brasil y Argentina. Los contenidos fueron publicados en español primariamente y en forma de artículos. Versaron especialmente sobre taxonomía descriptiva y se relacionaron con la ecología, anatomía, histología y biología acuática. Los grupos más representados fueron los insectos y las angiospermas. Se concluye haciendo una llamada a la necesidad urgente de sistematizar la literatura de sistemática sobre taxones latinoamericanos. PALABRAS CLAVE / América Latina / Bibliometría / Cienciometría / Historia reciente / Publicaciones / Sistemática / Taxonomía / Dra. Layla Michán Aguirre Facultad de Ciencias, UNAM. Departamento de Biología Evolutiva Av. Universidad 3000 Circuito Exterior S/N, C.P. 04510. Ciudad Universitaria, México, D. F. Edificio "A" Segundo Piso Teléfono 52 (55) 56 22 48 25 www.geocities.com/laylamichan http://laylamichanunam.blogspot.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: analisis sist LA.pdf Type: application/pdf Size: 273836 bytes Desc: not available URL: From eugene.garfield at THOMSONREUTERS.COM Thu Oct 9 15:29:00 2008 From: eugene.garfield at THOMSONREUTERS.COM (Eugene Garfield) Date: Thu, 9 Oct 2008 15:29:00 -0400 Subject: New ways of measuring the economic and other impacts of research In-Reply-To: Message-ID: Interest in methods of measuring the return on investment in research is not new. Ed Mansfield (now deceased) at Penn was active in this area, as were Zvi Griliches and others at Harvard. About six years ago we established an annual award at Research!America, which will again take place next week in Washington. You can find information at http://www.researchamerica.org/economicimpact_award and http://www.researchamerica.org/event_detail/id:32 The awards committee of R!A welcomes nominations of papers which contribute to demonstrating the economic impact of basic research, prevention, etc. David Wojick and others are correct to point out the difficulties in measuring these impacts, but that should not prevent us from seeking solutions. -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David E. Wojick Sent: Thursday, October 09, 2008 2:47 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad I can't speak for Valdez but I know him and his work and share some of his interests and concerns.
From rfmatheus at YAHOO.COM.BR Thu Oct 9 17:23:10 2008 From: rfmatheus at YAHOO.COM.BR (Renato Matheus) Date: Thu, 9 Oct 2008 14:23:10 -0700 Subject: Robert Zoellick challenge and global economic crisis Message-ID: Dear list members and social network researchers, I've just created a Facebook group to debate the challenge posed by Mr. Robert Zoellick, president of the World Bank. He proposed the creation of a new multilateral economic collaboration network to deal with problems that are beyond the present G7 structure. He said: "We need a Facebook for multilateral economic diplomacy." (Source: http://www.worldbank.org.pl/WBSITE/EXTERNAL/COUNTRIES/ECAEXT/POLANDEXTN/0,,contentMDK:21929863~pagePK:1497618~piPK:217854~theSitePK:304795,00.html?cid=3001). The main goal of this new Facebook group (http://www.facebook.com/group.php?gid=38002907602) is to debate how scientific research on social network analysis could support a "network for frequent interaction" that provides data, network analysis, networking sites and other applications to improve economic relationships in a global networked economy. The focus is on scientific information and the social network approach. I'd be glad to have researchers interested in social network economic analysis as members. Thank you, Renato Fabiano Matheus PhD Candidate in Social Network Analysis System Analyst at Central Bank of Brazil http://rfmatheus.com.br/content/category/1/15/104/ ----- Original Message ----- From: Eugene Garfield To: SIGMETRICS at LISTSERV.UTK.EDU Sent: Wednesday, 8 October 2008 15:57:00 Subject: [SIGMETRICS] FW: New ways of measuring research Clearly of potential interest to citation analysts.
EG -----Original Message----- From: American Scientist Open Access Forum [mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On Behalf Of Bill Hooker Sent: Wednesday, October 08, 2008 12:56 AM To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG Subject: Re: New ways of measuring research The following recent paper: Despina G. Contopoulos-Ioannidis, George A. Alexiou, Theodore C. Gouvias, and John P. A. Ioannidis, "Life Cycle of Translational Research for Medical Interventions," Science (5 September 2008) Vol. 321, no. 5894, 1298-1299. (link: http://www.sciencemag.org/cgi/content/summary/321/5894/1298) may be of interest to those following this thread. Alma mentions measurements of ROI, and the paper is an attempt to measure the time between an initial discovery and a highly-cited clinical trial showing effective intervention. One can imagine expanding this study to encompass clinical trials that did not have a positive outcome, or automating the process in order to cover a representative sample of *all* reported trials, so as to paint a better picture of actual ROI. In case the paper is of interest, Janet Stemwedel's blog post about it will likely be useful also: http://scienceblogs.com/ethicsandscience/2008/10/tracking_the_lag_between_promi.php Bill Hooker www.sennoma.net From loet at LEYDESDORFF.NET Fri Oct 10 03:36:53 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 10 Oct 2008 09:36:53 +0200 Subject: FW: Re: New ways of measuring research by Stevan Harnad In-Reply-To: Message-ID: Dear David, There is theorizing in science & technology studies, the (evolutionary) economics of innovation, etc., but it has drifted so far apart from the scientometrics enterprise that neither side can recognize the relevance of the other's work.
The Science-of-Science-Policy program of the NSF may aim at bringing the two sides together, but processes of codification and self-organization at the field level cannot be organized away; they tend to be reproduced. I go to 4S conferences, etc., and there is hostility against scientometrics; and I go to scientometrics conferences and the atmosphere is almost anti-theoretical. (There are good exceptions on both sides!) The same holds for the referee comments one receives when submitting on either side, which often include the advice not to try to bridge the gap but instead to submit on the other side. See also my paper (with Peter van den Besselaar): "Scientometrics and Communication Theory: Towards Theoretically Informed Indicators," Scientometrics 38(1) (1997) 155-174. Since then, new developments in the theoretical domain, such as Luhmann's sociology of communication, have provided opportunities to conceptualize processes like codification in terms which are both socially relevant and perhaps amenable to measurement. I have tried to work on this relation in: "Scientific Communication and Cognitive Codification: Social Systems Theory and the Sociology of Scientific Knowledge," European Journal of Social Theory 10(3), 375-388, 2007 (available from my website). The problem, in my opinion, is that we have indicators only for information processing, and not for the processing of meaning or, beyond that, of knowledge. The OECD's program of indicators for a knowledge-based economy has not been very successful (Godin, 2006), leading mainly to a rearrangement of existing indicators. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David E.
Wojick > Sent: Thursday, October 09, 2008 8:46 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring > research by Stevan Harnad > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > I can't speak for Valdez but I know him and his work and > share some of his interests and concerns. The basic issue to > me is one that we find throughout science. One the one hand > we find lots of statistical analysis. But on the other we > find the development of theoretical processes and mechanisms > that explain the numbers. It is the discovery of these > processes and mechanisms that the science of science presently lacks. > > Most of the people on this list are familiar with some of > these issues. One, which Valdez alludes to, is the > calculation of return on investment. We are pretty sure that > science is valuable but how do we measure that value? We have > many programs on which we have spent over $1 billion over the > last 10 years. What has been the return to society? What is > it likely to be in future? Has one program returned more than > another? Why is this so hard to figure out? > > Another is the quality of research. Surely some research is > better than others, some papers better than others, in > several different ways. For that matter, what is the goal of > science, or are there more than one? Which fields are > achieving which goals, to what degree? Are some fields more > productive than others? Are some speeding up, while others > slow down? Economics has resolved many of these issues with > models of rational behavior. Why can't the science of science > do likewise? (It is okay if it can't as long as we know why it can't.) > > The point is that we know we are measuring something > important but we don't know what it is. Most of the terms we > use to talk about science lack an operational definition. 
In > this sense the measurement of scientific activity is ahead of > our understanding of this activity. We do not have a > fundamental theory of the nature of science. We are like > geology before plate tectonics, or epidemiology before the > germ theory of disease, measuring what we do not understand. > > David Wojick > > > > > Clearly a message of interest to the subscribers to Sig Metrics of > >ASIST. Gene Garfield > > > >-----Original Message----- > >From: American Scientist Open Access Forum > >[mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG] On > >Behalf Of Stevan Harnad > >Sent: Wednesday, October 08, 2008 11:03 AM > >To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG > >Subject: Re: New ways of measuring research > > > >On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill > > wrote: > > > >> the primary reason that I believe bibliometrics, innovation > >> indices, patent analysis and econometric modeling are > flawed is that > >> they rely upon the counting of things (paper, money, people, etc.) > >> without understanding the underlying motivations of the > actors within > >> the scientific ecosystem. > > > >There are two ways to evaluate: > > > >Subjectively (expert judgment, peer review, opinion polls) > >or > >Objectively: counting things > > > >The same is true of motives: you can assess them subjectively or > >objectively. If objectively, you have to count things. > > > >That's metrics. > > > >Philosophers say "Show me someone who wishes to discard metaphysics, > >and I'll show you a metaphysician with a rival > (metaphysical) system." 
> > > >Objective metrics, however, must be *validated*, and that usually > >begins by initializing their weights based on their correlation with > >existing (already validated, or face-valid) metrics and/or > peer review > >(expert judgment). > > > >Note also that there are a-priori evaluations (research funding > >proposals, research findings submitted for publication) and > >a-posteriori evaluations (research performance assessment). > > > >> what... motivates scientists to collaborate? > > > >You can ask them (subjective), or you can count things > >(co-authorships, co-citations, etc.) to infer what factors underlie > >collaboration (objective). > > > >> Second, what science policy makers want is a set of > decision support > >> tools that supplement the existing gold standard (expert > judgment) and > >> provide options for the future. > > > >New metrics need to be validated against existing, already validated > >(or face-valid) metrics which in turn have to be validated > against the > >"gold standard" (expert judgment). Once shown to be reliable > and valid, > >metrics can then predict on their own, especially jointly, with > >suitable weights: > > > >The UK RAE 2008 offers an ideal opportunity to validate a wide > >spectrum of old and new metrics, jointly, field by field, against > >expert judgment: > > > >Harnad, S. (2007) Open Access Scientometrics and the UK Research > >Assessment Exercise. In Proceedings of 11th Annual Meeting of the > >International Society for Scientometrics and Informetrics 11(1), pp. > >27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. 
> >http://eprints.ecs.soton.ac.uk/13804/ > > > >Sample of candidate > >OA-era metrics: > > > >Citations (C) > >CiteRank > >Co-citations > >Downloads (D) > >C/D Correlations > >Hub/Authority index > >Chronometrics: Latency/Longevity > >Endogamy/Exogamy > >Book citation index > >Research funding > >Students > >Prizes > >h-index > >Co-authorships > >Number of articles > >Number of publishing years > >Semiometrics (latent semantic indexing, text overlap, etc.) > > > >> policy makers need to understand the benefits and effectiveness of > >their > >> investment decisions in R&D. Currently, policy makers rely on big > >> committee reviews, peer review, and their own best judgment to make > >> those decisions. The current set of tools available doesn't provide > >> policy makers with rigorous answers to the benefits/effectiveness > >> questions... and they are too difficult to use and/or > >> inexplicable to the normal policy maker. The result is the laundry > >list > >> of "metrics" or "indicators" that are contained in the "Gathering > >Storm" > >> or any of the innovation indices that I have seen to date. > > > >The difference between unvalidated and validated metrics is the > >difference between night and day. > > > >The role of expert judgment will obviously remain primary in the case > >of a-priori evaluations (specific research proposals and submissions > >for publication) and a-posteriori evaluations (research performance > >evaluation, impact studies). > > > >> Finally, I don't think we know enough about the functioning of the > >> innovation system to begin making judgments about which > >> metrics/indicators are reliable enough to provide guidance > to policy > >> makers. I believe that we must move to an ecosystem model of > >innovation > >> and that if you do that, then non-obvious indicators (relative > >> competitiveness/openness of the system, embedded > infrastructure, etc.) 
> >> become much more important than the traditional metrics > used by NSF, > >> OECD, EU and others. In addition, the decision support tools will > >> gravitate away from the static (econometric modeling, > >> patent/bibliometric citations) and toward the dynamic (systems > >modeling, > >> visual analytics). > > > >I'm not sure what all these measures are, but assuming they are > >countable metrics, they all need prior validation against validated or > >face-valid criteria, field by field, and preferably a large battery > >of candidate metrics, validated jointly, initializing the weights of > >each. > > > >OA will help provide us with a rich new spectrum of candidate metrics > >and an open means of monitoring, validating, and fine-tuning them. > > > >Stevan Harnad > > -- > > "David E. Wojick, PhD" > Senior Consultant for Innovation > Office of Scientific and Technical Information > US Department of Energy > http://www.osti.gov/innovation/ > 391 Flickertail Lane, Star Tannery, VA 22654 USA > 540-858-3136 > > http://www.bydesign.com/powervision/resume.html provides my > bio and past client list. > http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information > structure and dynamics. > From amsciforum at GMAIL.COM Fri Oct 10 05:41:02 2008 From: amsciforum at GMAIL.COM (Stevan Harnad) Date: Fri, 10 Oct 2008 05:41:02 -0400 Subject: Open Access and the Skewness of Science: It Can't Be Cream All the Way Down Message-ID: Open Access and the Skewness of Science: It Can't Be Cream All the Way Down http://openaccess.eprints.org/index.php?/archives/474-guid.html Young NS, Ioannidis JPA, Al-Ubaydli O (2008) Why Current Publication Practices May Distort Science. PLoS Medicine Vol. 5, No. 
10, e201 doi:10.1371/journal.pmed.0050201 http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pmed.0050201 SUMMARY: "The current system of publication in biomedical research provides a distorted view of the reality of scientific data that are generated in the laboratory and clinic. This system can be studied by applying principles from the field of economics. The "winner's curse," a more general statement of publication bias, suggests that the small proportion of results chosen for publication are unrepresentative of scientists' repeated samplings of the real world. The self-correcting mechanism in science is retarded by the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication (journals with sufficiently high impact). This system would be expected intrinsically to lead to the misallocation of resources. The scarcity of available outlets is artificial, based on the costs of printing in an electronic age and a belief that selectivity is equivalent to quality. Science is subject to great uncertainty: we cannot be confident now which efforts will ultimately yield worthwhile achievements. However, the current system abdicates to a small number of intermediates an authoritative prescience to anticipate a highly unpredictable future. In considering society's expectations and our own goals as scientists, we believe that there is a moral imperative to reconsider how scientific data are judged and disseminated." There are reasons to be skeptical about the conclusions of this PLoS article. It says that science is compromised because there are too few "high impact" journals to publish in. The truth is that just about everything gets published somewhere among the planet's 25,000 peer-reviewed journals, just not all in the top journals, which are, by definition, reserved for the top articles -- and not all articles can be top articles. 
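The concentration Harnad appeals to here, Seglen's "skewness of science," is easy to see in a toy simulation. The sketch below uses synthetic, hypothetical citation counts (the lognormal shape and its parameters are illustrative assumptions, not data from any of the papers discussed) and measures how much of the total citation count the top fifth of articles collects:

```python
import random

# Synthetic, hypothetical citation counts for 10,000 articles.
# ASSUMPTION: a lognormal shape (mu=1.0, sigma=1.4), chosen only to
# mimic the well-known long tail of citation distributions.
random.seed(42)
citations = sorted(
    (int(random.lognormvariate(1.0, 1.4)) for _ in range(10_000)),
    reverse=True,
)

total = sum(citations)
top_20_percent = citations[: len(citations) // 5]  # top fifth of articles
share = sum(top_20_percent) / total

print(f"top 20% of articles collect {share:.0%} of all citations")
```

With a heavy tail like this, the top fifth of articles typically collects well over half of all citations: the same qualitative pattern as the 80/20 figure discussed in this thread, even though every article "gets published somewhere."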
The triage (peer review) is not perfect, so sometimes an article will appear lower (or higher) in the journal quality hierarchy than it ought to. But now that funders and universities are mandating Open Access, all research, top, middle and low, will be accessible to everyone. This will correct any access inequities, and it will also help remedy quality misassignment (inasmuch as lower-quality journals may have fewer subscribers, and users may be less likely to consult lower-quality journals). But it will not change the fact that 80% of citations (and presumably usage) go to the top 20% of articles, though it may flatten this "skewness of science" (Seglen) somewhat. Stevan Harnad American Scientist Open Access Forum From jni at DB.DK Fri Oct 10 07:22:04 2008 From: jni at DB.DK (Jeppe Nicolaisen) Date: Fri, 10 Oct 2008 13:22:04 +0200 Subject: SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad In-Reply-To: A Message-ID: Dear Loet, You wrote: "I go to scientometrics conferences and the atmosphere is almost anti-theoretical. (There are good exceptions [...]!)" I completely agree with you. Although there are good exceptions to this tendency, it is nevertheless common to find this anti-theoretical stance even in our most esteemed journals and books. The best example is probably from the special issue of Scientometrics on theories of citing: "I think the current state of our field calls for more empirical and practical work, and all this theorising should wait till a very large body - beyond a threshold - of empirical knowledge is built" (Arunachalam, 1998, p. 142). One has to respect Arunachalam for being so honest about it. Most of his "fellow anti-theorists" are not that outspoken. But it is my impression that the vast majority of the colleagues in our field tend to agree with Arunachalam on this. They want to get on with the counting and measuring business and leave all this "theorizing" for later (or for the theorists). 
I know of course that you instantly recognized the theoretical and philosophical problems with the quote above. Others are referred to my article in ARIST, 41 (2007). However, the point of my letter is not just to point fingers :-), but to start a discussion about what to do about it. The hostility against Scientometrics from other fields that you write about is definitely not something we want. A stronger theoretically anchored Scientometrics is definitely worth working for. But how? Sometimes I can't help wondering whether we have the right people working for us as editors, conference chairs, program chairs, etc. For instance, will the upcoming ISSI conference help to strengthen the theoretical foundation of our field? According to the objectives of the conference "The ISSI 2009 Conference will provide an international open forum for scientists, research managers and authorities, information and communication related professionals to debate the current status and advancements of Scientometrics theory and applications ...". This leaves some hope, of course. But then again, merely providing such a forum for Scientometrics theory doesn't necessarily take us anywhere. It depends, of course, on the people attending and their willingness to engage in theoretical discussions. The editors, conference chairs, program chairs, etc. of our field play an important role as gatekeepers. They are the ones who can set the agenda and turn the field (gradually) toward a more theoretical orientation. It's a huge responsibility. Are they able to live up to it? Have a nice weekend! Best wishes, Jeppe Nicolaisen Associate Professor, PhD Royal School of Library and Information Science Copenhagen, Denmark -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On behalf of Loet Leydesdorff Sent: 10. 
October 2008 09:37 To: SIGMETRICS at LISTSERV.UTK.EDU Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad Dear David, There is theorizing in science & technology studies, the (evolutionary) economics of innovation, etc., but it has drifted so far apart from the scientometrics enterprise that one cannot recognize the relevance of each other's work on both sides. [...] 
With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David E. Wojick > Sent: Thursday, October 09, 2008 8:46 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad > [...] From loet at LEYDESDORFF.NET Fri Oct 10 08:11:31 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 10 Oct 2008 14:11:31 +0200 Subject: SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad In-Reply-To: <73573C2DCB0154408D790B1E7EDB0C52E1931F@ka-exch01.db.dk> Message-ID: Dear Jeppe, I did not mean this personally to anybody, and I doubt that more organization of interfacing (by ISSI or others) really helps, because the problems are structural and encoded in our (quantitative and qualitative) literatures. The atmosphere on the qualitative side is often more problematic than in our community, insofar as I can see. I thought that we should just be reflexive about these problems. 
Best, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jeppe Nicolaisen > Sent: Friday, October 10, 2008 1:22 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad > [...] 
Once shown to be reliable > > and valid, > > >metrics can then predict on their own, especially jointly, with > > >suitable weights: > > > > > >The UK RAE 2008 offers an ideal opportunity to validate a wide > > >spectrum of old and new metrics, jointly, field by field, against > > >expert judgment: > > > > > >Harnad, S. (2007) Open Access Scientometrics and the UK Research > > >Assessment Exercise. In Proceedings of 11th Annual Meeting of the > > >International Society for Scientometrics and Informetrics > 11(1), pp. > > >27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. > > >http://eprints.ecs.soton.ac.uk/13804/ > > > > > >Sample of candidate > > >OA-era metrics: > > > > > >Citations (C) > > >CiteRank > > >Co-citations > > >Downloads (D) > > >C/D Correlations > > >Hub/Authority index > > >Chronometrics: Latency/Longevity > > >Endogamy/Exogamy > > >Book citation index > > >Research funding > > >Students > > >Prizes > > >h-index > > >Co-authorships > > >Number of articles > > >Number of publishing years > > >Semiometrics (latent semantic indexing, text overlap, etc.) > > > > > >> policy makers need to understand the benefits and > effectiveness of > > >their > > >> investment decisions in R&D. Currently, policy makers > rely on big > > >> committee reviews, peer review, and their own best > judgment to make > > >> those decisions. The current set of tools available > don't provide > > >> policy makers with rigorous answers to the benefits/effectiveness > > >> questions... and they are too difficult to use and/or > > >> inexplicable to the normal policy maker. The result is > the laundry > > >list > > >> of "metrics" or "indicators" that are contained in the "Gathering > > >Storm" > > >> or any of the innovation indices that I have seen to date. > > > > > >The difference between unvalidated and validated metrics is the > > >difference between night and day. 
> > > > > >The role of expert judgment will obviously remain primary > in the case > > >of a-priori evaluations (specific research proposals and > submissions > > >for publication) and a-posteriori evaluations (research performance > > >evaluation, impact studies) > > > > > >> Finally, I don't think we know enough about the > functioning of the > > >> innovation system to begin making judgments about which > > >> metrics/indicators are reliable enough to provide guidance > > to policy > > >> makers. I believe that we must move to an ecosystem model of > > >innovation > > >> and that if you do that, then non-obvious indicators (relative > > >> competitiveness/openness of the system, embedded > > infrastructure, etc.) > > >> become much more important than the traditional metrics > > used by NSF, > > >> OECD, EU and others. In addition, the decision support > tools will > > >> gravitate away from the static (econometric modeling, > > >> patent/bibliometric citations) and toward the dynamic (systems > > >modeling, > > >> visual analytics). > > > > > >I'm not sure what all these measures are, but assuming they are > > >countale metrics, they all need prior validation against > validated or > > >face-valid criteria, fields by field, and preferably a > large battery > > >of candidate metrics, validated jointly, initializing the > weights of > > >each. > > > > > >OA will help provide us with a rich new spectrum of > candidate metrics > > >and an open means of monitoring, validating, and fine-tuning them. > > > > > >Stevan Harnad > > > > -- > > > > "David E. Wojick, PhD" > > Senior Consultant for Innovation > > Office of Scientific and Technical Information > > US Department of Energy > > http://www.osti.gov/innovation/ > > 391 Flickertail Lane, Star Tannery, VA 22654 USA > > 540-858-3136 > > > > http://www.bydesign.com/powervision/resume.html provides my > > bio and past client list. 
From jni at DB.DK Fri Oct 10 11:39:04 2008 From: jni at DB.DK (Jeppe Nicolaisen) Date: Fri, 10 Oct 2008 17:39:04 +0200 Subject: SV: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad Message-ID: Dear Loet, Neither did I mean this personally to anybody. The anti-theoretical stance is widespread in our field, and I believe the example I gave is just one instance of it. I think you are right that the atmosphere on the qualitative side is often more problematic than in our community. That is, if by "problematic" you mean something like less polite or even less friendly. Perhaps that is actually a sign of good health! Perhaps it is because they discuss their theories, or absolute presuppositions, much more than we do. As R.G. Collingwood writes in his classic "An Essay on Metaphysics" (Oxford, UK: The Clarendon Press, 1940), "people are apt to be ticklish in their absolute presuppositions": "If you were talking to a pathologist about a certain disease and asked him 'What is the cause of the event E which you say sometimes happens in this disease?' he will reply 'The cause of E is C'; and if he were in a communicative mood he might go on to say 'That was established by So-and-so, in a piece of research that is now regarded as classical.' You might go on to ask: 'I suppose before So-and-so found out what the cause of E was, he was quite sure it had a cause?' The answer would be 'Quite sure, of course.' If you now say 'Why?' he will probably answer 'Because everything that happens has a cause.' If you are importunate enough to ask 'But how do you know that everything that happens has a cause?' 
he will probably blow right up in your face, because you have put your finger on one of his absolute presuppositions, and people are apt to be ticklish in their absolute presuppositions. But if he keeps his temper and gives you a civil and candid answer, it will be to the following effect. 'That is a thing we take for granted in my job. We don't question it. We don't try to verify it. It isn't a thing anybody has discovered, like microbes or the circulation of the blood. It is a thing we just take for granted'" (Collingwood, 1940, p. 31). Best wishes, Jeppe ________________________________ From: ASIS&T Special Interest Group on Metrics on behalf of Loet Leydesdorff Sent: Fri 10-10-2008 14:11 To: SIGMETRICS at LISTSERV.UTK.EDU Subject: Re: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of measuring research by Stevan Harnad Dear Jeppe, I did not mean this personally to anybody, and I doubt that more organization of interfacing (by ISSI or others) would really help, because the problems are structural and encoded in our (quantitative and qualitative) literatures. The atmosphere on the qualitative side is often more problematic than in our community, insofar as I can see. I thought that we should just be reflexive about these problems. Best, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. 
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jeppe Nicolaisen > Sent: Friday, October 10, 2008 1:22 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] SV: [SIGMETRICS] FW: Re: New ways of > measuring research by Stevan Harnad > > Administrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Loet, > > You wrote: "I go to scientometrics conferences and the > atmosphere is almost anti-theoretical. (There are good > exceptions [...]!)" > > I completely agree with you. Although there are good > exceptions to this tendency, it is nevertheless common to > find this anti-theoretical stance even in our most esteemed > journals and books. The best example is probably from the > special issue of Scientometrics on theories of citing: > > "I think the current state of our field calls for more > empirical and practical work, and all this theorising should > wait till a very large body - beyond a threshold - of > empirical knowledge is built" (Arunachalam, 1998, p. 142). > > One has to respect Arunachalam for being so honest about it. > Most of his "fellow anti-theorists" are not that outspoken. > But it is my impression that a vast majority of the > colleagues in our field tend to agree with Arunachalam on > this. They want to get on with the counting and measuring > business and leave all this "theorizing" for later (or for > the theorists). > > I know of course that you instantly recognized the > theoretical and philosophical problems with the quote above. > Others are referred to my article in ARIST, 41 (2007). > However, the point of my letter is not just to point fingers > :-) , but to start a discussion about what to do about it. 
> The hostility against Scientometrics from other fields that > you write about is definitely not something we want. A > stronger theoretically anchored Scientometrics is definitely > worth working for. But how? Sometimes I can't help wondering > whether we have the right people working for us as editors, > conference chairs, program chairs, etc. For instance, will > the upcoming ISSI conference help to strengthen the > theoretical foundation of our field? According to the > objectives of the conference, "The ISSI 2009 Conference will > provide an international open forum for scientists, research > managers and authorities, information and communication > related professionals to debate the current status and > advancements of Scientometrics theory and applications ...". This > leaves some hope, of course. But then again, just because > such a forum for Scientometrics theory is provided doesn't > necessarily mean it will take us anywhere. It depends, of course, on the > people attending and their willingness to engage in > theoretical discussions. The editors, conference chairs, > program chairs, etc. of our field play an important role as > gatekeepers. They are the ones who can set the agenda and > turn the field (gradually) toward a more theoretical > orientation. It's a huge responsibility. Are they able to > live up to it? > > Have a nice weekend! > > Best wishes, > > Jeppe Nicolaisen > Associate Professor, PhD > Royal School of Library and Information Science > Copenhagen, Denmark > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff > Sent: 10 October 2008 09:37 > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring research > by Stevan Harnad > > Administrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear David, > > There is theorizing in science & technology studies, the > (evolutionary) > economics of innovation, etc., but it has drifted so far > apart from the > scientometrics enterprise that neither side can recognize the > relevance of the > other's work. The Science-of-Science-Policy > program of the NSF > may aim at bringing the two sides together, but processes of > codification > and self-organization at the field level cannot be organized > away; they tend > to be reproduced. > > I go to 4S conferences, etc., and there is hostility against > scientometrics; > and I go to scientometrics conferences and the atmosphere is almost > anti-theoretical. (There are good exceptions on both sides!) > Similarly with > referee comments which one receives when submitting on either side, > including the advice not to try to bridge the gap but instead > to submit on > the other side. See also my paper (with Peter van den Besselaar): > "Scientometrics and Communication Theory: Towards > Theoretically Informed > Indicators," Scientometrics 38(1) (1997) 155-174. > > Since then, new developments in the theoretical domain, like Luhmann's > sociology of communication, have provided opportunities to > conceptualize processes > like codification in terms which are both socially relevant > and perhaps > amenable to measurement. I have tried to work on this relation in: > "Scientific Communication and Cognitive Codification: Social > Systems Theory > and the Sociology of Scientific Knowledge," European Journal of Social > Theory 10(3), 375-388, 2007 (available from my website). 
> > The problem, in my opinion, is that we have only indicators > for information > processing and not for the processing of meaning or, beyond that, > knowledge. The indicators of a knowledge-based economy > program of the OECD > has not been so successful (Godin, 2006), but led mainly to a > rearrangement > of existing indicators. > > With best wishes, > > > Loet > > ________________________________ > > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David E. Wojick > > Sent: Thursday, October 09, 2008 8:46 PM > > To: SIGMETRICS at LISTSERV.UTK.EDU > > Subject: Re: [SIGMETRICS] FW: Re: New ways of measuring > > research by Stevan Harnad > > > > Administrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > I can't speak for Valdez but I know him and his work and > > share some of his interests and concerns. The basic issue to > > me is one that we find throughout science. On the one hand > > we find lots of statistical analysis. But on the other we > > find the development of theoretical processes and mechanisms > > that explain the numbers. It is the discovery of these > > processes and mechanisms that the science of science > presently lacks. > > > > Most of the people on this list are familiar with some of > > these issues. One, which Valdez alludes to, is the > > calculation of return on investment. We are pretty sure that > > science is valuable but how do we measure that value? We have > > many programs on which we have spent over $1 billion over the > > last 10 years. What has been the return to society? What is > > it likely to be in future? 
Has one program returned more than > > another? Why is this so hard to figure out? > > > > Another is the quality of research. Surely some research is > > better than other research, some papers better than others, in > > several different ways. For that matter, what is the goal of > > science, or is there more than one? Which fields are > > achieving which goals, to what degree? Are some fields more > > productive than others? Are some speeding up, while others > > slow down? Economics has resolved many of these issues with > > models of rational behavior. Why can't the science of science > > do likewise? (It is okay if it can't, as long as we know why > it can't.) > > > > The point is that we know we are measuring something > > important but we don't know what it is. Most of the terms we > > use to talk about science lack an operational definition. In > > this sense the measurement of scientific activity is ahead of > > our understanding of this activity. We do not have a > > fundamental theory of the nature of science. We are like > > geology before plate tectonics, or epidemiology before the > > germ theory of disease, measuring what we do not understand. > > > > David Wojick > > > > > > > > Clearly a message of interest to the subscribers to Sig Metrics of > > >ASIST. Gene Garfield > > > > > >-----Original Message----- > > >From: American Scientist Open Access Forum > > >[mailto:AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMA > > XI.ORG] On > > >Behalf Of Stevan Harnad > > >Sent: Wednesday, October 08, 2008 11:03 AM > > >To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG > > >Subject: Re: New ways of measuring research > > > > > >On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill > > > wrote: > > > > > >> the primary reason that I believe bibliometrics, innovation > > >> indices, patent analysis and econometric modeling are > > flawed is that > > >> they rely upon the counting of things (paper, money, > people, etc.) 
> > >> without understanding the underlying motivations of the > > actors within > > >> the scientific ecosystem. > > > > > >There are two ways to evaluate: > > > > > >Subjectively (expert judgment, peer review, opinion polls) > > >or > > >Objectively: counting things > > > > > >The same is true of motives: you can assess them subjectively or > > >objectively. If objectively, you have to count things. > > > > > >That's metrics. > > > > > >Philosophers say "Show me someone who wishes to discard > metaphysics, > > >and I'll show you a metaphysician with a rival > > (metaphysical) system." > > > > > >The metric equivalent is "Show me someone who wishes to discard > > >metrics (counting things), and I'll show you a metrician > with a rival > > >(metric) system." > > > > > >Objective metrics, however, must be *validated*, and that usually > > >begins by initializing their weights based on their > correlation with > > >existing (already validated, or face-valid) metrics and/or > > peer review > > >(expert judgment). > > > > > >Note also that there are a-priori evaluations (research funding > > >proposals, research findings submitted for publication) and > > >a-posteriori evaluations (research performance assessment). > > > > > >> what ... motivates scientists to collaborate? > > > > > >You can ask them (subjective), or you can count things > > >(co-authorships, co-citations, etc.) to infer what factors underlie > > >collaboration (objective). > > > > > >> Second, what science policy makers want is a set of > > decision support > > >> tools that supplement the existing gold standard (expert > > judgment) and > > >> provide options for the future. > > > > > >New metrics need to be validated against existing, already > validated > > >(or face-valid) metrics, which in turn have to be validated > > against the > > >"gold standard" (expert judgment). 
Once shown to be reliable > > and valid, > > >metrics can then predict on their own, especially jointly, with > > >suitable weights: > > > > > >The UK RAE 2008 offers an ideal opportunity to validate a wide > > >spectrum of old and new metrics, jointly, field by field, against > > >expert judgment: > > > > > >Harnad, S. (2007) Open Access Scientometrics and the UK Research > > >Assessment Exercise. In Proceedings of 11th Annual Meeting of the > > >International Society for Scientometrics and Informetrics > 11(1), pp. > > >27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. > > >http://eprints.ecs.soton.ac.uk/13804/ > > > > > >Sample of candidate > > >OA-era metrics: > > > > > >Citations (C) > > >CiteRank > > >Co-citations > > >Downloads (D) > > >C/D Correlations > > >Hub/Authority index > > >Chronometrics: Latency/Longevity > > >Endogamy/Exogamy > > >Book citation index > > >Research funding > > >Students > > >Prizes > > >h-index > > >Co-authorships > > >Number of articles > > >Number of publishing years > > >Semiometrics (latent semantic indexing, text overlap, etc.) > > > > > >> policy makers need to understand the benefits and > effectiveness of > > >their > > >> investment decisions in R&D. Currently, policy makers > rely on big > > >> committee reviews, peer review, and their own best > judgment to make > > >> those decisions. The current set of tools available > don't provide > > >> policy makers with rigorous answers to the benefits/effectiveness > > >> questions... and they are too difficult to use and/or > > >> inexplicable to the normal policy maker. The result is > the laundry > > >list > > >> of "metrics" or "indicators" that are contained in the "Gathering > > >Storm" > > >> or any of the innovation indices that I have seen to date. > > > > > >The difference between unvalidated and validated metrics is the > > >difference between night and day. 
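The weight-initialization step Harnad describes (regressing a battery of candidate metrics jointly against expert judgment, then using the fitted weights to predict) can be sketched in a few lines. Everything below is synthetic, illustrative data, not RAE results, and the three metric names are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for illustration: 200 "units" (e.g. departments), three
# candidate metrics (say citations, downloads, h-index, standardized),
# and a face-valid criterion: expert panel scores.
n = 200
metrics = rng.normal(size=(n, 3))
true_weights = np.array([0.6, 0.3, 0.1])  # unknown in practice
expert_scores = metrics @ true_weights + rng.normal(scale=0.5, size=n)

# Initialize the metric weights from their joint correlation with expert
# judgment: an ordinary least-squares regression of the panel scores on
# the whole battery of metrics at once.
X = np.column_stack([np.ones(n), metrics])  # intercept + metrics
weights, *_ = np.linalg.lstsq(X, expert_scores, rcond=None)

# Once shown reliable and valid, the weighted combination can predict
# scores for units that no panel has reviewed.
predicted = X @ weights
```

The same design scales to a larger battery of metrics and can be run field by field, as the RAE proposal suggests; with strongly correlated metrics one would regularize or at least inspect the fitted weights before trusting them.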
> > > > > >The role of expert judgment will obviously remain primary > in the case > > >of a-priori evaluations (specific research proposals and > submissions > > >for publication) and a-posteriori evaluations (research performance > > >evaluation, impact studies). > > > > > >> Finally, I don't think we know enough about the > functioning of the > > >> innovation system to begin making judgments about which > > >> metrics/indicators are reliable enough to provide guidance > > to policy > > >> makers. I believe that we must move to an ecosystem model of > > >innovation > > >> and that if you do that, then non-obvious indicators (relative > > >> competitiveness/openness of the system, embedded > > infrastructure, etc.) > > >> become much more important than the traditional metrics > > used by NSF, > > >> OECD, EU and others. In addition, the decision support > tools will > > >> gravitate away from the static (econometric modeling, > > >> patent/bibliometric citations) and toward the dynamic (systems > > >modeling, > > >> visual analytics). > > > > > >I'm not sure what all these measures are, but assuming they are > > >countable metrics, they all need prior validation against > validated or > > >face-valid criteria, field by field, and preferably a > large battery > > >of candidate metrics, validated jointly, initializing the > weights of > > >each. > > > > > >OA will help provide us with a rich new spectrum of > candidate metrics > > >and an open means of monitoring, validating, and fine-tuning them. > > > > > >Stevan Harnad > > > > -- > > > > "David E. Wojick, PhD" > > Senior Consultant for Innovation > > Office of Scientific and Technical Information > > US Department of Energy > > http://www.osti.gov/innovation/ > > 391 Flickertail Lane, Star Tannery, VA 22654 USA > > 540-858-3136 > > > > http://www.bydesign.com/powervision/resume.html provides my > > bio and past client list. 
> > http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information structure and dynamics. From laylamichan at YAHOO.COM Fri Oct 10 12:38:04 2008 From: laylamichan at YAHOO.COM (Layla Michán) Date: Fri, 10 Oct 2008 09:38:04 -0700 Subject: ANALYSIS OF THE STATE OF SYSTEMATICS IN LATIN AMERICA Message-ID: Michán, L. et al. 2008. Análisis de la sistemática actual en Latinoamérica. Interciencia, 33(10): 754-761. Available at: http://www.interciencia.org/v33_10/index.html ANALYSIS OF THE STATE OF SYSTEMATICS IN LATIN AMERICA Layla Michán, Jane M. Russell, Antonio Sánchez Pereyra, Antonia Llorens Cruset and Carlos López Beltrán SUMMARY In order to provide a regional overview of the development of systematics in Latin America during the last three decades, the results of a scientometric analysis are presented, based on 11185 documents on this theme published in 411 journals from 1976 to 2006 and obtained from the Periódica database. The current state of the discipline in the region is described, a detailed analysis of the articles, countries, main lines of study, taxonomic groups, topics, format, type of document, content and language is carried out, and the information is contextualized. The specialized production on systematics produced and published in local journals was notable and remained stable after the 1980s, mainly in Mexico, Brazil and Argentina. The contents have been published primarily in Spanish and mainly in the form of articles. They dealt mostly with descriptive taxonomy and were related to ecology, anatomy, histology and aquatic biology. The most represented groups were insects and angiosperms. A call is made for the urgent need to systematize the literature about Latin American taxa. KEY WORDS / Latin America / Bibliometry / Scientometrics / Recent history / Publications / Systematics / Taxonomy / Análisis de la sistemática actual en Latinoamérica Layla Michán, Jane M. 
Russell, Antonio Sánchez Pereyra, Antonia Llorens Cruset y Carlos López Beltrán RESUMEN Para tener una visión regional del estado de desarrollo de la sistemática en América Latina durante las últimas tres décadas, se presentan los resultados del análisis cienciométrico de 11185 documentos publicados entre 1976 y 2006 en 411 revistas de la región, obtenidos de la base de datos Periódica. Se describe el estado actual de la disciplina en el área, se exponen análisis detallados sobre los artículos, países, principales líneas de estudios, grupos taxonómicos, temas, formato, tipo de documento, contenido, idioma y se contextualiza la información. La producción especializada sobre sistemática publicada en las revistas locales fue notable y se mantuvo estable a partir de los 80, centrándose principalmente en México, Brasil y Argentina. Los contenidos fueron publicados en español primariamente y en forma de artículos. Versaron especialmente sobre taxonomía descriptiva y se relacionaron con la ecología, anatomía, histología y biología acuática. Los grupos más representados fueron los insectos y las angiospermas. Se concluye haciendo una llamada a la necesidad urgente de sistematizar la literatura de sistemática sobre taxones latinoamericanos. PALABRAS CLAVE / América Latina / Bibliometría / Cienciometría / Historia reciente / Publicaciones / Sistemática / Taxonomía / Saludos cordiales, Dra. Layla Michán Aguirre Facultad de Ciencias, UNAM. Departamento de Biología Evolutiva Av. Universidad 3000 Circuito Exterior S/N, C.P. 04510. Ciudad Universitaria, México, D. F. Edificio "A" Segundo Piso Teléfono 52 (55) 56 22 48 25 www.geocities.com/laylamichan http://laylamichanunam.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: analisis sist LA.pdf Type: application/pdf Size: 273836 bytes Desc: not available URL: From loet at LEYDESDORFF.NET Sat Oct 11 07:46:32 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sat, 11 Oct 2008 13:46:32 +0200 Subject: Implicit media frames: Automated analysis of public debate on artificial sweeteners Message-ID: Implicit media frames: Automated analysis of public debate on artificial sweeteners Iina Hellsten, James Dawson, & Loet Leydesdorff The framing of issues in the mass media plays a crucial role in the public understanding of science and technology. This article contributes to research concerned with diachronic analysis of media frames by making an analytical distinction between implicit and explicit media frames, and by introducing an automated method for analysing diachronic changes of implicit frames. In particular, we apply a semantic maps method to a case study on the newspaper debate about artificial sweeteners, published in The New York Times (NYT) between 1980 and 2006. Our results show that the analysis of semantic changes enables us to filter out the dynamics of implicit frames, and to detect emerging metaphors in public debates. Theoretically, we discuss the relation between implicit frames in public debates and codification of information in scientific discourses, and suggest further avenues for research interested in the automated analysis of frame changes and trends in public debates. 
> _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam. Tel. +31-20-525 6598; fax: +31-842239111 loet at leydesdorff.net ; http://www.leydesdorff.net/ Visiting Professor 2007-2010, ISTIC, Beijing; Honorary Fellow 2007-2010, SPRU, University of Sussex Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated, 385 pp.; US$ 18.95; The Self-Organization of the Knowledge-Based Society ; The Challenge of Scientometrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From loet at LEYDESDORFF.NET Sun Oct 12 11:19:32 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sun, 12 Oct 2008 17:19:32 +0200 Subject: routines for co-word analysis further extended Message-ID: Dear colleagues: The routines ti.exe (at http://www.leydesdorff.net/software/ti/index.htm) and fulltext.exe (at http://www.leydesdorff.net/software/fulltext/index.htm) now additionally provide as output a file "words.dbf" (readable in Excel) which contains for all words the following summations: 1. A variable named "Chi_Sq" which provides the chi-square contribution of each of the variables (that is, words); this is defined for word(i) as chi^2(i) = Sum_j [(Observed(ij) - Expected(ij))^2 / Expected(ij)], that is, the sum of the cell contributions for this variable over its column, with one contribution for each row j (Mogoutov et al., 2008); 2. A variable named "ObsExp" which provides the sum of the absolute values |Observed - Expected| for the word as a variable, summed over the column; 3. A variable named "TfIdf" which uses Salton & McGill's (1983: 63) TermFrequency-InverseDocumentFrequency measure (but without Salton's additional + 1; Magerman et al., 2007), defined as follows: WEIGHT(ik) = FREQ(ik) * [log2 (n) - log2 (DOCFREQ(k))]. This function assigns a high degree of importance to terms occurring in only a few documents in the collection; 4. The word frequency within the set. 
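A minimal sketch of these four summations on a toy document-by-word matrix may clarify what the words.dbf columns contain. The data below are invented, and treating the per-word TfIdf value as the weights summed over documents is my assumption about the aggregation, since words.dbf stores one value per word:

```python
import numpy as np

# Toy document-by-word matrix: rows are documents, columns are words.
counts = np.array([
    [2, 0, 1],
    [0, 3, 1],
    [1, 1, 0],
], dtype=float)
n_docs, n_words = counts.shape

# Expected cell values under independence of rows and columns,
# as in a standard chi-square contingency analysis.
expected = (counts.sum(axis=1, keepdims=True)
            * counts.sum(axis=0, keepdims=True)) / counts.sum()

# 1. "Chi_Sq": the chi-square contribution of each word, i.e. the sum of
#    (Observed - Expected)^2 / Expected over the word's column.
chi_sq = ((counts - expected) ** 2 / expected).sum(axis=0)

# 2. "ObsExp": the sum of |Observed - Expected| over the column.
obs_exp = np.abs(counts - expected).sum(axis=0)

# 3. "TfIdf": FREQ(ik) * [log2(n) - log2(DOCFREQ(k))], without the "+ 1",
#    summed here over the documents to give one value per word.
docfreq = (counts > 0).sum(axis=0)
tfidf = (counts * (np.log2(n_docs) - np.log2(docfreq))).sum(axis=0)

# 4. The word frequency within the set.
word_freq = counts.sum(axis=0)
```

Words that occur in every document get a tf-idf weight of zero here, while the chi-square and |Observed - Expected| columns flag words whose distribution over documents deviates most from the marginal expectation.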
These statistics provide the researcher with opportunities to refine the list of words to be considered. References: Magerman, T., Van Looy, B., & Song, X. (2007). Exploring the feasibility and accuracy of Latent Semantic Analysis based text mining techniques to detect similarity between patent documents and scientific publications. Paper presented at the 6th Triple Helix Conference, 16-19 May 2007, Singapore. Mogoutov, A., Cambrosio, A., Keating, P., & Mustar, P. (2008). Biomedical innovation at the laboratory, clinical and commercial interface: A new method for mapping research projects, publications and patents in the field of microarrays. Journal of Informetrics (in press); doi:10.1016/j.joi.2008.06.005. ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam. Tel. +31-20-525 6598; fax: +31-842239111 loet at leydesdorff.net

From Jonathan at LEVITT.NET Sun Oct 12 19:23:03 2008 From: Jonathan at LEVITT.NET (Jonathan Levitt and Mike Thelwall) Date: Sun, 12 Oct 2008 19:23:03 -0400 Subject: Is multidisciplinary research more highly cited? A macro-level study Message-ID: Levitt, J.M. and Thelwall, M. (2008). Is multidisciplinary research more highly cited? A macro-level study. Journal of the American Society for Information Science and Technology, 59(12), 1973-1984. Inter-disciplinary collaboration is a major goal in research policy. This study uses citation analysis to examine diverse subjects in the Web of Science and Scopus to ascertain whether, in general, research published in journals classified in more than one subject is more highly cited than research published in journals classified in a single subject. For each subject the study divides the journals into two disjoint sets called Multi and Mono: Multi consists of all journals in the subject and at least one other subject, whereas Mono consists of all journals in the subject and in no other subject.
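The Multi/Mono partition just defined can be sketched as follows (the function name and the journal-to-subjects mapping are hypothetical illustrations, not data structures from the paper):

```python
def split_multi_mono(subject, journal_subjects):
    """Partition the journals classified in `subject` into Multi and Mono.

    journal_subjects maps journal -> set of subject categories.
    Multi: journals in `subject` and at least one other subject.
    Mono: journals in `subject` and in no other subject.
    """
    multi, mono = set(), set()
    for journal, subjects in journal_subjects.items():
        if subject in subjects:
            (multi if len(subjects) > 1 else mono).add(journal)
    return multi, mono
```

By construction the two sets are disjoint and together cover every journal classified in the subject.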
The main findings are: (a) For social science subject categories in both the Web of Science and Scopus, the average citation levels of articles in Mono and Multi are very similar, and (b) For Scopus subject categories within Life Sciences, Health Sciences, and Physical Sciences, the average citation level of Mono articles is roughly twice that of Multi articles. Hence one cannot assume that, in general, multidisciplinary research will be more highly cited, and the converse is probably true for many areas of science. A policy implication is that, at least in the sciences, multidisciplinary researchers should not be evaluated by citations on the same basis as mono-disciplinary researchers. Reported in the Times Higher Education (REF could penalise those working across disciplines, http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=403796&c=2) Jonathan Levitt and Mike Thelwall

From loet at LEYDESDORFF.NET Mon Oct 13 03:00:51 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Mon, 13 Oct 2008 09:00:51 +0200 Subject: Is multidisciplinary research more highly cited? A macro-level study In-Reply-To: Message-ID: Dear Jonathan and Mike, Thank you for this empirical confirmation of what everybody already guessed. I would hypothesize that a determining factor for citation rates is the density of the network of citations in which one publishes. (This is also the philosophy behind the Leiden normalization.) Allocation mechanisms on the basis of citations would then discourage risk-taking, and encourage remaining sheltered in the center of established specialties. Normalization in the analysis would further reinforce this because the nearly decomposable systems (for evolutionary reasons) are more dense in the center and more loosely woven in the periphery. Normalization tends to cut off the fuzziness and to highlight the centroids in the clouds for the sake of clarity in the representation.
(Of course, journals like Science and Nature are a different cup of coffee.) It seems to me that this may be one of the main messages of this field to science-policy makers: one cannot have it both ways. A focus on interdisciplinarity may lead to innovation and "creative destruction" (Schumpeter), but innovation will usually fail. It is entrepreneurial in style. If one wants high rankings in an organization controlled by professional reputations (h-index?), one should think twice before taking risks (as everyone on tenure-track knows). Best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/

> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jonathan Levitt and Mike Thelwall
> Sent: Monday, October 13, 2008 1:23 AM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: [SIGMETRICS] Is multidisciplinary research more highly cited? A macro-level study

From jonathan.adams at EVIDENCE.CO.UK Mon Oct 13 04:39:55 2008 From: jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams) Date: Mon, 13 Oct 2008 09:39:55 +0100 Subject: Is multidisciplinary research more highly cited? A macro-level study Message-ID: People may also be interested in Evidence Ltd's earlier study on interdisciplinary research for the Higher Education Funding Council for England, as part of the background development of the Research Excellence Framework. http://www.hefce.ac.uk/pubs/rdreports/2007/rd19_07/ This report is a background document to HEFCE's consultation on proposals for the Research Excellence Framework ('Research Excellence Framework: consultation on the assessment and funding of higher education research post-2008', HEFCE 2007/34).
We set out to develop an acceptable indicator of 'interdisciplinarity' at the article level, rather than looking at articles in multidisciplinary journals. The report therefore considers how interdisciplinary research could be defined for the purposes of bibliometric analysis, and whether such research is systematically cited less often and could potentially be disadvantaged in a bibliometrics-based system of research assessment. The Evidence study found that there is no strong case for research outputs to be treated differently for the purposes of research assessment on the grounds of interdisciplinarity, but advises that bibliometric analysis of such outputs should be carried out carefully to ensure they are treated appropriately. Of course, the way in which we categorise more or less interdisciplinary research is important. We exposed the methodology to senior researchers who had worked on previous RAE panels. They agreed that our working definition seemed to make intuitive sense in the context of their (various) fields - which include social science. Regards Jonathan Adams Evidence Ltd 103 Clarendon Road, Leeds LS2 9DF, UK t/ +44 (0) 113 384 5680 Registered in England No. 4036650 VAT Registration: GB 758 4671 85 Please note that Evidence Ltd does not enter into any form of contract via this medium, nor is our staff authorised to do so on our behalf.

-----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jonathan Levitt and Mike Thelwall Sent: 13 October 2008 00:23 To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Is multidisciplinary research more highly cited? A macro-level study Levitt, J.M. and Thelwall, M. (2008). Is multidisciplinary research more highly cited? A macro-level study. Journal of the American Society for Information Science and Technology, 59(12), 1973-1984.

From loet at LEYDESDORFF.NET Mon Oct 13 05:04:56 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Mon, 13 Oct 2008 11:04:56 +0200 Subject: Is multidisciplinary research more highly cited? A macro-level study In-Reply-To: <592818C3D92EAE4791031E3E622927B30146EA@evidence1.Evidence.local> Message-ID: Dear Jonathan, If both of you are right -- which I have no reason to doubt -- then the aggregation at the journal level would make the difference.
In that case, this would be a nice example of the so-called ecological fallacy: what one can see at the level of the wood, one cannot see at the level of individual trees, and vice versa (Robinson, 1950). Given the urgency of the matter in the UK, it may be worth testing this hypothesis: if you would reorganize your sample in terms of the journals involved, would then ...? Or, in other words, would interdisciplinary articles in disciplinary journals do better than average? Best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/

> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Jonathan Adams
> Sent: Monday, October 13, 2008 10:40 AM
> To: SIGMETRICS at listserv.utk.edu
> Subject: [SIGMETRICS] Is multidisciplinary research more highly cited? A macro-level study

From jonathan.adams at EVIDENCE.CO.UK Mon Oct 13 05:39:03 2008 From: jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams) Date: Mon, 13 Oct 2008 10:39:03 +0100 Subject: Is multidisciplinary research more highly cited?
A macro-level study Message-ID: I think Michel Zitt drew attention to this when he pointed out that the 'zoom' with which you look at an item makes a difference to its citation calibration. Context is all! And I agree: there are further tests to elaborate the findings and the initial hypotheses that emerge. So, we need a sponsor! Jonathan Adams Evidence Ltd 103 Clarendon Road, Leeds LS2 9DF, UK t/ +44 (0) 113 384 5680

-----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff Sent: 13 October 2008 10:05

From loet at LEYDESDORFF.NET Mon Oct 13 06:20:46 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Mon, 13 Oct 2008 12:20:46 +0200 Subject: Is multidisciplinary research more highly cited? A macro-level study In-Reply-To: <592818C3D92EAE4791031E3E622927B3014703@evidence1.Evidence.local> Message-ID: Dear Jonathan, I mean something simple using your data. You could for example plot the citation scores of your "interdisciplinary" papers in decreasing order of citedness (one expects a Lotka-type distribution), and then add as a second curve the impact factors of the journals. One could then get an impression of whether the noted effect could mainly be caused by the upper end of the curve (interdisciplinary articles cited more than average in disciplinary journals) or by the long tail of "interdisciplinary papers" in the "garbage can". But I see immediately that one may have to differentiate this for fields of science, and then it easily becomes more complex. Nevertheless, I would be interested to know in which journals the top-100 (in terms of citation rates) of your "interdisciplinary" papers were published. Is that in the report?
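The suggested diagnostic — rank papers by citations and overlay the journal impact factors — could be sketched as follows (the function name and the `(citations, journal_if)` tuple format are hypothetical illustrations, not part of any dataset discussed in the thread):

```python
def rank_curves(papers):
    """Sort papers by citation count, descending, and return the paired
    citation curve and journal-impact-factor curve for plotting.

    papers: list of (citations, journal_if) tuples (hypothetical format).
    A Lotka-type decay is expected in the first curve; comparing the two
    curves shows whether highly cited "interdisciplinary" papers sit in
    high- or low-impact journals.
    """
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    citations = [c for c, _ in ranked]
    impact_factors = [jif for _, jif in ranked]
    return citations, impact_factors
```

Plotting both returned lists against rank would show whether the effect is driven by the upper end of the distribution or by its long tail.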
Best, Loet On Mon, Oct 13, 2008 at 11:39 AM, Jonathan Adams <jonathan.adams at evidence.co.uk> wrote:
> I think Michel Zitt drew attention to this when he pointed out that the 'zoom' with which you look at an item makes a difference to its citation calibration. Context is all!
> And I agree: there are further tests to elaborate the findings and the initial hypotheses that emerge. So, we need a sponsor!
> Jonathan Adams
> Evidence Ltd
> 103 Clarendon Road, Leeds LS2 9DF, UK
> t/ +44 (0) 113 384 5680

-- Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/

From katy at INDIANA.EDU Mon Oct 13 09:54:25 2008 From: katy at INDIANA.EDU (Katy Borner) Date: Mon, 13 Oct 2008 09:54:25 -0400 Subject: Complex Systems tenure-track position at IUB Message-ID: Dear all, Please feel free to contact Filippo Menczer, Luis Rocha, Alessandro Flammini (all three cc'd), or me if you have any questions regarding the position below. Best regards, k The School of Informatics at Indiana University in Bloomington seeks a tenure-track appointment at any level in the area of Complex Systems. Applicants should have a Ph.D. in any relevant field such as Computational, Systems, Cognitive, Physical, Mathematical, Economic, and Information Sciences; research interests in Complex Systems; and a well-established record or demonstrable potential for excellence in research and teaching. The School of Informatics at Indiana University is the first of its kind and among the largest in the country, with a faculty of more than 80 full-time members, 150 doctoral students, 200 masters students, and a large number of undergraduates majoring in either Informatics or Computer Science. ComputerWorld ranked Informatics as a "top-ten program to watch" thanks to its excellence and leadership in academic programs, interdisciplinary research, placement, and outreach. The Complex Systems group (http://cx.informatics.indiana.edu) currently has 10 faculty members with strong ties to several other units at Indiana University that pursue research in the field of Complex Systems, including Cognitive Science, Psychology, Physics, the Biocomplexity Institute, and the School of Library and Information Science. The research sub-areas in the group include complex networks, artificial life and robotics, computational intelligence, bio-inspired computing, computational biology and epidemiology, large scale data modeling and simulation, and Web applications.
We are particularly interested in strengthening our emphasis on modeling the dynamics of complex information networks, social networks and media, and their broad societal implications. We also encourage applicants whose interests link to other areas of strength of the School of Informatics including Computer Science, HCI, Social Informatics, Security, Life Science, and Music. Successful applicants are expected to conduct an independent research program, establish scientific collaborations with present members of the group, participate in joint grant efforts, teach relevant graduate and undergraduate courses, and mentor graduate students. We have excellent work conditions including low teaching loads, attractive salaries, and world-class computing, networking, and library facilities. Located on the wooded, rolling hills of southern Indiana, Bloomington is a culturally thriving college town with moderate cost of living. It is renowned for its top-ranked music school, performing and fine arts, historic campus, food tourism, cycling traditions, active lifestyle, and natural beauty. Applicants should submit a curriculum vitae, a statement of research and teaching, and the names of three references (six for associate and full professors) using the online application form. Supporting materials can also be sent to: Chair, Faculty Search Committee School of Informatics 919 E. 10th Street Bloomington, IN 47408 E-mail: hiring at informatics.indiana.edu Review of applicants will begin immediately and continue until the position is filled. Indiana University is an Equal Opportunity/Affirmative Action employer. Applications from women and minorities are strongly encouraged. http://www.informatics.indiana.edu/hiring/cx-2009.asp -- Katy Borner, Victor H. 
Yngve Associate Professor of Information Science Director of the Cyberinfrastructure for Network Science Center http://cns.slis.indiana.edu/ School of Library and Information Science, Indiana University 10th Street & Jordan Avenue Phone: (812) 855-3256 Fax: -6166 Wells Library 021 E-mail: katy at indiana.edu Bloomington, IN 47405, USA WWW: ella.slis.indiana.edu/~katy Mapping Science exhibit is currently on display at - National Science Foundation, 10th Floor, 4201 Wilson Boulevard, Arlington, VA, permanent display. - National Research Council in Ottawa, Canada, April 3-Aug. 29, 2008. - National Science Library of the Chinese Academy of Sciences, Beijing, China, May 17-Nov. 15, 2008. http://scimaps.org From Jonathan at LEVITT.NET Tue Oct 14 21:27:15 2008 From: Jonathan at LEVITT.NET (Jonathan Levitt) Date: Tue, 14 Oct 2008 21:27:15 -0400 Subject: Is multidisciplinary research more highly cited? A macro-level study Message-ID: Dear Loet and Jonathan, Thank you very much for your interesting responses to our posting. I am replying without first consulting Mike, as he is currently away. In our investigation of science categories (for articles in Scopus published in 1995) we found that the citation levels of Multi-disciplinary articles were on average roughly half those of Mono-disciplinary articles. This finding (Table 8 of our paper) contrasts sharply with that of Evidence Ltd: "If articles are indexed on their normalised citation impact, there is no reason to suppose that those which appear to be more interdisciplinary will be in any way systematically disadvantaged" (Bibliometric analysis of interdisciplinary research, 2007, pp. 57-58, http://www.hefce.ac.uk/pubs/rdreports/2007/rd19_07/rd19_07.doc). One possible reason for the contrasting findings is that Levitt & Thelwall use the subject designations of Scopus, whereas Evidence Ltd. uses the designations of Current Contents. 
In order to illustrate how these designations differ, let us identify the categories containing "engineering". Scopus has only two categories containing the word (Chemical Engineering; Engineering), whereas the Current Contents' subject area of "Engineering, Computing and Technology" has ten categories (Chemical Engineering; Civil Engineering; Computer Science & Engineering; Electrical and Electronics Engineering; Engineering Management; Engineering Mathematics; Environmental Engineering & Energy; Geological, Petroleum & Mining Engineering; Materials Science & Engineering; Mechanical Engineering). Could the prevalence of closely allied subject categories in Current Contents at least in part account for the differences in our findings? A simple way of addressing this question is for Evidence Ltd. to clarify the extent to which their findings vary between subject areas (I think they use the subject areas of Agriculture, Biology and Environmental Sciences; Clinical Medicine; Engineering, Computing and Technology; Life Sciences; Physical, Chemical and Earth Sciences). These data would help support or refute the conjecture that their findings could have been shaped by subject areas that have many closely allied subject categories. Evidence Ltd. wrote "We would conclude that there is no reason to single out interdisciplinary research for differential treatment under a metrics-based system of assessment" (Bibliometric analysis of interdisciplinary research, 2007, p. 58). Given that the findings of Levitt & Thelwall and Evidence Ltd. differ strongly and the urgency of the matter in the UK, I agree with both Loet and Jonathan on the importance of further investigation. Please excuse any typing errors; I am visually challenged and at this late hour cannot ask a sighted person to proof-read my posting. Best wishes, Jonathan. 
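Jonathan's conjecture about closely allied categories can be made concrete with a toy example. The journal and both sets of category assignments below are hypothetical; the point is only that the same journal can count as "Multi" under a fine-grained scheme like Current Contents yet as "Mono" under a coarser scheme like Scopus.

```python
# A journal is Multi under a classification scheme if the scheme assigns
# it to more than one category, otherwise Mono.

def is_multi(journal, scheme):
    return len(scheme[journal]) > 1

journal = "Hypothetical J. of Electro-Mechanical Engineering"

# Fine-grained scheme: two closely allied engineering categories.
fine_grained = {journal: {"Electrical and Electronics Engineering",
                          "Mechanical Engineering"}}
# Coarse scheme: the allied categories collapse into a single one.
coarse = {journal: {"Engineering"}}

print(is_multi(journal, fine_grained))  # True  -> counted as Multi
print(is_multi(journal, coarse))        # False -> counted as Mono
```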
Date: Mon, 13 Oct 2008 12:20:46 +0200 Reply-To: ASIS&T Special Interest Group on Metrics Sender: ASIS&T Special Interest Group on Metrics From: Loet Leydesdorff Subject: Re: Is multidisciplinary research more highly cited? A macro-level study In-Reply-To: <592818C3D92EAE4791031E3E622927B3014703 at evidence1.Evidence.local> Content-Type: multipart/alternative; Dear Jonathan, I mean something simple using your data. You could for example plot the citation scores of your "interdisciplinary" papers in decreasing order of citedness (one expects a Lotka-type distribution), and then add as a second curve the impact factors of the journals. One could then get an impression of whether the noted effect is mainly caused by the upper end of the curve (interdisciplinary articles cited more than average in disciplinary journals) or by the long tail of "interdisciplinary papers" in the "garbage can". But I see immediately that one may have to differentiate this for fields of science, and then it easily becomes more complex. Nevertheless, I would be interested to know in which journals the top-100 (in terms of citation rates) of your "interdisciplinary" papers were published. Is that in the report? Best, Loet On Mon, Oct 13, 2008 at 11:39 AM, Jonathan Adams <jonathan.adams at evidence.co.uk> wrote: > Administrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > I think Michel Zitt drew attention to this when he pointed out that the > 'zoom' with which you look at an item makes a difference to its citation > calibration. Context is all! > And I agree: there are further tests to elaborate the findings and the > initial hypotheses that emerge. So, we need a sponsor! 
> Jonathan Adams > Evidence Ltd > 103 Clarendon Road, Leeds LS2 9DF, UK > t/ +44 (0) 113 384 5680 > -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff Sent: 13 October 2008 10:05 > Dear Jonathan, > If both of you are right -- which I have no reason to doubt -- then the aggregation at the journal level would make the difference. In that case, this would be a nice example of the so-called ecological fallacy: what one can see at the level of the wood, one cannot see at the level of individual trees, and vice versa (Robinson, 1950). > Given the urgency of the matter in the UK, it may be worth testing this hypothesis: if you would reorganize your sample in terms of the journals involved, would then ...? Or, in other words, would interdisciplinary articles in disciplinary journals do better than average? > Best wishes, > Loet > ________________________________ > Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20-525 6598; fax: +31-20-525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > > -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Jonathan Adams Sent: Monday, October 13, 2008 10:40 AM To: SIGMETRICS at listserv.utk.edu Subject: [SIGMETRICS] Is multidisciplinary research more highly cited? A macro-level study > > Administrative info for SIGMETRICS (for example unsubscribe): http://web.utk.edu/~gwhitney/sigmetrics.html > > People may also be interested in Evidence Ltd's earlier study on interdisciplinary research for the Higher Education Funding Council for England, as part of the background development of the Research Excellence Framework. 
> > http://www.hefce.ac.uk/pubs/rdreports/2007/rd19_07/ > > This report is a background document to HEFCE's consultation on > > proposals for the Research Excellence Framework ('Research Excellence > > Framework: consultation on the assessment and funding of higher > > education research post-2008', HEFCE 2007/34). > > We set out to develop an acceptable indicator of 'interdisciplinarity' > > at the article level, rather than looking at articles in > > multidisciplinary journals. The report therefore considers how > > interdisciplinary research could be defined for the purposes of > > bibliometric analysis, and whether such research is > > systematically cited > > less often and could potentially be disadvantaged in a > > bibliometrics-based system of research assessment. > > The Evidence study found that there is no strong case for research > > outputs to be treated differently for the purposes of research > > assessment on the grounds of interdisciplinarity, but advises that > > bibliometric analysis of such outputs should be carried out > > carefully to > > ensure they are treated appropriately. > > Of course, the way in which we categorise more or less > > interdisciplinary > > research is important. We exposed the methodology to senior > > researchers > > who had worked on previous RAE panels. They agreed that our working > > definition seemed to make intuitive sense in the context of their > > (various) fields - which include social science. > > Regards > > Jonathan Adams > > > > Evidence Ltd > > 103 Clarendon Road, Leeds LS2 9DF, UK > > t/ +44 (0) 113 384 5680 > > > > Registered in England No. 4036650 > > VAT Registration: GB 758 4671 85 > > > > Please note that Evidence Ltd does not enter into any form of contract > > via this medium, nor is our staff authorised to do so on our behalf. 
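Loet's ecological-fallacy point in the exchange above (aggregation at the journal level and at the article level can point in opposite directions) can be made concrete with a toy example. All numbers below are invented: interdisciplinary articles out-cite disciplinary ones inside every journal, yet pooled over all articles the ranking reverses, because the interdisciplinary articles sit mostly in low-cited journals.

```python
from statistics import mean

# Invented citation counts: each journal lists the citations of its
# interdisciplinary ("inter") and disciplinary ("mono") articles.
journals = {
    "Disciplinary J.":      {"inter": [10],      "mono": [9, 9, 9]},
    "Interdisciplinary J.": {"inter": [3, 3, 3], "mono": [2]},
}

def pooled(kind):
    """Mean citations over all articles of the given kind, all journals."""
    return mean(c for j in journals.values() for c in j[kind])

# Within each journal, interdisciplinary articles do better...
for name, j in journals.items():
    assert mean(j["inter"]) > mean(j["mono"]), name

# ...yet pooled over all articles the ranking reverses.
print(pooled("inter"), pooled("mono"))  # 4.75 7.25
```

This is the wood-versus-trees reversal (a Simpson's-paradox-style effect) that makes it worth re-analysing the same sample at both levels of aggregation.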
> > -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jonathan Levitt and Mike Thelwall Sent: 13 October 2008 00:23 To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Is multidisciplinary research more highly cited? A macro-level study > > Administrative info for SIGMETRICS (for example unsubscribe): http://web.utk.edu/~gwhitney/sigmetrics.html > > Levitt, J.M. and Thelwall, M. (2008). Is multidisciplinary research more highly cited? A macro-level study. Journal of the American Society for Information Science and Technology, 59(12), 1973-1984. > > Inter-disciplinary collaboration is a major goal in research policy. This study uses citation analysis to examine diverse subjects in the Web of Science and Scopus to ascertain whether, in general, research published in journals classified in more than one subject is more highly cited than research published in journals classified in a single subject. For each subject the study divides the journals into two disjoint sets called Multi and Mono: Multi consists of all journals in the subject and at least one other subject, whereas Mono consists of all journals in the subject and in no other subject. The main findings are: (a) For social science subject categories in both the Web of Science and Scopus, the average citation levels of articles in Mono and Multi are very similar, and (b) For Scopus subject categories within Life Sciences, Health Sciences, and Physical Sciences, the average citation level of Mono articles is roughly twice that of Multi articles. Hence one cannot assume that, in general, multi-disciplinary research will be more highly cited, and the converse is probably true for many areas of science. A policy implication is that, at least in the sciences, multi-disciplinary researchers should not be evaluated by citations on the same basis as mono-disciplinary researchers. > > Reported in the Times Higher Education (REF could penalise those working across disciplines, http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=403796&c=2) > > Jonathan Levitt and Mike Thelwall From katy at INDIANA.EDU Tue Oct 14 22:38:17 2008 From: katy at INDIANA.EDU (Katy Borner) Date: Tue, 14 Oct 2008 22:38:17 -0400 Subject: Scientific Publications 3.0 starts today In-Reply-To: <28254865.1222860917528.JavaMail.Administrator@WEB3> Message-ID: Started on 10/1. Interesting concept. Highly relevant discussion. k news at interdisciplines.org wrote: > Dear All, > Welcome to the online conference on Scientific Publications 3.0, which starts today at http://www.interdisciplines.org/liquidpub > This conference is part of the LiquidPublication project, a European project of designing a new platform for scientific publications ( http://wiki.liquidpub.org ). Today you will find online the first paper by David Koepsell on how technology and the Open Source movement can save science, a very provocative paper in which David argues that the social Web can now bring us back to the real meaning and value of scientific research, by ending the "publish or perish" pressure. I hope that you will have the time in the next 15 days to have a look at the paper and react with useful suggestions about your own experience as researchers and possible solutions. > The website is extremely user-friendly: just click on the title of the paper to open it. Then, if you want to discuss it, start a new discussion at the right of the paper or join an ongoing discussion. 
As invited discussants, you have direct access to the forum: you will be asked for your e-mail (the one at which you have been contacted) and a password (your family name in lower case). Your e-mail won't be shared with others. Please contact me if you experience any problem with the site: origgi at ehess.fr or gloria.origgi at gmail.com or by phone: +331 43202250 > Hope to read you soon on www.interdisciplines.org/liquidpub > Gloria Origgi (CNRS - Institut Nicod) > Judith Simon (Stanford University) > ________________________________________________ > To unsubscribe from this newsletter, follow the link below: > Pour vous désabonner, veuillez cliquer ci-dessous : > http://www.interdisciplines.org/unsubscribe/symposium/17/email/katy%40indiana%2Eedu > © 2008 interdisciplines.org -- Katy Borner, Victor H. Yngve Associate Professor of Information Science Director of the Cyberinfrastructure for Network Science Center School of Library and Information Science, Indiana University 10th Street & Jordan Avenue Phone: (812) 855-3256 Fax: -6166 Wells Library 021 E-mail: katy at indiana.edu Bloomington, IN 47405, USA WWW: ella.slis.indiana.edu/~katy Mapping Science exhibit is currently on display at - National Science Foundation, 10th Floor, 4201 Wilson Boulevard, Arlington, VA, permanent display. - National Research Council in Ottawa, Canada, April 3-Aug. 29, 2008. - National Science Library of the Chinese Academy of Sciences, Beijing, China, May 17-Nov. 15, 2008. http://scimaps.org From isidro at CINDOC.CSIC.ES Wed Oct 15 11:16:04 2008 From: isidro at CINDOC.CSIC.ES (Isidro F. 
Aguillo) Date: Wed, 15 Oct 2008 17:16:04 +0200 Subject: New paper published in the e-journal Cybermetrics Message-ID: *The h-index of a conglomerate* Ronald Rousseau, Raf Guns, Yuxian Liu Cybermetrics, 12 (2008): Issue 1, Paper 2 http://www.cindoc.csic.es/cybermetrics/articles/v12i1p2.html http://www.cindoc.csic.es/cybermetrics/articles/v12i1p2.pdf Abstract: Conglomerates are generalized frameworks for informetric research. In this article, the h-index of a conglomerate is defined and it is shown how this construction generalizes the standard definition of the h-index. It is further shown how non-trivial constructions, such as Prathap's h-indices, fit well into a conglomerate framework. An example illustrates the use of the conglomerate framework. -- **************************** Isidro F. Aguillo Laboratorio de Cibermetría / Cybermetrics Lab CCHS - CSIC Joaquin Costa, 22 28002 Madrid, Spain isidro @ cindoc.csic.es +34-91-5635482 ext 313 **************************** From Chris.Armbruster at EUI.EU Wed Oct 15 11:24:57 2008 From: Chris.Armbruster at EUI.EU (Armbruster, Chris) Date: Wed, 15 Oct 2008 17:24:57 +0200 Subject: Anybody know isast.org and QQML 2009? Message-ID: I cannot make out if this is very sophisticated spam or the real thing..... Apologies if I cause any offense, but I got an invitation that looks like spam. http://isast.org/home.html QQML2009 Qualitative and Quantitative Methods in Libraries International Conference Chania, Crete, Greece: 26-29 of May 2009 Qualitative and Quantitative Methods (QQM) are proved more and more popular tools for Librarians, because of their usefulness to the everyday professional life. QQM aim to the assessment and improvement of the services, to the measurement of the functional effectiveness and efficiency. QQM are the mean to make decisions on fund allocation and financial alternatives. Librarians use also QQM in order to determine why and when their users appreciate their services. 
This is the start point of the innovation involvement and the ongoing procedure of the excellent performance.Systematic development of quality management in libraries requires a detailed framework, including the quality management standards, the measurement indicators, the self-appraisal schedules and the operational rules. These standards are practice-oriented tools and a benchmarking result. Their basic function is to express responsibly the customer (library user) -supplier (library services) relationship and provide a systematic approach to the continuous change onto excellence.The indoor and outdoor relationships of libraries are dependent of their communication and marketing capabilities, challenges, opportunities and implementation programmes. From ricardo.arencibia at CNIC.EDU.CU Thu Oct 16 12:50:32 2008 From: ricardo.arencibia at CNIC.EDU.CU (MSc. Ricardo Arencibia Jorge) Date: Thu, 16 Oct 2008 12:50:32 -0400 Subject: New article on H-index Message-ID: Dear colleagues. I am sending the link to a new paper on h index. The sample studied is a group of highly productive Cuban neuroscientists PRODUCTIVITY AND VISIBILITY OF CUBAN NEUROSCIENTISTS: BIBLIOMETRIC STUDY OF THE PERIOD 2001-2005 Dorta-Contreras AJ, Arencibia-Jorge R, Marti-Lahera Y, Araujo-Ruiz JA. Neurosciences have an important place inside the scientific development of Ibero American countries, and particularly in Cuba. The objective of the current work is to analyze the productivity and visibility of Cuban neuroscientists in the period 2001-2005, and the value of H index as evaluation tool. Web of Science and Scopus were the databases used as information sources. The 24 most productive Cuban neuroscientists in Web of Science were identified, and their scientific production in Scopus was retrieved. 
For each author, in each database, the following indicators were calculated: total number of published articles, total number of cited articles, proportion of cited articles, total number of citations received, average number of citations received per article, and H index. Some variations in the calculated indicators were observed in Scopus with respect to Web of Science. The wide coverage of this database exerted influence on the increment of scientists' productivity, as well as on the increment of H index values. The possible incorporation of citation analysis in the processes of evaluation and analysis of the scientific activity in Cuba was considered, in order to evaluate the advances in the Neurosciences field. [REV NEUROL 2008; 47: 355-60] Key words. Bibliometric indicators. Cuba. Impact. Neurosciences. Scientific productivity. Full text. http://www.neurologia.com/sec/resumen.php?or=web&i=e&id=2008373 Friendly greetings, MSc. Ricardo Arencibia Jorge Network of Scientometric Studies for Higher Education National Scientific Research Center, Havana, Cuba. ricardo.arencibia at cnic.edu.cu http://www.directorioexit.info/consulta.php?directorio=exit&campo=ID&texto=478 -------------- next part -------------- An HTML attachment was scrubbed... 
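The H index used in the study above is the standard Hirsch index: the largest h such that the author has h articles each cited at least h times. A minimal sketch, with invented citation counts:

```python
# Standard Hirsch h-index: sort citation counts in decreasing order and
# find the largest rank h whose count is still >= h.

def h_index(citation_counts):
    """Largest h with at least h papers cited >= h times each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True),
                                 start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # 3: three papers with >= 3 citations
print(h_index([]))                   # 0
```

The same function also illustrates why database coverage matters, as the abstract notes: feeding it the larger citation counts retrieved from a wider database can only keep the h value the same or raise it.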
URL: From garfield at CODEX.CIS.UPENN.EDU Fri Oct 17 10:45:58 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Fri, 17 Oct 2008 10:45:58 -0400 Subject: Dang, Y; Zhang, YL; Suakkaphong, N; Larson, C; Chen, HC An integrated approach to mapping worldwide bioterrorism research capabilities ISI 2008: 2008 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SECURITY INFORMATICS 212-214, 2008 Message-ID: Email Address: ydang at email.arizona.edu Author(s): Dang, Y (Dang, Yan); Zhang, YL (Zhang, Yulei); Suakkaphong, N (Suakkaphong, Nichalin); Larson, C (Larson, Cathy); Chen, HC (Chen, Hsinchun) Title: An integrated approach to mapping worldwide bioterrorism research capabilities Editor(s): Yang, CC; Chen, WH; Hsu, WL; Wu, TC; Zeng, D Source: ISI 2008: 2008 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SECURITY INFORMATICS 212-214, 2008 Language: English Document Type: Article Conference Title: IEEE International Conference on Intelligence and Security Informatics (ISI 2008) Conference Date: JUN 17-20, 2008 Conference Location: Taipei, TAIWAN Conference Sponsors: Cent Police Univ, Natl Taiwan Univ Sci & Technol, Academia Sinica, Inst Informat Sci, Natl Taiwan Univ, IEEE Intelligent Transportat Syst Soc, IEEE Syst, Man, & Cybernte Soc, Hewlett Packard Taiwan Ltd, IBM Taiwan Corp, SolventoSOFT Tech Corp, Thales Transport & Secur, Noetech Photoelect Inc, Alpha Pricing Co Ltd, Natl Secur Agcy, Minist Interior, Minist Econ Affairs, Coast Guard Adm, Off Homeland Secur, Natl Immigrat Bur, Telecom Technol Ctr, Police Res Assoc, Minist Educ, Natl Sci Council, Sci & Technol Advisory Grp Execut Yuan, Res, Dev & Evaluat Commiss, Natl Police Agcy, Criminal Invest Bur, Investigat Bur, Minist Justice, Chung Shan Inst Sci & Technol, Taiwan Stock Exchang Corp, Natl Sci Fdn, Henry C Lee Forens Sci Fdn Author Keywords: biodefense; bioterrorism research literature; knowledge mapping KeyWords Plus: TERRORISM RESEARCH; KNOWLEDGE Abstract: Biomedical 
research used for defense purposes may also be applied to biological weapons development. To mitigate risk, the U.S. Government has attempted to monitor and regulate biomedical research labs, especially those that study bioterrorism agents/diseases. However, monitoring worldwide biomedical researchers and their work is still an issue. In this study, we developed an integrated approach to mapping worldwide bioterrorism research literature. By utilizing knowledge mapping techniques, we analyzed the productivity status, collaboration status, and emerging topics in the bioterrorism domain. The analysis results provide insights into the research status of bioterrorism agents/diseases and thus allow a more comprehensive view of bioterrorism researchers and ongoing work. Addresses: Univ Arizona, Dept Management Informat Syst, Tucson, AZ 85721 USA. Reprint Address: Dang, Y, Univ Arizona, Dept Management Informat Syst, Tucson, AZ 85721 USA. Cited Reference Count: 13 Publisher Name: IEEE Publisher Address: 345 E 47TH ST, NEW YORK, NY 10017 USA ISBN: 978-1-4244-2414-6 Source Item Page Count: 3 Subject Category: Computer Science, Artificial Intelligence; Computer Science, Information Systems; Computer Science, Theory & Methods ISI Document Delivery No.: BID97 BRUIJN B P EFMI WORKSH NAT LA : 1 2002 CHEN H MAPPING NANOTECHNOLO : COHEN AM A survey of current work in biomedical text mining BRIEF BIOINFORM 6 : 57 2005 HU X 2005 IEEE INT C INT 2005 KENNEDY LW DEV FDN POLICY RELEV : 2003 KOHANE IS The contributions of biomedical informatics to the fight against bioterrorism J AM MED INFORM ASSN 9 : 116 2002 MERARI A TERRORISM POLITICAL 3 : 88 1991 REID EF Mapping the contemporary terrorism research domain INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES 65 : 42 2007 REID EOF Evolution of a body of knowledge: An analysis of terrorism research INFORM PROCESS MANAG 33 : 91 1997 SHNEIDERMAN B P IEEE S VIS LANG WA : 1996 SILKE A TERRORISM POLITICAL 13 : 1 2001 SWANSON DR FISH OIL, RAYNAUDS 
SYNDROME, AND UNDISCOVERED PUBLIC KNOWLEDGE PERSPECT BIOL MED 30 : 7 1986 ZHU B Information visualization ANNU REV INFORM SCI 39 : 139 2005 From garfield at CODEX.CIS.UPENN.EDU Fri Oct 17 14:40:00 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 17 Oct 2008 14:40:00 -0400 Subject: Jacso, P (Jacso, Peter) Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster LIBRARY TRENDS, 56 (4): 784-815 SPR 2008 Message-ID: Email Address: jacso at hawaii.edu Author(s): Jacso, P (Jacso, Peter) Title: Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster Source: LIBRARY TRENDS, 56 (4): 784-815 SPR 2008 Language: English Document Type: Article Keywords Plus: SCIENTIFIC-RESEARCH OUTPUT; HIRSCH-TYPE INDEXES; BIBLIOMETRIC INDICATORS; CITATION ANALYSIS; JOURNALS; INFORMATION; DATABASES; RANKING; AUTHORS; COUNTS Abstract: This paper focuses on the practical limitations in the content and software of the databases that are used to calculate the h-index for assessing the publishing productivity and impact of researchers. To celebrate F. W. Lancaster's biological age of seventy-five, and "scientific age" of forty-five, this paper discusses the related features of Google Scholar, Scopus, and Web of Science (WoS), and demonstrates in the latter how a much more realistic and fair h-index can be computed for F. W. Lancaster than the one produced automatically. 
Browsing and searching the cited reference index of the 1945-2007 edition of WoS, which in my estimate has over a hundred million "orphan references" that have no counterpart master records to be attached to, and "stray references" that cite papers which do have master records but cannot be identified by the matching algorithm because of errors of omission and commission in the references of the citing works, can bring up hundreds of additional cited references given to works of an accomplished author but are ignored in the automatic process of calculating the h-index. The partially manual process doubled the h-index value for F. W. Lancaster from 13 to 26, which is a much more realistic value for an information scientist and Professor of his stature. Addresses: Univ Hawaii, Dept Informat & Comp Sci, Honolulu, HI 96822 USA Reprint Address: Jacso, P, Univ Hawaii, Dept Informat & Comp Sci, Honolulu, HI 96822 USA. Cited Reference Count: 71 Times Cited: 1 Publisher: JOHNS HOPKINS UNIV PRESS Publisher Address: JOURNALS PUBLISHING DIVISION, 2715 NORTH CHARLES ST, BALTIMORE, MD 21218-4363 USA ISSN: 0024-2594 29-char Source Abbrev.: LIBR TRENDS ISO Source Abbrev.: Libr. Trends Source Item Page Count: 32 Subject Category: Information Science & Library Science ISI Document Delivery No.: 345HW ASHKANASY NM Playing the citations game JOURNAL OF ORGANIZATIONAL BEHAVIOR 28 : 643 DOI 10.1002/job.476 2007 BANKS MG An extension of the Hirsch index: Indexing scientific topics and compounds SCIENTOMETRICS 69 : 161 DOI 10.1007/s11192-006-0146-5 2006 BARENDSE W BIOMEDICAL DIGITAL L : UNSP 4 2007 BARILAN J An ego-centric citation analysis of the works of Michael O. 
Rabin based on multiple citation indexes INFORMATION PROCESSING & MANAGEMENT 42 : 1553 DOI 10.1016/j.ipm.2006.03.019 2006 BARILAN J ISSI NEWSLETTER 2 : 3 2006 BARILAN J Some measures for comparing citation databases JOURNAL OF INFORMETRICS 1 : 26 DOI 10.1016/j.joi.2006.08.001 2007 BATISTA PD Is it possible to compare researchers with different scientific interests? SCIENTOMETRICS 68 : 179 2006 BERGER M The problematic ratings game in modern science SOUTH AFRICAN JOURNAL OF SCIENCE 103 : 2 2007 BORNMANN L Does the h-index for ranking of scientists really work? SCIENTOMETRICS 65 : 391 DOI 10.1007/s11192-005-0281-4 2005 BOWDEN ME P 1998 C HIST HER SC : 1998 BRAUN T A Hirsch-type index for journals SCIENTOMETRICS 69 : 169 DOI 10.1007/s11192-006-0147-4 2006 BUTLER L Extending citation analysis to non-source items SCIENTOMETRICS 66 : 327 DOI 10.1007/s11192-006-0024-1 2006 CALLICOTT B INTERNET REFERENCE S 10 : 2005 COSTAS R The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level JOURNAL OF INFORMETRICS 1 : 193 DOI 10.1016/j.joi.2007.02.001 2007 CRONIN B Using the h-index to rank influential information scientists JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 57 : 1275 DOI 10.1002/asi.20354 2006 CRONIN B Comparative citation rankings of authors in monographic and journal literature: A study of sociology JOURNAL OF DOCUMENTATION 53 : 263 1997 EGGHE L Dynamic h-index: The Hirsch index in function of time JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 452 DOI 10.1002/asi.20473 2007 EGGHE L Item-time-dependent Lotkaian informetrics and applications to the calculation of the time-dependent h-index and g-index MATHEMATICAL AND COMPUTER MODELLING 45 : 864 DOI 10.1016/j.mcm.2006.08.006 2007 EGGHE L An informetric model for the Hirsch-index SCIENTOMETRICS 69 : 121 DOI 10.1007/s11192-006-0143-8 2006 EGGHE L Theory and practise of the g-index SCIENTOMETRICS 69 : 131 DOI 
10.1007/s11192-006-0144-7 2006 GARFIELD E CITATION INDEXES FOR SCIENCE - NEW DIMENSION IN DOCUMENTATION THROUGH ASSOCIATION OF IDEAS SCIENCE 122 : 108 1955 GLANZEL W On the h-index - A mathematical approach to a new measure of publication activity and citation impact SCIENTOMETRICS 67 : 315 DOI 10.1556/Scient.67.2006.2.12 2006 HIRSCH JE An index to quantify an individual's scientific research output PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 102 : 16569 DOI 10.1073/pnas.0507655102 2005 IMPERIAL J Usefulness of Hirsch's h-index to evaluate scientific research in Spain SCIENTOMETRICS 71 : 271 DOI 10.1007/s11192-007-1665-4 2007 JACSO P Content evaluation of databases ANNUAL REVIEW OF INFORMATION SCIENCE AND TECHNOLOGY 32 : 231 1997 JACSO P ONLINE INFO IN PRESS 32 : 2008 JACSO P Savvy searching - Google Scholar revisited ONLINE INFORMATION REVIEW 32 : 102 DOI 10.1108/14684520810866010 2008 JACSO P The plausibility of computing the h-index of scholarly productivity and impact using reference-enhanced databases ONLINE INFORMATION REVIEW 32 : 266 DOI 10.1108/14694520810879872 2008 JACSO P The dimensions of cited reference enhanced database subsets ONLINE INFORMATION REVIEW 31 : 694 DOI 10.1108/14684520710832360 2007 JACSO P Software issues related to cited references ONLINE INFORMATION REVIEW 31 : 892 DOI 10.1108/14684520710841838 2007 JACSO P Dubious hit counts and cuckoo's eggs ONLINE INFORMATION REVIEW 30 : 188 DOI 10.1108/14694520610659201 2006 JACSO P Deflated, inflated and phantom citation counts ONLINE INFORMATION REVIEW 30 : 297 DOI 10.1108/14684520610675816 2006 JACSO P Citation-enhanced indexing/abstracting databases ONLINE INFORMATION REVIEW 28 : 235 DOI 10.1108/14684520410543689 2004 JACSO P Citedness scores for filtering information and ranking search results ONLINE INFORMATION REVIEW 28 : 371 DOI 10.1108/14684520410564307 2004 JACSO P WEB SCI PETERS DIGIT : 2007 JEANG K IMPACT FACTOR H INDE : ARTN 4 2007 JIN BH The R- and 
AR-indices: Complementing the h-index CHINESE SCIENCE BULLETIN 52 : 855 DOI 10.1007/s11434-007-0145-9 2007 KELLY CD The h index and career assessment by numbers TRENDS IN ECOLOGY & EVOLUTION 21 : 167 DOI 10.1016/j.tree.2006.01.005 2006 KOUSHA K Google Scholar citations and Google Web/URL citations: A multi-discipline exploratory analysis JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 1055 DOI 10.1002/asi.20584 2007 LANCASTER FW BIBLIOMETRIC METHODS : 1991 LANCASTER FW P 1998 C HIST HER SC : 1999 LIANG LM h-index sequence and h-index matrix: Constructions and applications SCIENTOMETRICS 69 : 153 DOI 10.1007/s11192-006-0145-6 2006 LINE MB INFLUENCE OF THE TYPE OF SOURCES USED ON THE RESULTS OF CITATION ANALYSES JOURNAL OF DOCUMENTATION 35 : 265 1979 LINE MB J DOC 29 : 72 1973 MAYR P An exploratory study of Google Scholar ONLINE INFORMATION REVIEW 31 : 814 DOI 10.1108/146945-90710941794 2007 MEHO LI Impact of data sources on citation counts and rankings of LIS faculty: Web of science versus scopus and google scholar JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 2105 DOI 10.1002/asi.20677 2007 MENEGHINI R Articles with authors affiliated to Brazilian institutions published from 1994 to 2003 with 100 or more citations: II - Identification of thematic nuclei of excellence in Brazilian science ANAIS DA ACADEMIA BRASILEIRA DE CIENCIAS 78 : 855 2006 MULLEN LB Google scholar and the library web site: The early response by ARL libraries COLLEGE & RESEARCH LIBRARIES 67 : 106 2006 NEUHAUS C PORTAL-LIBR ACAD 6 : 133 2006 NORRIS M Comparing alternatives to the Web of Science for coverage of the social sciences' literature JOURNAL OF INFORMETRICS 1 : 161 DOI 10.1016/j.joi.2006.12.001 2007 OELRICH B A Bibliometric evaluation of publications in Urological journals among European Union countries between 2000-2005 EUROPEAN UROLOGY 52 : 1238 DOI 10.1016/j.eururo.2007.06.050 2007 OLDEN JD How do ecological journals stack-up? 
Ranking of scientific quality according to the h index ECOSCIENCE 14 : 370 2007 OPPENHEIM C Using the h-index to rank influential British researchers in information science and librarianship JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 297 DOI 10.1002/asi.20460 2007 PACKER AL Articles with authors affiliated to Brazilian institutions published from 1994 to 2003 with 100 or more citations: I - The weight of international collaboration and the role of the networks ANAIS DA ACADEMIA BRASILEIRA DE CIENCIAS 78 : 841 2006 POMERANTZ J Google Scholar and 100 percent availability of information INFORMATION TECHNOLOGY AND LIBRARIES 25 : 52 2006 PRATHAP G Hirsch-type indices for ranking institutions' scientific research output CURRENT SCIENCE 91 : 1439 2006 PURVIS A The h index: playing the numbers game TRENDS IN ECOLOGY & EVOLUTION 21 : 422 DOI 10.1016/j.tree.2006.05.014 2006 ROBINSON ML Putting Google Scholar to the test: a preliminary study PROGRAM-ELECTRONIC LIBRARY AND INFORMATION SYSTEMS 41 : 71 DOI 10.1108/00330330710724908 2007 ROUSSEAU R The influence of missing publications on the Hirsch index JOURNAL OF INFORMETRICS 1 : 2 2007 SAAD G Exploring the h-index at the author and journal levels using bibliometric data of productive consumer scholars and business-related journals respectively SCIENTOMETRICS 69 : 117 DOI 10.1007/s11192-006-0142-9 2006 SALGADO JF Scientific productivity and Hirsch's h index of Spanish social psychology: Convergence between productivity indexes and comparison with other areas PSICOTHEMA 19 : 179 2007 SCHREIBER M A case study of the Hirsch index for 26 non-prominent physicists ANNALEN DER PHYSIK 16 : 640 DOI 10.1002/andp.20071025 2007 SCHREIBER M EUROPHYSICS LETT 78 : 2007 SCHROEDER R Pointing users toward citation searching: Using Google scholar and Web of Science PORTAL-LIBRARIES AND THE ACADEMY 7 : 243 2007 SCHUBERT A A systematic analysis of Hirsch-type indices for journals JOURNAL OF INFORMETRICS 1 : 179 DOI 
10.1016/j.joi.2006.12.002 2007 SCHUBERT A Successive h-indices SCIENTOMETRICS 70 : 201 DOI 10.1007/s11192-007-0112-x 2007 VANCLAY JK On the robustness of the h-index JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 : 1547 2007 VANCLAY JK Refining the h-index SCIENTIST 20 : 14 2006 VANRAAN AFJ Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups SCIENTOMETRICS 67 : 491 DOI 10.1556/Scient.67.2006.3.10 2006 VINKLER P Eminence of scientists in the light of the h-index and other scientometric indicators JOURNAL OF INFORMATION SCIENCE 33 : 481 DOI 10.1177/0165551506072165 2007 WHITE B NZ LIB INFORM MANAGE 50 : 11 2006 From garfield at CODEX.CIS.UPENN.EDU Fri Oct 17 15:47:55 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 17 Oct 2008 15:47:55 -0400 Subject: Qin, J (Qin, Jian) F. W. Lancaster: A bibliometric analysis LIBRARY TRENDS, 56 (4): 954-967 SPR 2008 Message-ID: Email Address: jqin at syr.edu Author(s): Qin, J (Qin, Jian) Title: F. W. Lancaster: A bibliometric analysis Source: LIBRARY TRENDS, 56 (4): 954-967 SPR 2008 Language: English Document Type: Article Keywords Plus: INFORMATION-SCIENCE FACULTY; RESEARCH PRODUCTIVITY; LIBRARY Abstract: F. W. Lancaster, as the most cited author during the 1970s to early 1990s, has broad intellectual influence in many fields of research in library and information science. This bibliometric study collected citation data for Lancaster's publications from 1972 to 2006 and analyzed the data in terms of the temporal, geographic, and disciplinary breadth of his intellectual influence. The results show that Lancaster has established an extraordinary record of both productivity and citedness. Six of his works, according to the criteria for citation classics, have been cited so extensively over a long time span that they qualify as citation classics in library and information science. 
Although much of the citation data, especially those in non-English publications, are not covered in citation databases, the bibliometric depiction nonetheless provides a good picture of Lancaster's contribution to and influence in library and information science. Addresses: Syracuse Univ, Sch Informat Studies, Syracuse, NY 13244 USA Reprint Address: Qin, J, Syracuse Univ, Sch Informat Studies, Syracuse, NY 13244 USA. Cited Reference Count: 8 Times Cited: 0 Publisher: JOHNS HOPKINS UNIV PRESS Publisher Address: JOURNALS PUBLISHING DIVISION, 2715 NORTH CHARLES ST, BALTIMORE, MD 21218-4363 USA ISSN: 0024-2594 29-char Source Abbrev.: LIBR TRENDS ISO Source Abbrev.: Libr. Trends Source Item Page Count: 14 Subject Category: Information Science & Library Science ISI Document Delivery No.: 345HW BATES MJ The role of publication type in the evaluation of LIS programs LIBRARY & INFORMATION SCIENCE RESEARCH 20 : 187 1998 BUDD JM Productivity of US library and information science faculty: The Hayes study revisited LIBRARY QUARTERLY 66 : 1 1996 COLE J WEB KNOWLEDGE FESTSC : 281 2001 GARFIELD E WHAT IS CITATION CLA : 2007 HAYES RM CITATION STATISTICS AS A MEASURE OF FACULTY RESEARCH PRODUCTIVITY JOURNAL OF EDUCATION FOR LIBRARIANSHIP 23 : 151 1983 JACSO P Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. 
Lancaster LIBRARY TRENDS 56 : 784 2008 MEHO LI Ranking the research productivity of library and information science faculty and schools: An evaluation of data sources and research methods JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 56 : 1314 DOI 10.1002/asi.20227 2005 QIN J Types and levels of collaboration in interdisciplinary research in the sciences JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 48 : 893 1997 From garfield at CODEX.CIS.UPENN.EDU Mon Oct 20 16:08:56 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Mon, 20 Oct 2008 16:08:56 -0400 Subject: Garcia-Garcia, P; Lopez-Munoz, F; Rubio, G; Martin-Agueda, B; Alamo, C Phytotherapy and psychiatry: Bibliometric study of the scientific literature from the last 20 years PHYTOMEDICINE, 15 (8): 566-576 AUG 2008 Message-ID: E-mail Address: frlopez at juste.net Author(s): Garcia-Garcia, P (Garcia-Garcia, P.); Lopez-Munoz, F (Lopez-Munoz, F.); Rubio, G (Rubio, G.); Martin-Agueda, B (Martin-Agueda, B.); Alamo, C (Alamo, C.) Title: Phytotherapy and psychiatry: Bibliometric study of the scientific literature from the last 20 years Source: PHYTOMEDICINE, 15 (8): 566-576 AUG 2008 Language: English Document Type: Article Author Keywords: phytotherapy; psychiatry; bibliometry Keywords Plus: SCIENCE-CITATION-INDEX; ST-JOHNS-WORT; CLINICAL-TRIALS; METAANALYSIS; DEPRESSION; BIOMEDICINE; MEDICINES; ANXIETY Abstract: In diverse areas of therapy, including psychiatry, increasing interest in herbal medicine has been shown in recent years. Plants have a wide range of traditional uses, but only a few have been approved therapeutically. Moreover, to our knowledge, no bibliometric analyses on medicinal plants used in psychiatry have been carried out to date. We performed a bibliometric study of scientific publications related to phytotherapy in the psychiatry area during the period 1986-2006. 
Using the platform Embase.com, including the EMBASE and MEDLINE databases, we selected those documents including the descriptors plant*, herb*, phytotherapy*, phytomedicine*, pharmacognosy*, and psychiatry* (with all diagnostic criteria). The plants' indications were selected according to the PDR for Herbal Medicines. As a bibliometric indicator of production, Price's Law was applied. Another indicator included was the national participation index (PI) for overall scientific production. A total of 21,409 original documents were obtained. Our data confirm the fulfilment of Price's Law related to scientific production on medicinal plants in psychiatry. This was observed after we made a linear fit (y = 135.08x - 466.38; r = 0.92) and another fit to an exponential curve (y = 132.26e^(0.1497x); r = 0.99). The plants most widely mentioned in the psychiatric literature were St. John's wort (Hypericum perforatum L.; n = 937) and ginkgo (Ginkgo biloba L.; n = 694). The countries with the highest percentages of documents were the United States (29.44%), Germany (9.41%) and Japan (8.75%), and those with the highest proportional PI were India (IPa = 0.935) and China (IPa = 0.721). Productivity on medicinal plants in the psychiatry area increased during the period 1986-2006. Nevertheless, documents about therapeutic herbs in this medical field are still relatively few in number. (c) 2008 Elsevier GmbH. All rights reserved. Addresses: [Garcia-Garcia, P.; Lopez-Munoz, F.; Alamo, C.] Univ Alcala De Henares, Dept Pharmacol, Madrid 28027, Spain; [Rubio, G.] Univ Complutense, Dept Psychiat, Retiro Mental Hlth Serv, E-28040 Madrid, Spain; [Martin-Agueda, B.] Primary Care Head Off, Teaching & Res Unit, Guadalajara, Spain Reprint Address: Lopez-Munoz, F, Univ Alcala De Henares, Dept Pharmacol, C Juan Ignacio Luca Tena 8, Madrid 28027, Spain. 
E-mail Address: frlopez at juste.net Cited Reference Count: 48 Times Cited: 0 Publisher: ELSEVIER GMBH, URBAN & FISCHER VERLAG Publisher Address: OFFICE JENA, P O BOX 100537, 07705 JENA, GERMANY ISSN: 0944-7113 DOI: 10.1016/j.phymed.2008.04.014 29-char Source Abbrev.: PHYTOMEDICINE ISO Source Abbrev.: Phytomedicine Source Item Page Count: 11 Subject Category: Plant Sciences; Chemistry, Medicinal; Pharmacology & Pharmacy ISI Document Delivery No.: 347YN AM PSYCH ASS DIAGN STAT MAN MENT : 2000 *EUR SCI COOP PHYT MON SCI FDN HERB MED : 2003 *THOMP HEALTHC PDR HERB MED : 2004 AKHONDZADEH S Passionflower in the treatment of generalized anxiety: a pilot double-blind randomized controlled trial with oxazepam JOURNAL OF CLINICAL PHARMACY AND THERAPEUTICS 26 : 363 2001 BLUMENTHAL M COMPLETE GERMAN COMM : 1998 BORDONS M Evaluation of scientific activity through bibliometric indicators REVISTA ESPANOLA DE CARDIOLOGIA 52 : 790 1999 CAMI J MAPA BIBLIOMETRICO E : 2007 CAMI J Spanish scientific production in biomedicine and health sciences during the period 1990-1993 (Science Citation Index and Social Science Citation Index) and comparison to period 1986-1989 MEDICINA CLINICA 109 : 481 1997 CAMI J THE SPANISH SCIENTIFIC PRODUCTION IN BIOMEDICINE AND HEALTH-CARE - A STUDY THROUGH THE SCIENCE-CITATION-INDEX (1986-1989) MEDICINA CLINICA 101 : 721 1993 DONATH F Critical evaluation of the effect of valerian extract on sleep structure and sleep quality PHARMACOPSYCHIATRY 33 : 47 2000 ERNST E Herbal remedies for anxiety - a systematic review of controlled clinical trials PHYTOMEDICINE 13 : 205 DOI 10.1016/j.phymed.2004.11.006 2006 GARCIAGARCIA P Evolution of Spanish scientific production in international obstetrics and gynecology journals during the period 1986-2002 EUROPEAN JOURNAL OF OBSTETRICS GYNECOLOGY AND REPRODUCTIVE BIOLOGY 123 : 150 DOI 10.1016/j.ejogrb.2005.06.039 2005 GARFIELD E CITATION INDEXING IT : 1979 GERVAS JJ SCIENCE-CITATION-INDEX - POTENTIAL AND USE MEDICINA CLINICA 
95 : 582 1990 GOMEZ I POLITICA CIENTIFICA 46 : 21 1996 KAMAGATE M Clinical trials using medicinal plants: Bibliographical review and methodological analysis THERAPIE 60 : 413 2005 KUMAR V PHYTOTHER RES 20 : 1023 2007 LEHRL S Evaluating scientific performances by impact factors - the right for equal chances STRAHLENTHERAPIE UND ONKOLOGIE 175 : 141 1999 LINDE K St John's wort for depression - Meta-analysis of randomised controlled trials BRITISH JOURNAL OF PSYCHIATRY 186 : 99 2005 LOPEZ JM MED CLIN-BARCELONA 98 : 142 1992 LOPEZMUNOZ F AN PSIQUIATR 11 : 68 1995 LOPEZMUNOZ F Bibliometric analysis of biomedical publications on SSRIs during 1980-2000 DEPRESSION AND ANXIETY 18 : 95 DOI 10.1002/da.10121 2003 LOPEZMUNOZ F FOCUS SEROTONIN UPTA : 191 2006 LOPEZMUNOZ F Bipolar disorder as an emerging pathology in the scientific literature: A bibliometric approach JOURNAL OF AFFECTIVE DISORDERS 92 : 161 DOI 10.1016/j.jad.2006.02.006 2006 LOPEZMUNOZ F J NEURAL TRANSM 114 : 99 2007 LOPEZMUNOZ F Scientific research on the pineal gland and melatonin: A bibliometric study for the period 1966-1994 JOURNAL OF PINEAL RESEARCH 20 : 115 1996 LOPEZMUNOZ F MINERV PSICHIATR 43 : 259 2002 LOPEZMUNOZ F A bibliometric study of the use of the classification and diagnostic systems in psychiatry over the last 25 years PSYCHOPATHOLOGY 41 : 214 DOI 10.1159/000125555 2008 LOPEZMUNOZ F REV ESP FARMACOECON 1 : 25 1995 LOPEZMUNOZ F REV NEUROL 24 : 417 1996 LOPEZPINERO JM MED CLIN-BARCELONA 98 : 384 1992 MALTRAS B FRONT CIEN TECHNOL 7 : 4 1995 MILLS S PRINCIPLES PRACTICE : 2000 MITCHELL PB MED TODAY 8 : 67 2007 MIYASAKA LS BIBLIOTECA COCHRANE : 2007 NEWMAN DJ Natural products as sources of new drugs over the period 1981-2002 JOURNAL OF NATURAL PRODUCTS 66 : 1022 DOI 10.1021/np0300961 2003 PITTLER M COCHRANE LIB : 2003 PRICE DJS LITTLE SCI BIG SCI : 1963 PRITCHARD A STATISTICAL BIBLIOGRAPHY OR BIBLIOMETRICS JOURNAL OF DOCUMENTATION 25 : 348 1969 RODER C Meta-analysis of effectiveness and tolerability of 
treatment of mild to moderate depression with St. John's Wort FORTSCHRITTE DER NEUROLOGIE PSYCHIATRIE 72 : 330 DOI 10.1055/s-2003-812513 2004 SARRIS J Herbal medicines in the treatment of psychiatric disorders: A systematic review PHYTOTHERAPY RESEARCH 21 : 703 DOI 10.1002/ptr.2187 2007 STEVINSON C SLEEP MED 1 : 91 2000 TERRADA ML ESPANA CIENCIA : 1991 VOGELER BK EUROPEAN J CLINICAL 55 : 567 1999 WALKER AF Herbal medicine: the science of the art PROCEEDINGS OF THE NUTRITION SOCIETY 65 : 145 DOI 10.1079/PNS2006487 2006 WERNEKE U Complementary medicines in psychiatry. Review of effectiveness and safety BRITISH JOURNAL OF PSYCHIATRY 188 : 109 2006 WHISKEY E A systematic review and meta-analysis of Hypericum perforatum in depression: a comprehensive clinical review INTERNATIONAL CLINICAL PSYCHOPHARMACOLOGY 16 : 239 2001 WHITE HD BIBLIOMETRICS ANNUAL REVIEW OF INFORMATION SCIENCE AND TECHNOLOGY 24 : 119 1989 From garfield at CODEX.CIS.UPENN.EDU Tue Oct 21 12:35:55 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 21 Oct 2008 12:35:55 -0400 Subject: Gibson, M; Spong, CY; Simonsen, SE; Martin, S; Scott, JR Author perception of peer review OBSTETRICS AND GYNECOLOGY, 112 (3): 646-651 SEP 2008 Message-ID: E-mail Address: mark.gibson at hsc.ulah.edu Author(s): Gibson, M (Gibson, Mark); Spong, CY (Spong, Catherine Y.); Simonsen, SE (Simonsen, Sara Ellis); Martin, S (Martin, Sheryl); Scott, JR (Scott, James R.) Title: Author perception of peer review Source: OBSTETRICS AND GYNECOLOGY, 112 (3): 646-651 SEP 2008 Language: English Document Type: Article Keywords Plus: RANDOMIZED CONTROLLED-TRIAL; QUALITY; MANUSCRIPTS; ACCEPTANCE; FEEDBACK; EDITORS; IMPACT Abstract: OBJECTIVE: To survey authors submitting manuscripts to a leading specialty journal regarding their assessment of editorial review. 
The study sought factors affecting authors' satisfaction and whether authors rated the journal review processes differently from the commentary provided by different reviewers. METHODS: Participation in an online survey was offered to 445 corresponding authors of research manuscripts submitted consecutively during a 7-month period. All manuscripts received full editorial review. The survey instrument asked authors to rate six aspects of editorial comments from each of two to four reviewers and three aspects of the review process. In addition, the survey queried overall satisfaction and likelihood of submission of future manuscripts based on review experience. RESULTS: Higher ratings for overall satisfaction with manuscript review were given by authors of accepted compared with rejected manuscripts (98% compared with 80%, P<.001). Authors rated processes for submission and review more highly than editorial commentary (88% compared with 69%, P<.001), and this difference was greater among authors of rejected manuscripts. The extent to which reviewers focused on important aspects of submitted manuscripts received the lowest ratings from authors. Authors' ratings of reviewers' comments differentiated between reviewers and did not correlate with ratings of reviews by the journal's senior editors. CONCLUSION: Author feedback was more favorable among authors of accepted manuscripts, and responses differentiated among aspects of editorial review and reviewers. Author feedback may provide a means for monitoring and improvement of processes for editorial review and reviewer commentary. Addresses: [Gibson, Mark] Univ Utah, Dept Obstet & Gynecol, Sch Med, Salt Lake City, UT 84132 USA; NICHHD, Pregnancy & Perinatol Branch, NIH, Bethesda, MD 20892 USA; Univ Utah, Dept Family & Prevent Med, Sch Med, Publ Hlth Program, Salt Lake City, UT 84132 USA Reprint Address: Gibson, M, Univ Utah, Dept Obstet & Gynecol, Sch Med, 30 N 1900 E, Room 2B200, Salt Lake City, UT 84132 USA. 
E-mail Address: mark.gibson at hsc.ulah.edu Cited Reference Count: 14 Times Cited: 0 Publisher: LIPPINCOTT WILLIAMS & WILKINS Publisher Address: 530 WALNUT ST, PHILADELPHIA, PA 19106-3621 USA ISSN: 0029-7844 29-char Source Abbrev.: OBSTET GYNECOL ISO Source Abbrev.: Obstet. Gynecol. Source Item Page Count: 6 Subject Category: Obstetrics & Gynecology ISI Document Delivery No.: 347YZ CALLAHAM M Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 287 : 2847 2002 CALLAHAM ML Effect of attendance at a training session on peer reviewer quality and performance ANNALS OF EMERGENCY MEDICINE 32 : 318 1998 CALLAHAM ML Reliability of editors' subjective quality ratings of peer reviews of manuscripts JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 280 : 229 1998 CALLAHAN ML Effect of written feedback by editors on quality of reviews - Two randomized trials JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 287 : 2781 2002 GARFIELD E The history and meaning of the journal impact factor JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 295 : 90 2006 GARFUNKEL JM EFFECT OF ACCEPTANCE OR REJECTION ON THE AUTHORS EVALUATION OF PEER-REVIEW OF MEDICAL MANUSCRIPTS JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 263 : 1376 1990 JEFFERSON T Editorial peer review for improving the quality of reports of biomedical studies COCHRANE DATABASE OF SYSTEMATIC REVIEWS : ARTN MR000016 2007 JEFFERSON T Measuring the quality of editorial peer review JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 287 : 2786 2002 JUSTICE AC Does masking author identity improve peer review quality? 
- A randomized controlled trial JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 280 : 240 1998 KORNGREEN A Peer-review system could gain from author feedback NATURE 438 : 282 DOI 10.1038/438282d 2005 LANDKROON AR Quality assessment of reviewers' reports using a simple instrument OBSTETRICS AND GYNECOLOGY 108 : 979 2006 SCHROTER S Effects of training on quality of peer review: randomised controlled trial BRITISH MEDICAL JOURNAL 328 : 673 2004 VANROOYEN S Effect of blinding and unmasking on the quality of peer review - A randomized trial JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 280 : 234 1998 WEBER EJ Author perception of peer review - Impact of review quality and acceptance on satisfaction JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 287 : 2790 2002 From amsciforum at GMAIL.COM Tue Oct 21 17:01:41 2008 From: amsciforum at GMAIL.COM (Stevan Harnad) Date: Tue, 21 Oct 2008 17:01:41 -0400 Subject: On Metrics and Metaphysics In-Reply-To: Message-ID: On Tue, Oct 21, 2008 at 1:23 PM, J.F.Rowland wrote: >>> HM: Literature - authors. There are many researchers studying >>> Shakespeare. A lesser-known local author will be lucky >>> to receive the attention of even one researcher. In a >>> metrics-based system, it seems reasonable to hypothesize >>> that this bias will increase, and the odds of studying local >>> culture decrease. >> >> SH: What bias? If a lesser-known researcher does good work, >> it will be used, and this will be reflected in the metrics. > > Stevan - You misunderstood Heather's point. She didn't say the researcher - > the author of the current research article in question - was little-known. > She said the literary author that (s)he was studying was little-known. > Therefore, not many researchers will be interested in that literary author, > so not many people will cite the article, however good it is. > > There is a real and valid point in Heather's message, and simply saying 'use > other metrics' is vague, to say the least. 
Please specify what metrics might be used to provide a valid quality measure to the work of researchers who study minority subjects which will excite interest, and therefore usage, and citations, from only a few people worldwide. Fair question. Here is a quick and dirty solution, by way of illustration. (Many more are possible along analogous lines.) (1) Compare like with like. So we are only looking at literary studies. The subject matter can be narrowed down by Boolean means, including taxonomic descriptors. No comparing Shakespeare scholarship works with molecular biology. (2) Now, for illustrative purposes, let us suppose there are only two authors studied by literary scholars: William Shakespeare and Joe Bloggs. (3) Do an OA space search (remember, we are assuming universal OA) to find out how many litcrit papers there are on Shakespeare and on Bloggs: let's say there are S and B papers, respectively. (4) In comparing the citation counts of Shakespeare scholars with Bloggs scholars, divide the Shakespeare citations by S and the Bloggs citations by B. Notice that I used two metrics, jointly: citation counts and a semiometric content-word count. This is just a simplified illustration. With universal OA there will be a vast palette of potential metrics to use to normalize comparisons of like with like. Call it metric "profiling." Separate this question from the question of how you would do it today, with just ISI Web of Science metrics and no universal OA. The answer is that you can't (though Google Scholar and Google Books might still allow you to do a quick and dirty approximation). Stevan Harnad PS: the absolute number of researchers working on a topic is itself a metric, which may for some fields and some purposes be a valid and informative one, and for other fields not. Compare like with like and validate, initialize and weight metrics field by field (or, for some purposes, even subfield by subfield). 
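[Editorial note: Harnad's four-step "normalize by topic volume" illustration above can be sketched in a few lines of code. This is a minimal, hypothetical sketch: the paper counts, citation counts, and the `papers_on_topic` table are invented for the example; a real system would query an open-access index for steps (3) and (4) rather than hard-coding values.]

```python
# A minimal sketch of Harnad's "normalize by topic volume" illustration.
# All numbers are hypothetical; a real system would obtain paper and
# citation counts from an open-access index rather than hard-coding them.

def topic_normalized_citations(citations, topic_paper_count):
    """Divide a scholar's citation count by the number of papers on the
    topic they study, so that work on a little-studied subject is not
    penalized merely for its small potential citing audience."""
    if topic_paper_count <= 0:
        raise ValueError("topic must contain at least one paper")
    return citations / topic_paper_count

# Step (3): hypothetical counts of litcrit papers per studied author
# (S = 5000 papers on Shakespeare, B = 10 papers on Bloggs).
papers_on_topic = {"Shakespeare": 5000, "Bloggs": 10}

# Step (4): normalize each scholar's raw citation count by topic volume.
shakespeare_scholar = topic_normalized_citations(250, papers_on_topic["Shakespeare"])
bloggs_scholar = topic_normalized_citations(2, papers_on_topic["Bloggs"])

print(shakespeare_scholar)  # 0.05
print(bloggs_scholar)       # 0.2
```

In this toy example the Bloggs scholar's two citations outweigh the Shakespeare scholar's 250 after normalization, because the pool of potential citers is orders of magnitude smaller; this is the sense in which the two metrics (a citation count and a topic-volume count) are used jointly.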
From loet at LEYDESDORFF.NET Wed Oct 22 02:55:05 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Wed, 22 Oct 2008 08:55:05 +0200 Subject: On Metrics and Metaphysics In-Reply-To: Message-ID: > Fair question. Here is a quick and dirty solution, by way of > illustration. > (Many more are possible along analogous lines.) > > (1) Compare like with like. So we are only looking at literary > studies. The subject matter can be narrowed down by Boolean means, > including taxonomic descriptors. No comparing Shakespeare scholarship > works with molecular biology. Dear Stevan and colleagues, Against any theoretical objection, one can always argue that it is possible to do things "quick and dirty". In my opinion, this approach can easily damage standards in this field. The delineation of fields and subfields, for example, cannot easily be discarded, particularly on the web. In other words, standards for the normalization are never to be taken lightly. Best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ From amsciforum at GMAIL.COM Wed Oct 22 06:12:03 2008 From: amsciforum at GMAIL.COM (Stevan Harnad) Date: Wed, 22 Oct 2008 06:12:03 -0400 Subject: On Metrics and Metaphysics In-Reply-To: <46DF45E5B45A4178853ED3103D646C98@loet> Message-ID: On Wed, Oct 22, 2008 at 2:55 AM, Loet Leydesdorff wrote: >> SH: Fair question. Here is a quick and dirty solution, by way of >> illustration. (Many more are possible along analogous lines.) >> >> (1) Compare like with like. So we are only looking at literary >> studies. The subject matter can be narrowed down by Boolean means, >> including taxonomic descriptors. No comparing Shakespeare scholarship >> works with molecular biology. 
> > LL: Against any theoretical objection, one can always argue that it is possible > to do things "quick and dirty". In my opinion, this approach can easily > damage standards in this field. > > The delineation of fields and subfields, for example, cannot easily be > discarded, particularly on the web. In other words, standards for the > normalization are never to be taken lightly. The question was: How can one create a metric that will detect the impact of a piece of work on a low-volume topic, such as a literary study of a little-studied author. My reply was to give a "quick and dirty" approximation to show that it is possible in principle (normalize by volume). That does not mean that the quick and dirty approximation is the sole or optimal solution, just that a solution is possible. {Les Carr has since also made the point that metrics cannot be expected to be oracular: (LC: "Perhaps I might be permitted to throw the ball back in your court. How would *you* [or anyone] know that a paper in the (narrow but important) field has excited the interest of anyone worldwide? Or even excited the interest of "the right people"? Once you can answer that, to the satisfaction of the author and their community, then Stevan (for you challenged him in particular) might be able to devise a metric for measuring it. Or, indirectly, devise a test for whether a proposed battery of metrics will act as a reasonable proxy for the judgement of experts in the field."} Faut pas se faire plus royaliste que le roi... [No need to be more royalist than the king...] Stevan Harnad From loet at LEYDESDORFF.NET Wed Oct 22 07:33:05 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Wed, 22 Oct 2008 13:33:05 +0200 Subject: On Metrics and Metaphysics In-Reply-To: Message-ID: Dear Stevan, It seems to me that the expectation of the citation frequency is among other things a function of the local density of the citation network. A problem, however, remains how to define the locale: a journal, a theme, a patent class? 
"Quick and dirty" skips these problems, in my opinion. I agree that it may be pragmatic and shows that a solution is possible in principle. > {Les Carr has since also made the point that metrics cannot be > expected to be oracular: (LC: "Perhaps I might be permitted to throw > the ball back in your court. How > would *you* [or anyone] know that a paper in the (narrow but > important) field has excited the interest of anyone worldwide? Or even > excited the interest of "the right people"? Once you can answer that, > to the satisfaction of the author and their community, then Stevan > (for you challenged him in particular) might be able to devise a > metric for measuring it. Or, indirectly, devise a test for whether a > proposed battery of metrics will act as a reasonable proxy for the > judgement of experts in the field."} The problem seems to me to lie in the inference from aggregated citing behavior to an expectation of being cited. The analyst transposed the citation-transaction matrix (Wouters, 1999). > Faut pas se faire plus royaliste que le roi... Noblesse oblige! > Stevan Harnad > Best, Loet From lutz.bornmann at GESS.ETHZ.CH Wed Oct 22 08:11:28 2008 From: lutz.bornmann at GESS.ETHZ.CH (Bornmann Lutz) Date: Wed, 22 Oct 2008 14:11:28 +0200 Subject: New paper on peer review In-Reply-To: A Message-ID: Dear colleagues: You might be interested in our new paper: Bornmann L, Wallon G, Ledin A (2008) Does the Committee Peer Review Select the Best Applicants for Funding? An Investigation of the Selection Process for Two European Molecular Biology Organization Programmes. PLoS ONE 3(10): e3480. doi:10.1371/journal.pone.0003480 You can download this Open Access paper from: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0003480 Abstract: Does peer review fulfill its declared objective of identifying the best science and the best scientists? 
In order to answer this question we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We checked the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved and 258 rejected applicants) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who perform on a higher level than the rejected ones subsequent to application. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses point out that between 26% and 48% of the decisions made to award or reject an application show one of the two error types. Even though, for some of the applicants, the selection committee did not correctly estimate the applicant's future performance, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as a criterion for scientific achievement. Regards, Lutz ------------------------------------------------------------------------------------------------------------- Dr. Lutz Bornmann ETH Zurich, D-GESS Professorship for Social Psychology and Research on Higher Education Zaehringerstr. 
24 / ZAE CH-8092 Zurich Phone: 0041-44 632 48 25 Fax: 0041-44 632 12 83 Skype: lutz.bornmann http://www.psh.ethz.ch/ bornmann at gess.ethz.ch Download of publications: www.lutz-bornmann.de/Publications.htm From harnad at ECS.SOTON.AC.UK Wed Oct 22 08:40:27 2008 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Wed, 22 Oct 2008 08:40:27 -0400 Subject: On Metrics and Metaphysics In-Reply-To: Message-ID: On 22-Oct-08, at 7:33 AM, Loet Leydesdorff wrote: > It seems to me that the expectation of the citation frequency is > among other > things a function of the local density of the citation network. A > problem, > however, remains how to define the locale: a journal, a theme, a > patent > class? "Quick and dirty" skips these problems, in my opinion. I > agree that > it may be pragmatical and shows that a solution is possible in > principle. With open access, it is no longer univariate (i.e., not just citation counts) and it is definitely no longer journal-centric (author and article metrics, not journal JIFs, though journal JIFs can be among the metrics used). The point is that metrics need to be plural, diverse, and validated and weighted by field and subfield. To repeat: The "quick and dirty" example I gave was not meant to be used, but to show that solutions (many solutions) are possible in principle, and that their main features are that they are (1) multivariate, (2) field or even subfield-based, (3) require prior (joint) validation, field by field, against an already validated or face-valid criterion (such as peer evaluation), and, most important, they are (4) conditional on the provision of a full Open Access database on which to base them -- a condition that does not yet exist, but one for which we are now fighting (using the potential of multiple Open Access metrics as an incentive). > The problem seems to me in the inference from aggregated citing > behavior to > an expectation of being cited.
The analyst transposed the citation-transaction matrix (Wouters, 1999). The matrix I have in mind is not a citation matrix, but a matrix consisting of a rich and diverse set of metrics, including downloads, chronometrics (growth/decay of citations, downloads), co-citation metrics, co-authorship metrics, funding metrics, student metrics, patent metrics, link metrics, hub/authority metrics, endogamy/exogamy metrics, years of publication, total publications, tag metrics, comment metrics, semiometrics, and more -- all these harvested from the Open Access Research Web, once all articles are OA, citation-linked, and download-metered. The difference between this plurimetric world and the world of univariate citations will be like the difference between night and day. But we are still in the night... Stevan Harnad From amsciforum at GMAIL.COM Wed Oct 22 08:55:21 2008 From: amsciforum at GMAIL.COM (Stevan Harnad) Date: Wed, 22 Oct 2008 08:55:21 -0400 Subject: On Metrics and Metaphysics In-Reply-To: <01D11835-322A-40D5-BEC2-5AD3B0DEFC9E@ecs.soton.ac.uk> Message-ID: ---------- Forwarded message ---------- From: Leslie Carr Date: Wed, Oct 22, 2008 at 7:03 AM Subject: Re: On Metrics and Metaphysics To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM -- listserver.sigmaxi.org On 21 Oct 2008, at 18:23, J.F.Rowland wrote: > Stevan - You misunderstood Heather's point. She didn't say the > researcher - the author of the current research article in question - > was little-known. > She said the literary author that (s)he was studying was little-known. > Therefore, not many researchers will be interested in that literary > author, so not many people will cite the article, however good it is. I think that the arguments that Heather put forward are not fundamentally directed at metrics per se.
They are arguments about the distinction between research impact and research importance; it is the researchers, the societies, the funding councils and governments who need to answer these policy questions. > There is a real and valid point in Heather's message, and simply > saying 'use other metrics' is vague, to say the least. Yes, it is, isn't it. When someone has a more concrete idea of what we are measuring (quality? excellence? importance? impact?) then doubtless we can make a reasonable attempt to be more specific. > Please specify what metrics might be used to provide a valid > quality measure to the work of researchers who study minority > subjects which will excite interest, and therefore usage, > and citations, from only a few people worldwide. I'll take my turn in prolonging the confusion by remarking that bibliographic items are only one kind of evidence that can be observed and measured. Everyone in the UK is familiar with the RAE's "measures of esteem" which supplemented the bibliographic submissions. Invited lectures, committee memberships, journal editorships and the like are all leaving auditable trails of evidence on the web which we can measure and use to moderate citation-only statistics. -- les Carr From dwojick at HUGHES.NET Wed Oct 22 10:10:05 2008 From: dwojick at HUGHES.NET (David E. Wojick) Date: Wed, 22 Oct 2008 10:10:05 -0400 Subject: On Metrics and Metaphysics In-Reply-To: <2A99F288-25C8-4A65-8D61-9EEDEEE100D0@ecs.soton.ac.uk> Message-ID: Speaking metaphysically, I find the following problematic -- "The point is that metrics need to be plural, diverse, and validated and weighted by field and subfield." I would argue that the concepts of field and subfield are too vague and unscientific to be the basis for objective metrics. Science is an ever-changing body of inquiry, with personal, methodological and subject matter clusters that change with scale and over time, sometimes very rapidly.
No two papers are about exactly the same thing. Each paper is related to its neighbors in multiple ways. Each object of study can be looked at in multiple ways. Thus field and subfield are always artificial divisions of convenience imposed on an unbroken web of belief and inquiry. Moreover, these divisions can often be usefully made in multiple ways. If the science of science depends on these artificial divisions as a basis for measurement then we are in serious trouble. David Wojick DOE OSTI > >On 22-Oct-08, at 7:33 AM, Loet Leydesdorff wrote: > >> It seems to me that the expectation of the citation frequency is among other >> things a function of the local density of the citation network. A problem, >> however, remains how to define the locale: a journal, a theme, a patent >> class? "Quick and dirty" skips these problems, in my opinion. I agree that >> it may be pragmatical and shows that a solution is possible in principle. > >With open access, it is no longer univariate (i.e., not just citation counts) and >it is definitely no longer journal-centric (author and article metrics, not journal >JIFs, though journal JIFs can be among the metrics used). The point is that >metrics need to be plural, diverse, and validated and weighted by field and >subfield. > >To repeat: The "quick and dirty" example I gave was not meant to be used, >but to show that solutions (many solutions) are possible in principle, and >that their main features are that they are (1) multivariate, (2) field or even >subfield-based, (3) require prior (joint) validation, field by field, against an >already validated or face-valid criterion (such as peer evaluation), and, most > important, they are (4) conditional on the provision of a full Open Access >database on which to base them -- a condition that does not yet exist, but >one for which we are now fighting (using the potential of multiple Open Access >metrics as an incentive). 
> >> The problem seems to me in the inference from aggregated citing behavior to >> an expectation of being cited. The analyst transposed the >> citation-transaction matrix (Wouters, 1999). > >The matrix I have in mind is not a citation matrix, but a matrix consisting of a >rich and diverse set of metrics, including downloads, chronometrics >(growth/decay of citations, downloads), co-citation metrics, co- authorship >metrics, funding metrics, student metrics. patent metrics, link metrics, >hub/authority metrics, endogamy/exogamy metrics, years of publication, >total publications, tag metrics, comment metrics, semiometrics, and more -- >all these harvested from the Open Access Research Web, once all articles >are OA, citation-linked, and download-metered. > >The difference between this plurimetric world and the world of univariate >citations will be like the difference between night and day. But we are still in >the night... > >Stevan Harnad -- "David E. Wojick, PhD" Senior Consultant for Innovation Office of Scientific and Technical Information US Department of Energy http://www.osti.gov/innovation/ 391 Flickertail Lane, Star Tannery, VA 22654 USA 540-858-3136 http://www.bydesign.com/powervision/resume.html provides my bio and past client list. http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information structure and dynamics. From harnad at ECS.SOTON.AC.UK Wed Oct 22 10:44:39 2008 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Wed, 22 Oct 2008 10:44:39 -0400 Subject: On Metrics and Metaphysics In-Reply-To: Message-ID: On 22-Oct-08, at 10:10 AM, David E. Wojick wrote: > Speaking metaphysically, I find the following problematic -- "The > point is that metrics need to be plural, diverse, and validated and > weighted by field and > subfield." > > I would argue that the concepts of field and subfield are too vague > and unscientific to be the basis for objective metrics. 
Science is > an ever changing body of inquiry, with personal, methodological and > subject matter clusters that change with scale and over time, > sometimes very rapidly. No two papers are about exactly the same > thing. Each paper is related to its neighbors in multiple ways. Each > object of study can be looked at in multiple ways. > > Thus field and subfield are always artificial divisions of > convenience imposed on an unbroken web of belief and inquiry. > Moreover, these divisions can often be usefully made in multiple > ways. If the science of science depends on these artificial > divisions as a basis for measurement then we are in serious trouble. Yes, field and subfield are artificial divisions of convenience. So are institutions, laboratories, research funders, research projects, research assessment exercises, etc. But we do make these artificial divisions, so we need metrics to provide objective ways to navigate within them. If the artificial divisions (such as disciplines) prove inconvenient, or have untoward consequences, we can artificially redivide them. The point is that we should not demand more of our metrics than we demand of the data to which we apply them. And that multiple metrics are much more promising than sticking to just one. The kind of radical relativism and subjectivism that is implied by: "No two papers are about exactly the same thing. Each paper is related to its neighbors in multiple ways. Each object of study can be looked at in multiple ways." would rule out not only metrics, but any sort of evaluation or comparison. By that token, we may as well toss a coin (or conduct an opinion poll) in deciding whom or what to fund, credit, or otherwise reward. 'the man who is ready to prove that metaphysics is wholly impossible... is a brother metaphysician with a rival theory.' 
Stevan Harnad > David Wojick > DOE OSTI > >> >> On 22-Oct-08, at 7:33 AM, Loet Leydesdorff wrote: >> >>> It seems to me that the expectation of the citation frequency is >>> among other >>> things a function of the local density of the citation network. A >>> problem, >>> however, remains how to define the locale: a journal, a theme, a >>> patent >>> class? "Quick and dirty" skips these problems, in my opinion. I >>> agree that >>> it may be pragmatical and shows that a solution is possible in >>> principle. >> >> With open access, it is no longer univariate (i.e., not just >> citation counts) and >> it is definitely no longer journal-centric (author and article >> metrics, not journal >> JIFs, though journal JIFs can be among the metrics used). The point >> is that >> metrics need to be plural, diverse, and validated and weighted by >> field and >> subfield. >> >> To repeat: The "quick and dirty" example I gave was not meant to >> be used, >> but to show that solutions (many solutions) are possible in >> principle, and >> that their main features are that they are (1) multivariate, (2) >> field or even >> subfield-based, (3) require prior (joint) validation, field by >> field, against an >> already validated or face-valid criterion (such as peer >> evaluation), and, most >> important, they are (4) conditional on the provision of a full >> Open Access >> database on which to base them -- a condition that does not yet >> exist, but >> one for which we are now fighting (using the potential of multiple >> Open Access >> metrics as an incentive). >> >>> The problem seems to me in the inference from aggregated citing >>> behavior to >>> an expectation of being cited. The analyst transposed the >>> citation-transaction matrix (Wouters, 1999). 
>> >> The matrix I have in mind is not a citation matrix, but a matrix >> consisting of a >> rich and diverse set of metrics, including downloads, chronometrics >> (growth/decay of citations, downloads), co-citation metrics, co- >> authorship >> metrics, funding metrics, student metrics. patent metrics, link >> metrics, >> hub/authority metrics, endogamy/exogamy metrics, years of >> publication, >> total publications, tag metrics, comment metrics, semiometrics, >> and more -- >> all these harvested from the Open Access Research Web, once all >> articles >> are OA, citation-linked, and download-metered. >> >> The difference between this plurimetric world and the world of >> univariate >> citations will be like the difference between night and day. But >> we are still in >> the night... >> >> Stevan Harnad > > -- > > "David E. Wojick, PhD" > Senior Consultant for Innovation > Office of Scientific and Technical Information > US Department of Energy > http://www.osti.gov/innovation/ > 391 Flickertail Lane, Star Tannery, VA 22654 USA > 540-858-3136 > > http://www.bydesign.com/powervision/resume.html provides my bio and > past client list. > http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ > presents some of my own research on information structure and > dynamics. From Christina.Pikas at JHUAPL.EDU Wed Oct 22 11:07:04 2008 From: Christina.Pikas at JHUAPL.EDU (Pikas, Christina K.) Date: Wed, 22 Oct 2008 11:07:04 -0400 Subject: Speaking of disciplines and normalization... Message-ID: See this news piece in Nature? Published online 20 October 2008 | Nature | doi:10.1038/news.2008.1169 (http://www.nature.com/news/2008/081020/full/news.2008.1169.html) News Is physics better than biology? Citation statistics now comparable across disciplines. Philip Ball Is the physics department at your university performing better than the biology department? 
Answering such questions objectively has been hard, because citation statistics and other bibliometric indicators can't be directly compared across disciplines. But now a team in Italy has found a way to do just that...... The full article isn't available at PNAS yet, but I *think* this ArXiv paper is the pre-print. Offered without commentary - I'll let you all react with shock and dismay (unless this has already appeared here, in which case, oops!) :) Universality of citation distributions: towards an objective measure of scientific impact Authors: Filippo Radicchi, Santo Fortunato, Claudio Castellano (Submitted on 5 Jun 2008) Abstract: We study the distributions of citations received by a single publication within several disciplines, spanning all fields of science. We show that the probability that a paper is cited $c$ times has large variations between different disciplines, but all distributions are rescaled on a universal curve when the relative indicator $c/c_0$ is considered, where $c_0$ is the average number of citations per paper for the discipline. In addition we show that the same universal behavior occurs when citation distributions of papers published in the same field, but in different years, are compared. These findings provide a strong validation of $c/c_0$ as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields. Comments: 14 pages, 5 figures Subjects: Physics and Society (physics.soc-ph); Data Analysis, Statistics and Probability (physics.data-an) Cite as: arXiv:0806.0974v1 [physics.soc-ph] Christina Christina K. Pikas, MLS R.E. Gibson Library & Information Center The Johns Hopkins University Applied Physics Laboratory Voice 240.228.4812 (Washington), 443.778.4812 (Baltimore) Fax 443.778.5353 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dwojick at HUGHES.NET Wed Oct 22 11:06:44 2008 From: dwojick at HUGHES.NET (David E. Wojick) Date: Wed, 22 Oct 2008 11:06:44 -0400 Subject: On Metrics and Metaphysics In-Reply-To: <0645BB0F-BBAD-493D-A399-1A6B56990BDC@ecs.soton.ac.uk> Message-ID: On the contrary, Steve, I believe there are good objective measures to be had. It is just that field and subfield are not among them. The measures need to reflect the actual topological and metrical properties of the thing under study. In this case we are mostly talking about networks and diffusion processes. Neither is properly divisible into bounded lumps called fields. If there are sets, they are fuzzy at best. In fact I was surprised to learn that scientometrics requires fields. Citation and co-authorship do not intrinsically require field attributes. As for multiple measures, it depends on what you are trying to measure. As I have said before, there seems to be a lot of data-driven measurement with no clear idea what is being measured. Science has objective properties that we are trying to understand. David Wojick >On 22-Oct-08, at 10:10 AM, David E. Wojick wrote: > >> Speaking metaphysically, I find the following problematic -- "The point is that metrics need to be plural, diverse, and validated and weighted by field and >> subfield." >> >> I would argue that the concepts of field and subfield are too vague and unscientific to be the basis for objective metrics. Science is an ever changing body of inquiry, with personal, methodological and subject matter clusters that change with scale and over time, sometimes very rapidly. No two papers are about exactly the same thing. Each paper is related to its neighbors in multiple ways. Each object of study can be looked at in multiple ways. >> >> Thus field and subfield are always artificial divisions of convenience imposed on an unbroken web of belief and inquiry. Moreover, these divisions can often be usefully made in multiple ways.
If the science of science depends on these artificial divisions as a basis for measurement then we are in serious trouble. > >Yes, field and subfield are artificial divisions of convenience. So are institutions, laboratories, research funders, research projects, research assessment exercises, etc. But we do make these artificial divisions, so we need metrics to provide objective ways to navigate within them. If the artificial divisions (such as disciplines) prove inconvenient, or have untoward consequences, we can artificially redivide them. > >The point is that we should not demand more of our metrics than we demand of the data to which we apply them. And that multiple metrics are much more promising than sticking to just one. > >The kind of radical relativism and subjectivism that is implied by: > >"No two papers are about exactly the same thing. Each paper is related to its neighbors in multiple ways. Each object of study can be looked at in multiple ways." > >would rule out not only metrics, but any sort of evaluation or comparison. By that token, we may as well toss a coin (or conduct an opinion poll) in deciding whom or what to fund, credit, or otherwise reward. > >'the man who is ready to prove that metaphysics is wholly impossible... is a brother metaphysician with a rival theory.' > >Stevan Harnad > > >> David Wojick >> DOE OSTI >> >>> >>> On 22-Oct-08, at 7:33 AM, Loet Leydesdorff wrote: >>> >>>> It seems to me that the expectation of the citation frequency is >>>> among other >>>> things a function of the local density of the citation network. A >>>> problem, >>>> however, remains how to define the locale: a journal, a theme, a >>>> patent >>>> class? "Quick and dirty" skips these problems, in my opinion. I >>>> agree that >>>> it may be pragmatical and shows that a solution is possible in >>>> principle. 
>>> >>> With open access, it is no longer univariate (i.e., not just citation counts) and >>> it is definitely no longer journal-centric (author and article >>> metrics, not journal >>> JIFs, though journal JIFs can be among the metrics used). The point is that >>> metrics need to be plural, diverse, and validated and weighted by >>> field and >>> subfield. >>> >>> To repeat: The "quick and dirty" example I gave was not meant to be used, >>> but to show that solutions (many solutions) are possible in principle, and >>> that their main features are that they are (1) multivariate, (2) field or even >>> subfield-based, (3) require prior (joint) validation, field by field, against an >>> already validated or face-valid criterion (such as peer evaluation), and, most >>> important, they are (4) conditional on the provision of a full Open Access >>> database on which to base them -- a condition that does not yet exist, but >>> one for which we are now fighting (using the potential of multiple >>> Open Access >>> metrics as an incentive). >>> >>>> The problem seems to me in the inference from aggregated citing >>>> behavior to >>>> an expectation of being cited. The analyst transposed the >>>> citation-transaction matrix (Wouters, 1999). >>> >>> The matrix I have in mind is not a citation matrix, but a matrix >>> consisting of a >>> rich and diverse set of metrics, including downloads, chronometrics >>> (growth/decay of citations, downloads), co-citation metrics, co- authorship >>> metrics, funding metrics, student metrics. patent metrics, link metrics, >>> hub/authority metrics, endogamy/exogamy metrics, years of publication, >>> total publications, tag metrics, comment metrics, semiometrics, and more -- >>> all these harvested from the Open Access Research Web, once all articles >>> are OA, citation-linked, and download-metered. 
>>> >>> The difference between this plurimetric world and the world of >>> univariate >>> citations will be like the difference between night and day. But we are still in >>> the night... >>> >>> Stevan Harnad >> >> -- >> "David E. Wojick, PhD" >> Senior Consultant for Innovation >> Office of Scientific and Technical Information >> US Department of Energy >> http://www.osti.gov/innovation/ >> 391 Flickertail Lane, Star Tannery, VA 22654 USA >> 540-858-3136 >> >> http://www.bydesign.com/powervision/resume.html provides my bio and past client list. >> http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ presents some of my own research on information structure and dynamics. From amsciforum at GMAIL.COM Wed Oct 22 11:33:34 2008 From: amsciforum at GMAIL.COM (Stevan Harnad) Date: Wed, 22 Oct 2008 11:33:34 -0400 Subject: -- POSSIBLE SPAM -- Tracking Open Access Institutional Repository Growth Worldwide Message-ID: (Thanks to Peter Suber and Charles Bailey for drawing attention to this item.) Repository Records Statistics Chris Keene This website provides data on the number of records in UK Institutional Repositories over time. The data was collected from late summer 2006, and has been collected weekly ever since. Since August 2008 it has collected data for Institutional Repositories worldwide. The data is from the excellent ROAR based at the University of Southampton (ECS). *Where to start?* Have a look at the table below (first link); it shows the number of records in each repository (registered in ROAR) for each week since July 2006. You can reorder the table, download the data (e.g. into Excel) and select individual repositories. Also check out the comparison page, which can be reached by first selecting an IR on the right and then selecting an IR to compare with. Finally, the info page is worth a read for details of what you are actually looking at, and issues with the data and presentation.
- Table showing number of records in institutional repositories over time (United Kingdom) - Click on one of the Repositories on the right, for info about that IR and the ability to compare it with others. (see an example here) - Table view of random guess at totals of full text items in UK IRs over time (very experimental, i.e. rubbish). This table is still UK only. Read more: Introduction, details, help and more -------------- next part -------------- An HTML attachment was scrubbed... URL: From havemanf at CMS.HU-BERLIN.DE Wed Oct 22 12:19:04 2008 From: havemanf at CMS.HU-BERLIN.DE (Frank Havemann) Date: Wed, 22 Oct 2008 18:19:04 +0200 Subject: OA e-book: Proceedings of the 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics Message-ID: Dear colleagues, we announce the publication of the Proceedings of the 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics at Humboldt-Universitaet zu Berlin as an open-access e-book available at http://www.collnet.de/Berlin-2008/Proceedings-WIS-2008.pdf More than 80 papers presented at COLLNET 2008 are included in the Proceedings. Each can be reached via the link on its title in the alphabetical List of Contents (at the end of the PDF file). Yours sincerely, Hildrun Kretschmer and Frank Havemann Editors *************************** Dr. Frank Havemann Institute of Library and Information Science Humboldt University Dorotheenstr. 26 D-10099 Berlin Germany tel.: (0049) (030) 2093 4228 http://www.ib.hu-berlin.de/inf/havemann.html From loet at LEYDESDORFF.NET Wed Oct 22 14:45:37 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Wed, 22 Oct 2008 20:45:37 +0200 Subject: Speaking of disciplines and normalization... In-Reply-To: <934BB0B6D8A02C42BC6099FDE8149CCD0406020C@aplesjustice.dom1.jhuapl.edu> Message-ID: Dear Christina, Thank you for noting this to the list.
These authors use the 172 ISI Subject Categories as an example, but their claim seems to be that however one divides the total set into fields, rescaling to the mean for each subset does the job of making results comparable because of the shape of the citation distributions. The mathematicians among us are probably able to prove this. Best wishes, Loet _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20-525 6598; fax: +31-20-525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ _____ From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Pikas, Christina K. Sent: Wednesday, October 22, 2008 5:07 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Speaking of disciplines and normalization... See this news piece in Nature? Published online 20 October 2008 | Nature | doi:10.1038/news.2008.1169 (http://www.nature.com/news/2008/081020/full/news.2008.1169.html) News Is physics better than biology? Citation statistics now comparable across disciplines. Philip Ball Is the physics department at your university performing better than the biology department? Answering such questions objectively has been hard, because citation statistics and other bibliometric indicators can't be directly compared across disciplines. But now a team in Italy has found a way to do just that... The full article isn't available at PNAS yet, but I *think* this ArXiv paper is the pre-print. Offered without commentary - I'll let you all react with shock and dismay (unless this has already appeared here, in which case, oops!) :) Universality of citation distributions: towards an objective measure of scientific impact Authors: Filippo Radicchi, Santo Fortunato, Claudio Castellano (Submitted on 5 Jun 2008) Abstract: We study the distributions of citations received by a single publication within several disciplines, spanning all fields of science.
We show that the probability that a paper is cited $c$ times has large variations between different disciplines, but all distributions are rescaled on a universal curve when the relative indicator $c/c_0$ is considered, where $c_0$ is the average number of citations per paper for the discipline. In addition we show that the same universal behavior occurs when citation distributions of papers published in the same field, but in different years, are compared. These findings provide a strong validation of $c/c_0$ as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields. Comments: 14 pages, 5 figures Subjects: Physics and Society (physics.soc-ph); Data Analysis, Statistics and Probability (physics.data-an) Cite as: arXiv:0806.0974v1 [physics.soc-ph] Christina Christina K. Pikas, MLS R.E. Gibson Library & Information Center The Johns Hopkins University Applied Physics Laboratory Voice 240.228.4812 (Washington), 443.778.4812 (Baltimore) Fax 443.778.5353 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: normalization.Radicchi.2008.pdf Type: application/pdf Size: 138136 bytes Desc: not available URL: From garfield at CODEX.CIS.UPENN.EDU Wed Oct 22 15:35:35 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 22 Oct 2008 15:35:35 -0400 Subject: Sagi, I (Sagi, Itay); Yechiam, E (Yechiam, Eldad) Amusing titles in scientific journals and article citation JOURNAL OF INFORMATION SCIENCE, 34 (5): 680-687 OCT 2008 Message-ID: E-mail Address: yeldad at tx.technion.ac.il Author(s): Sagi, I (Sagi, Itay); Yechiam, E (Yechiam, Eldad) Title: Amusing titles in scientific journals and article citation Source: JOURNAL OF INFORMATION SCIENCE, 34 (5): 680-687 OCT 2008 Language: English Document Type: Article Author Keywords: citation analysis; humor; research evaluation; writing style Keywords Plus: HUMOR; TEXTBOOKS; PRESTIGE; LENGTH; IMPACT Abstract: The present study examines whether the use of humor in scientific article titles is associated with the number of citations an article receives. Four judges rated the degree of amusement and pleasantness of titles of articles published over 10 years (from 1985 to 1994) in two of the most prestigious journals in psychology, Psychological Bulletin and Psychological Review. We then examined the association between the levels of amusement and pleasantness and the article's monthly citation average. The results show that, while the pleasantness rating was weakly associated with the number of citations, articles with highly amusing titles (2 standard deviations above average) received fewer citations. The negative association between amusing titles and subsequent citations cannot be attributed to differences in the title length and pleasantness, number of authors, year of publication, and article type (regular article vs comment). These findings are discussed in the context of the importance of titles for signalling an article's content. 
Addresses: [Yechiam, Eldad] Technion Israel Inst Technol, Fac Ind Engn & Management, Behav Sci Area, IL-32000 Haifa, Israel Reprint Address: Yechiam, E, Technion Israel Inst Technol, Fac Ind Engn & Management, Behav Sci Area, IL-32000 Haifa, Israel. E-mail Address: yeldad at tx.technion.ac.il Cited Reference Count: 23 Publisher: SAGE PUBLICATIONS LTD Publisher Address: 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND ISSN: 0165-5515 DOI: 10.1177/0165551507086261 29-char Source Abbrev.: J INFORM SCI ISO Source Abbrev.: J. Inf. Sci. Source Item Page Count: 8 Subject Category: Computer Science, Information Systems; Information Science & Library Science ISI Document Delivery No.: 349UQ ARMSTRONG JS UNINTELLIGIBLE MANAGEMENT RESEARCH AND ACADEMIC PRESTIGE INTERFACES 10 : 80 1980 ARMSTRONG JS READABILITY AND PRESTIGE IN SCIENTIFIC JOURNALS JOURNAL OF INFORMATION SCIENCE 15 : 123 1989 BRYANT J EFFECTS OF HUMOROUS ILLUSTRATIONS IN COLLEGE TEXTBOOKS HUMAN COMMUNICATION RESEARCH 8 : 43 1981 EASTMAN M ENJOYMENT LAUGHTER : 1936 EREV I VAGUENESS, AMBIGUITY, AND THE COST OF MUTUAL UNDERSTANDING PSYCHOLOGICAL SCIENCE 2 : 321 1991 HARTLEY J J TECHNICAL WRITING 37 : 95 2007 HASKINS JB TITLE-RATING - A METHOD FOR MEASURING READING INTERESTS AND PREDICTING READERSHIP EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT 20 : 551 1960 IAROVICI E THE STRATEGY OF THE HEADLINE SEMIOTICA 77 : 441 1989 KLEIN D RELATIONSHIP BETWEEN HUMOR IN INTRODUCTORY TEXTBOOKS AND STUDENTS EVALUATIONS OF THE TEXTS APPEAL AND EFFECTIVENESS PSYCHOLOGICAL REPORTS 50 : 235 1982 KOESTLER A ART CREATION : 1964 LEWISON G What's in a title? 
Numbers of words and the presence of colons SCIENTOMETRICS 63 : 341 DOI 10.1007/s11192-005-0216-0 2005 LOSCHIAVO FM TEACH PSYCHOL 32 : 247 2005 NIERI M Citation classics in periodontology: a controlled study JOURNAL OF CLINICAL PERIODONTOLOGY 34 : 349 DOI 10.1111/j.1600-051X.2007.01060.x 2007 PIETERS RGM Ad-evoked feelings: Structure and impact on A(ad) and recall JOURNAL OF BUSINESS RESEARCH 37 : 105 1996 ROMERO EJ The use of humor in the workplace ACADEMY OF MANAGEMENT PERSPECTIVES 20 : 58 2006 SNOW AJ PSYCHOL BUSINESS REL : 1930 STREMERSCH S The quest for citations: Drivers of article impact JOURNAL OF MARKETING 71 : 171 2007 UZUN A P INT WORKSH WEB INF : 87 VALLANCE E A DEADPAN LOOK AT HUMOR IN CURRICULUM DISCOURSE (OR, THE SERIOUS VERSUS THE SOLEMN IN EDUCATION) CURRICULUM INQUIRY 10 : 179 1980 WEINBERGER MG THE USE AND EFFECT OF HUMOR IN DIFFERENT ADVERTISING MEDIA JOURNAL OF ADVERTISING RESEARCH 35 : 44 1995 WHISSELL C Pleasantness, activation, and sex differences in advertising PSYCHOLOGICAL REPORTS 81 : 355 1997 YITZHAKI M Relation of the title length of a journal article to the length of the article SCIENTOMETRICS 54 : 435 2002 YITZHAKI M RELATION OF TITLE LENGTH OF JOURNAL ARTICLES TO NUMBER OF AUTHORS SCIENTOMETRICS 30 : 321 1994 From David.Watkins at SOLENT.AC.UK Thu Oct 23 05:43:13 2008 From: David.Watkins at SOLENT.AC.UK (David Watkins) Date: Thu, 23 Oct 2008 10:43:13 +0100 Subject: SIGMETRICS Digest - 21 Oct 2008 to 22 Oct 2008 (#2008-217) In-Reply-To: Message-ID: RE: Study on peer review of grant applications by Dr. Lutz Bornmann Surely it should be no surprise that people selected for prestigious fellowships 'perform' better. The direct resources and status accruing will assist them both in actively researching and in achieving publication. The psychological impact will also be positive. Isn't this just a further example of the 'Matthew Effect' rather than a test of the effectiveness of committee peer review? 
The surprise is that some of those selected STILL don't get the flying start one might anticipate. That might seem to indicate that the process is rewarding some very weak candidates. I've only seen the Abstract in the Listserv so far. Presumably these issues are discussed in the paper itself. Best wishes David Watkins Professor and Chair, Postgraduate Research Centre Southampton Solent University, UK david.watkins at solent.ac.uk From dwojick at HUGHES.NET Thu Oct 23 06:33:09 2008 From: dwojick at HUGHES.NET (David Wojick) Date: Thu, 23 Oct 2008 10:33:09 +0000 Subject: Speaking of disciplines and normalization... Message-ID: An HTML attachment was scrubbed... URL: From loet at LEYDESDORFF.NET Thu Oct 23 07:28:21 2008 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Thu, 23 Oct 2008 13:28:21 +0200 Subject: Speaking of disciplines and normalization... In-Reply-To: <1726216785.6715721224757990822.JavaMail.mail@webmail08> Message-ID: Dear David, Now that I have read this more carefully, it seems to me that this normalization is the same as the Mean Expected Citation Rate which the Hungarian group uses already for two decades or so. As you correctly note, they claim on p. 4 that "(t)he distribution of the relative indicator c/c(0) is universal for all categories considered." These are two claims in one sentence, notably, that they found it in the 172 ISI Subject Categories and that this is universally true. If the latter is the case, one should be able to prove this making simple assumptions about the distribution (e.g., Lotka). Is this true for any Lotka distribution? The strict analogy with voting (footnote 18) suggests that scientists are campaigning for citations. Perhaps, we should consider ads and commercials. :-) Some of us already have been successful in making a business of citation analysis! Note that the analogy with voting is different from a market metaphor. Interesting. 
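The rescaling under discussion, dividing each paper's citation count c by the average c0 of its field, can be sketched in a few lines. The data below are invented for illustration; this is not the Radicchi et al. code, just a minimal sketch of the relative indicator c/c0:

```python
# Sketch of the c/c0 relative indicator: rescale each paper's raw
# citation count c by c0, the mean citations per paper in its field,
# so that counts become comparable across citation cultures.

def relative_indicator(citations_by_field):
    """Map each field's raw citation counts to c / c0,
    where c0 is that field's mean citations per paper."""
    rescaled = {}
    for field, counts in citations_by_field.items():
        c0 = sum(counts) / len(counts)
        rescaled[field] = [c / c0 for c in counts]
    return rescaled

# Hypothetical counts: "physics" papers are cited five times more
# heavily on average than "biology" papers.
data = {
    "physics": [40, 10, 30, 20],  # c0 = 25
    "biology": [8, 2, 6, 4],      # c0 = 5
}
rescaled = relative_indicator(data)
# After rescaling, both fields have mean c/c0 = 1.0, and a physics
# paper with 40 citations (c/c0 = 1.6) becomes directly comparable
# to a biology paper with 8 citations (also c/c0 = 1.6).
```

The universality claim is then that the whole distribution of c/c0, not just its mean, has the same shape in every field.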
Best wishes, Loet _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ _____ From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David Wojick Sent: Thursday, October 23, 2008 12:33 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: Re: [SIGMETRICS] Speaking of disciplines and normalization... Indeed Loet, they appear to be claiming that the distribution has the same form in all fields, differing only in height. A very strong claim. What does it tell us about science? Cheers, David David Wojick, Ph.D., Senior consultant for innovation, DOE OSTI http://www.osti.gov/innovation/ Oct 22, 2008 03:03:30 PM, SIGMETRICS at LISTSERV.UTK.EDU wrote: Dear Christina, Thank you for noting this to the list. These authors use the 172 ISI Subject Categories as an example, but their claim seems that however one divides the total set into fields, rescaling to the mean for each subset does the job of making results comparable because of the shape of the citation distributions. The mathematicians among us are probably able to prove this. Best wishes, Loet _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ _____ From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Pikas, Christina K. Sent: Wednesday, October 22, 2008 5:07 PM To: SIGMETRICS at LISTSERV.UTK.EDU Subject: [SIGMETRICS] Speaking of disciplines and normalization... See this news piece in Nature? Published online 20 October 2008 | Nature | doi:10.1038/news.2008.1169 (http://www.nature.com/news/2008/081020/full/news.2008.1169.html) News Is physics better than biology? 
Citation statistics now comparable across disciplines. Philip Ball Is the physics department at your university performing better than the biology department? Answering such questions objectively has been hard, because citation statistics and other bibliometric indicators can't be directly compared across disciplines. But now a team in Italy has found a way to do just that...... The full article isn't available at PNAS yet, but I *think* this ArXiv paper is the pre-print. Offered without commentary - I'll let you all react with shock and dismay (unless this has already appeared here, in which case, oops!) :) Universality of citation distributions: towards an objective measure of scientific impact Authors: Filippo Radicchi, Santo Fortunato, Claudio Castellano (Submitted on 5 Jun 2008) Abstract: We study the distributions of citations received by a single publication within several disciplines, spanning all fields of science. We show that the probability that a paper is cited $c$ times has large variations between different disciplines, but all distributions are rescaled on a universal curve when the relative indicator $c/c_0$ is considered, where $c_0$ is the average number of citations per paper for the discipline. In addition we show that the same universal behavior occurs when citation distributions of papers published in the same field, but in different years, are compared. These findings provide a strong validation of $c/c_0$ as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields. Comments: 14 pages, 5 figures Subjects: Physics and Society (physics.soc-ph); Data Analysis, Statistics and Probability (physics.data-an) Cite as: arXiv:0806.0974v1 [physics.soc-ph] Christina Christina K. Pikas, MLS R.E. 
Gibson Library & Information Center The Johns Hopkins University Applied Physics Laboratory Voice 240.228.4812 (Washington), 443.778.4812 (Baltimore) Fax 443.778.5353 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.bornmann at GESS.ETHZ.CH Thu Oct 23 08:29:36 2008 From: lutz.bornmann at GESS.ETHZ.CH (Bornmann Lutz) Date: Thu, 23 Oct 2008 14:29:36 +0200 Subject: AW: [SIGMETRICS] SIGMETRICS Digest - 21 Oct 2008 to 22 Oct 2008 (#2008-217) In-Reply-To: A Message-ID: RE: RE: Study on peer review of grant applications by Dr. Lutz Bornmann Please find our paper attached. You are right. In the interpretation of our results it cannot be ruled out that the applicants who received funding from EMBO may have published more after application because they received funding, and not necessarily because the committee made the right choice about who should be funded. The higher productivity of the approved applicants compared with the rejected applicants may be because the committee made the right choice in deciding who should get funding, but also because they had funding, allowing them (better) opportunities for research and subsequent publishing. We know that almost all rejected applicants did their research (proposed in the application) with money from other funding organizations. However, we did not only consider the number of publications in our bibliometric analyses, but also citation counts. Here we have the same results as with the number of publications: for an approved applicant, the expected mean scientific performance is statistically significantly increased by 41% (citations for papers published prior to application) and by 49% (citations for papers published subsequent to application) compared with a rejected applicant. With regard to citations, I cannot see a connection between receiving EMBO funding and the impact of papers. 
In my view it is not possible that the name of a certain funding organization in the acknowledgement of a paper can systematically increase citation counts. Best, Lutz -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of David Watkins Sent: Thursday, 23 October 2008 11:43 To: SIGMETRICS at listserv.utk.edu Subject: Re: [SIGMETRICS] SIGMETRICS Digest - 21 Oct 2008 to 22 Oct 2008 (#2008-217) RE: Study on peer review of grant applications by Dr. Lutz Bornmann Surely it should be no surprise that people selected for prestigious fellowships 'perform' better. The direct resources and status accruing will assist them both in actively researching and in achieving publication. The psychological impact will also be positive. Isn't this just a further example of the 'Matthew Effect' rather than a test of the effectiveness of committee peer review? The surprise is that some of those selected STILL don't get the flying start one might anticipate. That might seem to indicate that the process is rewarding some very weak candidates. I've only seen the Abstract in the Listserv so far. Presumably these issues are discussed in the paper itself. Best wishes David Watkins Professor and Chair, Postgraduate Research Centre Southampton Solent University, UK david.watkins at solent.ac.uk -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Datei.pdf Type: application/pdf Size: 280610 bytes Desc: Datei.pdf URL: From garfield at CODEX.CIS.UPENN.EDU Thu Oct 23 13:04:35 2008 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 23 Oct 2008 13:04:35 -0400 Subject: Neuhaus, C (Neuhaus, Christoph); Daniel, HD (Daniel, Hans-Dieter) Data sources for performing citation analysis: an overview JOURNAL OF DOCUMENTATION, 64 (2): 193-210 2008 Message-ID: E-mail Address: neuhaus at gess.ethz.ch Author(s): Neuhaus, C (Neuhaus, Christoph); Daniel, HD (Daniel, Hans-Dieter) Title: Data sources for performing citation analysis: an overview Source: JOURNAL OF DOCUMENTATION, 64 (2): 193-210 2008 Excerpt: As highlighted by Moed (2005, p. 316), including a greater number of data sources to perform citation analysis does not necessarily lead to more valid assessments of scientific advancement and of scientists' productivity. Given the methodological and technical difficulties in citation analysis, citation-enhanced databases need to be examined carefully, both in regard to their potentialities and their limitations for citation analysis (Moed, 2005). Particularly, they should be explored to determine whether they provide more complete citation data for publication types not covered in the Thomson Scientific citation indexes. Decisive for further bibliometric studies will be which databases perform best as data sources for particular fields and time periods. As seen in this paper, each bibliographic database covers unique content, but none is comprehensive. In this respect, new citation-enhanced databases must be viewed more as a supplement than as a substitute for the Thomson Scientific citation indexes. Certainly, the usefulness of citation-enhanced databases will grow as the amount of content increases, e.g. when analysing the long-term impact of scientific work. 
In the future, citation-enhanced databases could potentially also be used to calculate reference standards to allow for field normalisation, or metrics similar to the highly controversial journal impact factor calculated on an annual basis by Thomson Scientific. Definitely, coverage is not the only criterion determining the usefulness of bibliographic databases for performing citation analysis. The quality of data must be considered as well as the database implementation's facilities for browsing, searching and analysing data. In any case, the central assumption of bibliometric assessment of research performance remains the same: scientists refer in their work to the earlier work of other scientists, which they have found useful in pursuing their own research. Obviously, the process of citation is a complex one and assessing research performance by citation analysis is a vulnerable method. Problems such as different motives for giving or not giving a reference to a particular publication, self-citations, or differences in publication and citation practices among fields and subfields have all been clearly outlined (e.g. MacRoberts and MacRoberts, 1996). Despite these limitations, many studies have demonstrated that citation analysis provides useful information for research evaluation and that "ex ante peer review should be supplemented ex post with bibliometrics and other metrics of science to give a broader and powerful methodology with which to assess scientific advancement" (Daniel, 2005, p. 147). Note 1. The Century of Science initiative makes available approximately 850,000 publications from 262 scientific journals published from 1900 to 1944. 
For further information see www.thomsonscientific.com/centuryofscience/ Language: English Document Type: Article Author Keywords: information; publications; reference services; internet; online database Keywords Plus: SCIENCE; DATABASES; SOCIOLOGY; SEARCHES; SCHOLAR; WEB Abstract: Purpose - The purpose of this paper is to provide an overview of new citation-enhanced databases and to identify issues to be considered when they are used as a data source for performing citation analysis. Design/methodology/approach - The paper reports the limitations of Thomson Scientific's citation indexes and reviews the characteristics of the citation-enhanced databases Chemical Abstracts, Google Scholar and Scopus. Findings - The study suggests that citation-enhanced databases need to be examined carefully, with regard to both their potentialities and their limitations for citation analysis. Originality/value - The paper presents a valuable overview of new citation-enhanced databases in the context of research evaluation. Addresses: [Neuhaus, Christoph] Swiss Fed Inst Technol, Zurich, Switzerland; [Daniel, Hans-Dieter] Univ Zurich, Zurich, Switzerland Reprint Address: Neuhaus, C, Swiss Fed Inst Technol, Zurich, Switzerland. E-mail Address: neuhaus at gess.ethz.ch Cited Reference Count: 26 Times Cited: 0 Publisher: EMERALD GROUP PUBLISHING LIMITED Publisher Address: HOWARD HOUSE, WAGON LANE, BINGLEY BD16 1WA, W YORKSHIRE, ENGLAND ISSN: 0022-0418 DOI: 10.1108/00220410810858010 29-char Source Abbrev.: J DOC ISO Source Abbrev.: J. Doc. 
Source Item Page Count: 18 Subject Category: Computer Science, Information Systems; Information Science & Library Science ISI Document Delivery No.: 341RY *CHEM ABSTR SERV STNOTES 24 : 2005 ABT HA ORG STRATEGIES ASTRO 6 : 169 2005 BAUER K D LIB MAGAZINE 11 : 2005 BRAUN T WEB KNOWLEDGE FESTSC : 251 2000 CRONIN B ACCOUNTING FOR INFLUENCE - ACKNOWLEDGMENTS IN CONTEMPORARY SOCIOLOGY JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 44 : 406 1993 CRONIN B Acknowledgement trends in the research literature of information science JOURNAL OF DOCUMENTATION 57 : 427 2001 CRONIN B Comparative citation rankings of authors in monographic and journal literature: A study of sociology JOURNAL OF DOCUMENTATION 53 : 263 1997 DANIEL HD Publications as a measure of scientific advancement and of scientists' productivity LEARNED PUBLISHING 18 : 143 2005 GARFIELD E CITATION INDEXES FOR SCIENCE - NEW DIMENSION IN DOCUMENTATION THROUGH ASSOCIATION OF IDEAS SCIENCE 122 : 108 1955 GARFIELD E The significant scientific literature appears in a small core of journals SCIENTIST 10 : 13 1996 GILES CL Who gets acknowledged: Measuring scientific contributions through automatic acknowledgment indexing PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 101 : 17599 2004 GLANZEL W An item-by-item subject classification of papers published in multidisciplinary and general journals using reference analysis SCIENTOMETRICS 44 : 427 1999 HICKS D The difficulty of achieving full coverage of international social science literature and the bibliometric consequences SCIENTOMETRICS 44 : 193 1999 HOOD WW Informetric studies using databases: Opportunities and challenges SCIENTOMETRICS 58 : 587 2003 JACSO P As we may search - Comparison of major features of the Web of Science, Scopus, and Google Scholar citation-based and citation-enhanced databases CURRENT SCIENCE 89 : 1537 2005 JACSO P GOOGLE SCHOLAR REDUX : 2005 JACSO P Citation-enhanced indexing/abstracting databases ONLINE 
INFORMATION REVIEW 28 : 235 DOI 10.1108/14684520410543689 2004 JASCO P ONLINE INFORM REV 29 : 107 2005 LAWRENCE S IEEE COMPUT 32 : 67 1999 LINDHOLMROMANTSCHUK Y The role of monographs in scholarly communication: An empirical study of philosophy, sociology and economics JOURNAL OF DOCUMENTATION 52 : 389 1996 MACROBERTS M SCIENTOMETRICS 36 : 3 1996 MERTON RK THE MATTHEW EFFECT IN SCIENCE .2. CUMULATIVE ADVANTAGE AND THE SYMBOLISM OF INTELLECTUAL PROPERTY ISIS 79 : 606 1988 MOED HF CITATION ANAL RES EV : 2005 RIDLEY DD Citation searches in on-line databases: possibilities and pitfalls TRAC-TRENDS IN ANALYTICAL CHEMISTRY 20 : 1 2001 WHITLEY KM Analysis of SciFinder scholar and web of science citation searches JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 53 : 1210 DOI 10.1002/asi.10192 2002 YOUNGEN GK Citation patterns to traditional and electronic preprints in the published literature COLLEGE & RESEARCH LIBRARIES 59 : 448 1998 From garfield at CODEX.CIS.UPENN.EDU Thu Oct 23 15:26:00 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Thu, 23 Oct 2008 15:26:00 -0400 Subject: Reedijk, J (Reedijk, Jan); Moed, HF (Moed, Henk F.) Is the impact of journal impact factors decreasing? Message-ID: E-mail Address: moed at cwts.leidenuniv.nl Author(s): Reedijk, J (Reedijk, Jan); Moed, HF (Moed, Henk F.) Title: Is the impact of journal impact factors decreasing? Source: JOURNAL OF DOCUMENTATION, 64 (2): 183-192 2008 Language: English Document Type: Article Author Keywords: information retrieval; electronic journals; user studies Keywords Plus: BIOMOLECULAR CHEMISTRY; CITATIONS; SCIENCE; INACCURACIES; INDICATORS Abstract: Purpose - The purpose of this paper is to examine the effects of the use of the citation-based journal impact factor for evaluative purposes upon the behaviour of authors and editors. 
It seeks to give a critical examination of a number of claims as regards the manipulability of this indicator on the basis of an empirical analysis of publication and referencing practices of authors and journal editors. Design/methodology/approach - The paper describes mechanisms that may affect the numerical values of journal impact factors. It also analyses general, "macro" patterns in large samples of journals in order to obtain indications of the extent to which such mechanisms are actually applied on a large scale. Finally it presents case studies of particular science journals in order to illustrate what their effects may be in individual cases. Findings - The paper shows that the commonly used journal impact factor can to some extent be relatively easily manipulated. It discusses several types of strategic editorial behaviour, and presents cases in which journal impact factors were - intentionally or otherwise - affected by particular editorial strategies. These findings lead to the conclusion that one must be most careful in interpreting and using journal impact factors, and that authors, editors and policy makers must be aware of their potential manipulability. They also show that some mechanisms as yet occur rather infrequently, while for others it is most difficult if not impossible to assess empirically how often they are actually applied. If their frequency of occurrence increases, one should come to the conclusion that the impact of impact factors is decreasing. Originality/value - The paper systematically describes a number of claims about the manipulability of journal impact factors that are often based on "informal" or even anecdotal evidence, and illustrates how these claims can be further examined in thorough empirical research on large data samples. Addresses: [Moed, Henk F.] 
Leiden Univ, Ctr Sci & Technol Studies CWTS, Leiden, Netherlands; [Reedijk, Jan] Leiden Univ, Leiden Inst Chem, NL-2300 RA Leiden, Netherlands Reprint Address: Moed, HF, Leiden Univ, Ctr Sci & Technol Studies CWTS, Leiden, Netherlands. E-mail Address: moed at cwts.leidenuniv.nl Cited Reference Count: 25 Times Cited: 0 Publisher: EMERALD GROUP PUBLISHING LIMITED Publisher Address: HOWARD HOUSE, WAGON LANE, BINGLEY BD16 1WA, W YORKSHIRE, ENGLAND ISSN: 0022-0418 DOI: 10.1108/00220110810858001 29-char Source Abbrev.: J DOC ISO Source Abbrev.: J. Doc. Source Item Page Count: 10 Subject Category: Computer Science, Information Systems; Information Science & Library Science ISI Document Delivery No.: 341RY AGRAWAL AA Corruption of journal Impact Factors TRENDS IN ECOLOGY & EVOLUTION 20 : 157 DOI 10.1016/j.tree.2005.02.002 2005 BOLLEN J Toward alternative metrics of journal impact: A comparison of download and citation data INFORMATION PROCESSING & MANAGEMENT 41 : 1419 DOI 10.1016/j.ipm.2005.03.024 2005 GARFIELD E How can impact factors be improved? BRITISH MEDICAL JOURNAL 313 : 411 1996 GARFIELD E CITATION INDEXING IT : 1979 GARFIELD E CURR CONTENTS 47 : 5 1970 GARFIELD E CURRENT COMMENTS GHO : 1985 GARFIELD E NEW YEAR, NEW BUILDING CURRENT CONTENTS 21 : 5 1980 GARFIELD E CITATION ANALYSIS AS A TOOL IN JOURNAL EVALUATION - JOURNALS CAN BE RANKED BY FREQUENCY AND IMPACT OF CITATIONS FOR SCIENCE POLICY STUDIES SCIENCE 178 : 471 1972 GLANZEL W Better late than never? 
On the chance to become highly cited only beyond the standard bibliometric time horizon SCIENTOMETRICS 58 : 571 2003 HARRIS RC OPUSCULA PHILOLICHEN 2 : 1 2005 MOED HF CITATION ANAL RES EV : 2005 MOED HF A new classification system to describe the ageing of scientific journals and their impact factors JOURNAL OF DOCUMENTATION 54 : 387 1998 MOED HF Impact factors can mislead NATURE 381 : 186 1996 MOED HF Proceedings of the Sixth International Conference on Science and Technology Indicators - Introduction SCIENTOMETRICS 51 : 5 2001 MOED HF A critical analysis of the journal impact factors of Angewandte Chemie and the Journal of the American Chemical Society - Inaccuracies in published impact factors based on overall citations only SCIENTOMETRICS 37 : 105 1996 MORLEY JE Flying through 5 years JOURNALS OF GERONTOLOGY SERIES A-BIOLOGICAL SCIENCES AND MEDICAL SCIENCES 59 : 1270 2004 POTTER CV Comment: 2004's fastest organic and biomolecular chemistry! CHEMICAL COMMUNICATIONS : 2781 DOI 10.1039/b417565b 2004 POTTER CV Comment: 2004's fastest organic and biomolecular chemistry! CHEMICAL SOCIETY REVIEWS 33 : 567 DOI 10.1039/b417566m 2004 POTTER CV Comment: 2004's fastest organic and biomolecular chemistry! JOURNAL OF MATERIALS CHEMISTRY 14 : E17 DOI 10.1039/b417514j 2004 POTTER CV Comment: 2004's fastest organic and biomolecular chemistry! NEW JOURNAL OF CHEMISTRY 28 : 1395 DOI 10.1039/b417567k 2004 POTTER CV Comment: 2004's fastest organic and biomolecular chemistry! ORGANIC & BIOMOLECULAR CHEMISTRY 2 : 3535 DOI 10.1039/b417338b 2004 REEDIJK J Sense and nonsense of science citation analyses: comments on the monopoly position of ISI and citation inaccuracies. Risks of possible misuse and biased citation and impact data. 
NEW JOURNAL OF CHEMISTRY 22 : 767 1998 SEGLEN PO Citations and journal impact factors: questionable indicators of research quality ALLERGY 52 : 1050 1997 SEGLEN PO Why the impact factor of journals should not be used for evaluating research BRITISH MEDICAL JOURNAL 314 : 498 1997 VANRAAN AFJ SCIENTOMETRICS 59 : 491 2004 From zahedi_zz at YAHOO.COM Sun Oct 26 05:21:22 2008 From: zahedi_zz at YAHOO.COM (Zohreh Zahedi) Date: Sun, 26 Oct 2008 02:21:22 -0700 Subject: OA e-book: Proceedings of the 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics In-Reply-To: <48FF5278.5090006@cms.hu-berlin.de> Message-ID: Dear Manager, As I mentioned before, I am interested in including only my paper abstract in the proceedings. I would appreciate it if you would remove its full text from the list. Regards, Zahedi --- On Wed, 10/22/08, Frank Havemann wrote: From: Frank Havemann Subject: [SIGMETRICS] OA e-book: Proceedings of the 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics To: SIGMETRICS at LISTSERV.UTK.EDU Date: Wednesday, October 22, 2008, 7:49 PM Dear colleagues, we announce the publication of the Proceedings of the 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics at Humboldt-Universitaet zu Berlin as an open-access e-book available at http://www.collnet.de/Berlin-2008/Proceedings-WIS-2008.pdf More than 80 papers presented at COLLNET 2008 are included in the Proceedings. Each can be reached via the link on its title in the alphabetical List of Contents (at the end of the PDF file). Yours sincerely, Hildrun Kretschmer and Frank Havemann Editors *************************** Dr. Frank Havemann Institute of Library and Information Science Humboldt University Dorotheenstr. 26 D-10099 Berlin Germany tel.: (0049) (030) 2093 4228 http://www.ib.hu-berlin.de/inf/havemann.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From liberman at SERVIDOR.UNAM.MX Mon Oct 27 09:33:50 2008 From: liberman at SERVIDOR.UNAM.MX (Sofia Liberman) Date: Mon, 27 Oct 2008 07:33:50 -0600 Subject: OA e-book: Proceedings of the 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics In-Reply-To: <48FF5278.5090006@cms.hu-berlin.de> Message-ID: Dear Frank and Hildrun, I am very sorry but I would appreciate if my full paper is not published in the Proceedings, so I would like it to be removed from the publication. Best regards! Sofia Liberman Frank Havemann wrote: > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear colleagues, > > we announce the publication of the Proceedings of the > > 2008 COLLNET Conference on Webometrics, Informetrics and Scientometrics > > at Humboldt-Universitaet zu Berlin as an open-access e-book available at > > http://www.collnet.de/Berlin-2008/Proceedings-WIS-2008.pdf > > More than 80 papers presented at COLLNET 2008 are included in the > Proceedings. They can be reached via the link on its title in the > alphabetical List of Contents (at the end of the PDF file). > > Yours sincerely, > > Hildrun Kretschmer and Frank Havemann > > Editors > > > *************************** > Dr. Frank Havemann > Institute of Library and Information Science > Humboldt University > Dorotheenstr. 26 > D-10099 Berlin > Germany > > tel.: (0049) (030) 2093 4228 > http://www.ib.hu-berlin.de/inf/havemann.html > > From garfield at CODEX.CIS.UPENN.EDU Mon Oct 27 16:32:38 2008 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Mon, 27 Oct 2008 16:32:38 -0400 Subject: Ortega, JL (Luis Ortega, Jose); Aguillo, IF (Aguillo, Isidro F.) Linking patterns in European Union countries: geographical maps of the European academic web space JOURNAL OF INFORMATION SCIENCE, 34 (5): 705-714 OCT 2008 Message-ID: Author(s): Ortega, JL (Luis Ortega, Jose); Aguillo, IF (Aguillo, Isidro F.) 
Title: Linking patterns in European Union countries: geographical maps of the European academic web space Source: JOURNAL OF INFORMATION SCIENCE, 34 (5): 705-714 OCT 2008 Language: English Document Type: Article Author Keywords: European Higher Education Area; geographical maps; link analysis; webometrics Keywords Plus: GRAPH STRUCTURE; IMPACT FACTORS; INTERLINKING; INTERNET Abstract: This national level study intends to describe the existing relationships between the number of web pages, inlinks to and outlinks from Europe and national or internal links in the European Higher Education Area through a sample of 535 European universities' web domains. Several geographical maps are introduced to summarize and visualize this statistical information. The main result shows that larger countries, in number of web pages, link less to the remaining European countries, while the smaller ones are characterized by their link profusion to the European zone. The great presence of national links in large and medium size countries confirms that the European academic web space is shaped by the aggregation of national sub-networks, while the similar low presence in small countries suggests that these are linked to another large country. Addresses: [Luis Ortega, Jose] CSIC, Ctr Informac & Documentac Cientif, Cybermetr Lab, Madrid 28002, Spain Reprint Address: Ortega, JL, CSIC, Ctr Informac & Documentac Cientif, Cybermetr Lab, Joaquin Costa 22, Madrid 28002, Spain. E-mail Address: jortega at cindoc.csic.es Cited Reference Count: 29 Times Cited: 0 Publisher: SAGE PUBLICATIONS LTD Publisher Address: 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND ISSN: 0165-5515 DOI: 10.1177/0165551507086990 29-char Source Abbrev.: J INFORM SCI ISO Source Abbrev.: J. Inf. Sci. 
Source Item Page Count: 10 Subject Category: Computer Science, Information Systems; Information Science & Library Science ISI Document Delivery No.: 349UQ AGUILLO IF 9 INT C SCI TECHN IN 2006 ALMIND TC Informetric analyses on the World Wide Web: Methodological approaches to 'webometrics' JOURNAL OF DOCUMENTATION 53 : 404 1997 BAEZAYATES R CARACTERISTICS WEB C : 2005 BAEZAYATES R CYBERMETRICS 9 : 2005 BERNERSLEE T ELECTRONIC NETWORKIN 2 : 52 1992 BERNERSLEE TJ THE WORLDWIDE WEB COMPUTER NETWORKS AND ISDN SYSTEMS 25 : 454 1992 BONITZ M GLOBALISIERUNG WISSE : 2000 BRODER A Graph structure in the Web COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING 33 : 309 2000 CALDARELLI G The fractal properties of Internet EUROPHYSICS LETTERS 52 : 386 2000 CHEN M Particulate organic carbon export fluxes in the Canada Basin and Bering Sea as derived from Th-234/U-238 disequilibria ARCTIC 56 : 32 2003 COTHEY V P 10 INT C INT SOC S : 2005 DAUNCEY H CONVERGENCE INT J RE 3 : 72 1997 DUMEZ H GLOBAL INTERNET EC U : 2001 HEIMERIKS G CYBERMETRICS 10 : 2006 MUNZNER T IEEE S INF VIS SAN F : 1996 MUSGROVE PB A method for identifying clusters in sets of interlinking Web spaces SCIENTOMETRICS 58 : 657 2003 NORUZI A Web presence and impact factors for Middle-Eastern countries ONLINE 30 : 22 2006 ORTEGA JL 11 INT C INT SOC SCI 2007 ORTEGA JL SCIENTOMETR IN PRESS : POLANCO X NTTS ETK 2001 C HERS : POROJAN A Trade flows and spatial effects: The gravity model revisited OPEN ECONOMIES REVIEW 12 : 265 2001 SMITH A Web impact factors for Australasian universities SCIENTOMETRICS 54 : 363 2002 SMITH AG P INF ONL ON DISC 99 : 1999 SMITH AG P INFO 99 C INT INF : 1999 THELWALL M Graph structure in three national academic webs: Power laws with anomalies JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 54 : 706 DOI 10.1002/asi.10267 2003 THELWALL M Interlinking between Asia-Pacific University Web sites SCIENTOMETRICS 55 : 363 2002 THELWALL M 
From garfield at CODEX.CIS.UPENN.EDU Tue Oct 28 12:32:44 2008
From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield)
Date: Tue, 28 Oct 2008 12:32:44 -0400
Subject: Gorraiz, J (Gorraiz, Juan); Schloegl, C (Schloegl, Christian) A bibliometric analysis of pharmacology and pharmacy journals: Scopus versus Web of Science JOURNAL OF INFORMATION SCIENCE, 34 (5): 715-725 OCT 2008
Message-ID:
E-mail Address: christian.schloegl at uni-graz.at
Author(s): Gorraiz, J (Gorraiz, Juan); Schloegl, C (Schloegl, Christian)
Title: A bibliometric analysis of pharmacology and pharmacy journals: Scopus versus Web of Science
Source: JOURNAL OF INFORMATION SCIENCE, 34 (5): 715-725 OCT 2008
Language: English
Document Type: Article
Author Keywords: bibliometric analysis; comparison of databases; data reliability; impact factor; Journal Citation Reports; Scopus; Web of Science
Abstract: Our study examines the suitability of Scopus for bibliometric analyses in comparison with the Web of Science (WOS). In particular, we want to explore whether the outcome of bibliometric analyses differs between Scopus and WOS and, if so, in which respects. Since journal indicators vary among disciplines, we analysed only journals from the subject area of pharmacy and pharmaceutical sciences. Nonetheless, our study also has broader implications. Its major findings are: (a) each top-100 JCR pharmacy journal was covered by Scopus. (b) The impact factor was higher for 82 journals and the immediacy index greater for 78 journals in Scopus in 2005.
Pharmacy journals with a high impact factor in the JCR usually have a high impact factor in Scopus. (c) Several medium-impact journals could be identified in Scopus which were not reported in the JCR. (d) The two databases differed in the number of articles within a tolerable margin of deviation for most journals.
Addresses: [Gorraiz, Juan] Univ Vienna, Lib & Arch Serv, Cent Lib Phys, A-1010 Vienna, Austria; [Schloegl, Christian] Graz Univ, Inst Informat Sci & Informat Syst, A-8010 Graz, Austria
Reprint Address: Schloegl, C, Univ Str 15-F3, A-8010 Graz, Austria
E-mail Address: christian.schloegl at uni-graz.at
Cited Reference Count: 23
Times Cited: 0
Publisher: SAGE PUBLICATIONS LTD
Publisher Address: 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
ISSN: 0165-5515
DOI: 10.1177/0165551507086991
29-char Source Abbrev.: J INFORM SCI
ISO Source Abbrev.: J. Inf. Sci.
Source Item Page Count: 11
Subject Category: Computer Science, Information Systems; Information Science & Library Science
ISI Document Delivery No.: 349UQ
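The two indicators this study compares across databases are computed by standard formulas: the two-year impact factor divides citations received in year Y by items published in the two preceding years, and the immediacy index divides same-year citations by same-year items. The sketch below applies these standard formulas; the journal counts are hypothetical, not figures from the study.

```python
def impact_factor(cites_to_prev2, items_prev2):
    """Two-year impact factor: citations in year Y to items published
    in Y-1 and Y-2, divided by the citable items of Y-1 and Y-2."""
    return cites_to_prev2 / items_prev2

def immediacy_index(cites_same_year, items_same_year):
    """Citations in year Y to items published in Y, per item of Y."""
    return cites_same_year / items_same_year

# Hypothetical journal: the same formula yields different values in two
# databases when their coverage of citing articles differs.
wos_if = impact_factor(cites_to_prev2=450, items_prev2=150)     # 3.0
scopus_if = impact_factor(cites_to_prev2=480, items_prev2=150)  # 3.2
imm = immediacy_index(cites_same_year=30, items_same_year=120)  # 0.25
assert scopus_if > wos_if
```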
From loet at LEYDESDORFF.NET Fri Oct 31 06:40:46 2008
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Fri, 31 Oct 2008 11:40:46 +0100
Subject: How are new citation-based journal indicators adding to the bibliometric toolbox?; preprint version available
Message-ID:

How are new citation-based journal indicators adding to the bibliometric toolbox?

Abstract

The launching of Scopus and Google Scholar, and methodological developments in social network analysis, have made many more indicators for evaluating journals available than the traditional Impact Factor, Cited Half-life, and Immediacy Index of the ISI. In this study, these new indicators are compared with one another and with the older ones. Do the various indicators measure new dimensions of the citation networks, or are they highly correlated among themselves? Are they robust and relatively stable over time? Two main dimensions are distinguished, size and impact, which together shape influence. The H-index combines the two dimensions and can also be considered an indicator of reach (like indegree). PageRank is mainly an indicator of size, but has important interactions with centrality measures. The Scimago Journal Ranking (SJR) indicator provides an alternative to the Journal Impact Factor, but its computation is less straightforward.

Keywords: impact, H-index, journal, citation, centrality, ranking

_____
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel. +31-20-525 6598; fax: +31-842239111
loet at leydesdorff.net; http://www.leydesdorff.net/
Visiting Professor 2007-2010, ISTIC, Beijing; Honorary Fellow 2007-2010, SPRU, University of Sussex
Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated, 385 pp.; US$ 18.95; The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics
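The H-index discussed in the abstract above, which combines the size and impact dimensions, has a standard definition: the largest h such that h items each received at least h citations. A minimal sketch of that computation (the citation counts in the examples are invented for illustration):

```python
def h_index(citation_counts):
    """Largest h such that h items each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this item still clears the threshold
        else:
            break
    return h

# Four items with >= 4 citations each, but not five with >= 5.
assert h_index([10, 8, 5, 4, 3]) == 4
# One very highly cited item does not raise h beyond the third rank here.
assert h_index([25, 8, 5, 3, 3]) == 3
assert h_index([]) == 0
```

Note how the second example illustrates the size/impact trade-off the abstract refers to: a single heavily cited item adds impact but not size, so it cannot raise the H-index by itself.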