From David.Watkins at SOLENT.AC.UK Sun Apr 2 09:32:03 2006
From: David.Watkins at SOLENT.AC.UK (David Watkins)
Date: Sun, 2 Apr 2006 14:32:03 +0100
Subject: SIGMETRICS Digest - 30 Mar 2006 to 31 Mar 2006 (#2006-44)
Message-ID:

RAE et al.....

For me the basic problem with the RAE was an unintended consequence. The government's problem was that of 'big science'. HEFCE's desire was to concentrate funding for expensive scientific / medical / engineering research in a few institutions. Probably a good idea. However, it swept up all the other disciplines, too, into a one-size-fits-all system. Thus the lone researcher who can do good work in the history stacks and local archives - if given sufficient time - was treated like the guy who needs CERN time and a cast of hundreds to check out his/her theory. That is just bizarre. In the humanities and social sciences the results have been poor for academe, since the basic premise was wrong.

Switching the whole system to a metrics-based one merely continues this and distorts scholarly good practice outside the sciences; we know that publication practices, citation behaviour, etc. are quite different in non-science disciplines in the absence of a distortion like the RAE. What is needed is a funding regime which gives scholarly space-time and base-line funding to all academics - possibly working on the basis of a notionally equal four-way split between teaching / teaching preparation / administration / research. Those who needed no special facilities could do good work on this basis, as they always did traditionally. The rest of HEFCE's research funds would go to the Research Councils. Other academics who needed large-scale funding / facilities would use their 'quartile' to work up proposals for substantial funding from the Research Councils and others. This could - but needn't - involve buying out more of their own admin - or even teaching - time. Teaching everywhere would be enhanced, since few established academics would have / need non-teaching roles (evaluation of the whole scholarly role internally, with a level playing field), and the costs of external evaluation would be restricted to the project level, which is the most appropriate one. There would be no institutional 'halo' to support underperformers, but good scholars working in isolation or in small / peripheral institutions would not be discriminated against, and recruitment could return to the assessment of academic institutional need rather than just looking for 'four alpha publications' and to hell with the other academic skills and interests we would all favour seeing in a new colleague.

************************************************
Professor David Watkins
Postgraduate Research Centre
Southampton Business School
East Park Terrace
Southampton SO14 0RH

David.Watkins at solent.ac.uk
023 80 319610 (Tel)
+44 23 80 31 96 10 (Tel)
02380 33 26 27 (fax)
+44 23 80 33 26 27 (fax)

From P.Zhou at UVA.NL Mon Apr 3 03:51:36 2006
From: P.Zhou at UVA.NL (Ping Zhou)
Date: Mon, 3 Apr 2006 09:51:36 +0200
Subject: Looking for a software
Message-ID:

Sorry for cross-posting. Does anyone know if there is a technique or software that can prevent a document from being downloaded? Thanks in advance for providing the information!

Ping Zhou

From notsjb at LSU.EDU Mon Apr 3 14:19:19 2006
From: notsjb at LSU.EDU (Stephen J Bensman)
Date: Mon, 3 Apr 2006 13:19:19 -0500
Subject: RAE Questions
Message-ID:

I would like to use the commentary of David Watkins above to raise some questions I have concerning the British RAE.
I have some insight on this matter, for I worked closely with the National Research Council (NRC) on its last evaluation of US research-doctorate programs. I was chosen as one of three persons to test the database which the NRC developed during this evaluation. The NRC data is some of the best data in the world with which to analyze variables involved in academic evaluations, and it is freely available at the following web site for those interested:

http://www.nap.edu/readingroom/books/researchdoc/

Unfortunately it is not the full data set but only that published in the book, and you have to buy the book to understand it.

My questions concern the effect of the types of distributions with which the RAEs are dealing. It is well known that informetric distributions are not only highly skewed but highly stable over time. Using the NRC data I found that peer ratings, publication rates, citation rates, etc. correlated from one period to another at about 0.9. Moreover, the same programs maintained their dominant positions despite the addition of numerous new programs. This is a natural consequence of the cumulative advantage or success-breeds-success process underlying this stratification system. Translating this into UK terms, it means that Oxford and Cambridge have been dominant for about the last millennium and will maintain their dominance for the next millennium, barring their conversion into madrassas. The RAEs should not only be finding this but reinforcing it.

Second, such a hierarchical and stable social system is probably functionally necessary for the advance of science. Then comes the question of the mobility of individual scientists within this system. Here, I think, may lie the primary fault of the RAEs. The distributions are not only skewed between programs but within programs. Analysis of the elite programs reveals a surprising amount of dead wood among their scientists. Some of it consists of extinct volcanoes that ceased producing new work decades ago. The way the RAE allocates research resources seems to have the potential of feather-bedding these drones at the expense of the productive scientists at lesser institutions, making the system not only hierarchical but closed.

And, third, evaluating research on a departmental basis seems to run the risk of introducing variables that are extraneous to the quality of the research. It is well known that large departments are automatically more highly rated than small ones just on that basis alone. Moreover, such a broad method of evaluating research reduces the flexibility in defining proper sets for comparative purposes. Thus, there is the danger of losing sight of new and developing fields.

Taking the above points all together, I think that the US utilization of such evaluations is more correct than that of the UK. In the US such evaluations are regarded as what they actually are--beauty contests in which academic programs strut down the runway in their swim suits, flaunting their particulars. ("And, Miss Harvard Physics, how do you think the world can be immediately saved?") The data can be used to determine how the system is actually functioning and thus explore ways in which the system can be improved. For example, I was able to use the NRC data to determine that there is no anti-Southern bias in the evaluation of research-doctorate programs, thus breaking numerous faculty hearts here at LSU. However, such evaluations should in no way be utilized for allocating research funds, which is best done on a project-by-project basis.
Fortunately, the UK way is politically impossible in the US, which is a federal state. The South may have lost the Civil War, but its soldiers inflicted 9 casualties for every one suffered, putting strict limits on the central Leviathan due to the huge butcher's bill. Any attempt to implement the UK system would be immediately killed in Congress, probably by the Southern delegations.

These musings are only speculative, and I may be entirely wrong about the dangers of the UK system.

SB
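The cumulative-advantage ("success-breeds-success") process Bensman describes can be made concrete with a short simulation. This is an illustrative sketch with invented parameters (100 programs, two observation periods), not an analysis of the NRC or RAE data: new citations go to programs in proportion to the citations they already hold, which yields a skewed distribution whose ranking barely moves from one period to the next.

import random

random.seed(42)

N_PROGRAMS = 100
N_PERIODS = 2
EVENTS_PER_PERIOD = 20000

# Every program starts with one citation so all have nonzero weight.
counts = [1] * N_PROGRAMS
snapshots = []

for _ in range(N_PERIODS):
    for _ in range(EVENTS_PER_PERIOD):
        # Cumulative advantage: each new citation goes to program i with
        # probability proportional to the citations program i already has.
        winner = random.choices(range(N_PROGRAMS), weights=counts, k=1)[0]
        counts[winner] += 1
    # Snapshots are cumulative, mirroring the way reputations accumulate.
    snapshots.append(list(counts))

def ranks(values):
    # Rank from most to least cited (1 = top program).
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    result = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        result[idx] = rank
    return result

def spearman(x, y):
    # Spearman rank correlation, computed as the Pearson correlation of the ranks.
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

top10_share = sum(sorted(snapshots[-1], reverse=True)[:10]) / sum(snapshots[-1])
print("Top 10 of %d programs hold %.0f%% of all citations" % (N_PROGRAMS, 100 * top10_share))
print("Rank correlation, period 1 vs period 2: %.2f" % spearman(snapshots[0], snapshots[1]))

Run as is, the top ten programs end up with far more than a proportional tenth of the citations, and the period-to-period rank correlation comes out at or above the 0.9 figure Bensman reports for the NRC data, even though every program started from the same position.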
From harnad at ECS.SOTON.AC.UK Mon Apr 3 15:44:42 2006
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Mon, 3 Apr 2006 20:44:42 +0100
Subject: RAE Questions
In-Reply-To:
Message-ID:

May I point out that the UK's is a *dual* funding system: (1) direct, peer-reviewed, competitive research proposals, exactly as in the US, submitted to the UK funding councils (RCUK), plus (2) a much smaller but significant portion of top-sliced funding, based on the RAE.

Stephen Bensman, above, seems to think it's all RAE. It's not! Not even mostly. But the RAE top-slice is important too. Is it just a Matthew effect? Surely not just, but no doubt it is in part, and perhaps it's not bad to have some longer-term reinforcement of prior productivity: the length of its tail (its time constant) can be calibrated, but it is quite possible that the dual system is more stable, robust and equitable than an exclusive bid-by-bid horse-race.

Very little of this has been objectively measured and monitored, let alone calibrated and optimized as it went along (the excellent NRC data notwithstanding). Now that the RAE is going metric, all sorts of new possibilities for enriching, diversifying, monitoring and maximizing predictivity and validity open up, particularly with a webwide, digital, open-access research database to base it all on. Keep your mind open and hold onto your hats, because things should soon be picking up for PostGutenberg Scientometrics!

Stevan Harnad
From dgoodman at PRINCETON.EDU Mon Apr 3 19:55:13 2006
From: dgoodman at PRINCETON.EDU (David Goodman)
Date: Mon, 3 Apr 2006 19:55:13 -0400
Subject: RAE Questions
In-Reply-To:
Message-ID:

Dear Stephen,

You report that NRC ratings

> correlated from one period to another at about 0.9.

but you object to a department-based distribution of research funds because

> Analysis of the elite programs reveals a surprising amount of dead wood
> among their scientists

If both statements are correct, then the only way the NRC ratings can be valid is if the presence of a few such people in a department does not actually harm the department. The difficulty with the RAE approach is then "only" the problem of equitably distributing resources within a department. To a certain extent this is done by the students and post-doctoral fellows: they will work only with the active members.

Dr. David Goodman
Associate Professor
Palmer School of Library and Information Science
Long Island University
and formerly
Princeton University Library

dgoodman at liu.edu
dgoodman at princeton.edu
From loet at LEYDESDORFF.NET Tue Apr 4 14:00:12 2006
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Tue, 4 Apr 2006 20:00:12 +0200
Subject: RAE Questions
In-Reply-To:
Message-ID:

Dear Stevan,

Now that we have amply discussed the political side of the RAE, let us turn to your research program of replacing the RAE with metrics. Two problems have been mentioned which cannot easily be solved:

1. the skewness of the distributions
2. the heterogeneity of departments as units of analysis

The first problem can be addressed by using non-parametric regression analysis (probit or logit) instead of multivariate regression analysis of the LISREL type. However, will this provide you with a ranking? I cannot judge, because I have never done it myself. Stephen Bensman also mentioned the instability of these skewed curves over time. In any case, I would be worried about comparisons over time because of auto-correlation (auto-covariance) effects. I have run into these problems before, and therefore I am a big fan of entropy statistics. But policy makers tend not to understand the results, even if one can teach them something about "reduction of the uncertainty"; they want firm numbers to legitimate decisions.

The second problem arises because you will have institutional units of analysis which may be composed of different disciplinary affiliations, and to a variable extent. For example, I am myself misplaced in a unit of communication studies. In other cases, universities will have set up "interdisciplinary units" on purpose, while individual scholars continue to affiliate themselves with their original disciplines. We know that publication and citation practices vary among disciplines. Thus, one should not compare apples with oranges.

I would be inclined to advise against embarking on this research project before one has an idea of how to handle these two problems. Fortunately, I was not the reviewer :-).

With best wishes,
Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681;
loet at leydesdorff.net ; http://www.leydesdorff.net/

From harnad at ECS.SOTON.AC.UK Tue Apr 4 15:34:21 2006
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Tue, 4 Apr 2006 20:34:21 +0100
Subject: RAE Questions
In-Reply-To:
Message-ID:

On Tue, 4 Apr 2006, Loet Leydesdorff wrote:

> Now that we have amply discussed the political side of the RAE, let us turn
> to your research program of replacing the RAE with metrics.

Loet, we can discuss my research program if you like, but we were not discussing that. We were discussing the UK government's proposed policy of replacing the RAE with metrics. That has nothing to do with my research program.
They decided to switch from the present hybrid system (of re-reviewing published articles plus some metrics) to metrics alone because metrics alone are already so highly correlated with the current RAE outcomes (in many, though not necessarily all, fields). No critique of metrics over-rides that decision where the two are already so highly correlated. It would be pure superstition to continue going through the ergonomically and economically wasteful motions of the re-review when the outcome is already there in the metrics.

> Two problems have been mentioned which cannot easily be solved:
>
> 1. the skewness of the distributions

I think there are ways to adjust for this.

> 2. the heterogeneity of departments as units of analysis

That is a separate matter. The proposal to swap metrics alone for a redundant, expensive, time-consuming hybrid process that yields the same outcome was based on the units of analysis as they now are. The units too could be revised, and perhaps should be, but that is an independent question.

> The first problem can be addressed by using non-parametric regression
> analysis (probit or logit) instead of multivariate regression analysis of
> the LISREL type. However, will this provide you with a ranking? I cannot
> judge, because I have never done it myself.

The present RAE outcome (rankings) is highly correlated with metrics already. If we correct the metrics for skewness, this may continue to give the same highly correlated outcome, or another one. The RAE can then decide which one it wants to trust more, and why, but either way, it has no bearing on the validity of the decision to scrap re-reviews for metrics when they give almost the same outcome anyway.

> Stephen Bensman also mentioned the instability of these skewed curves over
> time. In any case, I would be worried about comparisons over time because
> of auto-correlation (auto-covariance) effects.

Whatever their skewness, temporal variability and auto-correlation, the rankings based on metrics are very similar to the rankings based on re-review. The starting point is to have a metric that does *at least as well* as the re-review did, and then to start work on optimizing it. Let us not forget the real alternatives at issue. As I said, it would be superstitious and absurd to go back from cheap metrics to profligate re-reviews because of putative blemishes in the metrics *when both yield the same outcome*.

> I have run into these problems before, and therefore I am a big fan of
> entropy statistics. But policy makers tend not to understand the results,
> even if one can teach them something about "reduction of the uncertainty";
> they want firm numbers to legitimate decisions.

If policy makers have been content to rank the departments and shell out the money in proportion with the ranks for two decades now, and those ranks are derivable from cheap metrics instead of costly re-reviews, they will understand enough to know they should go with metrics. Then you can give them a course on how to improve on their metrics with "entropy statistics".

> The second problem arises because you will have institutional units of
> analysis which may be composed of different disciplinary affiliations, and
> to a variable extent.

That is already true, and it is true regardless of whether the RAE does or does not do the re-review over and above the metrics which are already highly correlated with the outcome. If rejuggling units improves the equity and predictivity of the rankings, by all means rejuggle them.
But in and of itself that has nothing to do with the obvious good sense of scrapping profligate re-review in favour of parsimonious metrics when they yield the same outcome -- even with the present unit structure.

> For example, I am myself misplaced in a unit of communication studies. In
> other cases, universities will have set up "interdisciplinary units" on
> purpose, while individual scholars continue to affiliate themselves with
> their original disciplines. We know that publication and citation practices
> vary among disciplines. Thus, one should not compare apples with oranges.

It sounds worth remedying, but the question is orthogonal to the question of whether to retain wasteful re-review or to rely on metrics that give the same outcome at a fraction of the cost in lost time and money (that could have been devoted to funding research instead of just rating it).

> I would be inclined to advise against embarking on this research project
> before one has an idea of how to handle these two problems. Fortunately, I
> was not the reviewer :-).

I am not sure which research project you are talking about. (I was just funded for a metrics project in Canada, but it has nothing to do with the RAE. The RAE, in contrast, has elected to scrap re-review in favour of the metrics that already yield the same outcome, but that has nothing to do with my research project.)

Stevan Harnad
American Scientist Open Access Forum
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html

Chaire de recherche du Canada
Ctr. de neuroscience de la cognition
Université du Québec à Montréal
Montréal, Québec, Canada H3C 3P8
http://www.crsc.uqam.ca/

Professor of Cognitive Science
Dpt. Electronics & Computer Science
University of Southampton
Highfield, Southampton SO17 1BJ, United Kingdom
http://www.ecs.soton.ac.uk/~harnad/

From loet at LEYDESDORFF.NET Tue Apr 4 16:07:39 2006
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Tue, 4 Apr 2006 22:07:39 +0200
Subject: RAE Questions
In-Reply-To:
Message-ID:

Oh, I thought that you were advocating a metrics research project instead of the RAE. But this now seems all to be hot air. Success with your project in Canada.

Best,
Loet
From notsjb at LSU.EDU Tue Apr 4 16:28:31 2006
From: notsjb at LSU.EDU (Stephen J Bensman)
Date: Tue, 4 Apr 2006 15:28:31 -0500
Subject: RAE Questions
Message-ID:

It seems that I have walked into the middle of a discussion that I do not understand. I just want to correct one thing. The following comment was made:

"Stephen Bensman also mentioned the instability of these skewed curves over time. In any case, I would be worried about comparisons over time because of auto-correlation (auto-covariance) effects."

I did not state that. I stated that these skewed curves are highly stable over time, with high intertemporal correlations and the same programs comprising the top stratum for decades. For example, the ten chemistry programs most highly rated in 1910 were still among the top 15 programs most highly rated in 1993. It is interesting to note that Garfield found the same phenomenon in respect to journals, which have the same high distributional stability over time. This is probably due to the cumulative advantage process underlying both phenomena. I suppose it leads to high auto-correlation also. From this perspective, RAEs every four years seem somewhat redundant. You might as well give the money to the same departments you found at the top in the previous rating without any analysis. My main concern was not about the stability of the hierarchy--which is a given--but about the mobility of individuals within the hierarchy. There is nothing more self-destructive than a closed hierarchy. It leads to class war of the worst kind.

Really I was only speculating, musing about negative possibilities that I perceived while reading about the RAEs. I was really raising questions more than answering them.
SB
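Bensman's 1910-versus-1993 chemistry example is, in effect, a claim about top-stratum persistence. A minimal sketch of how that persistence could be quantified; the program names and ratings below are invented toy numbers, not the actual NRC, 1910, or RAE figures:

import random

def top_stratum_persistence(old_ratings, new_ratings, k_old=10, k_new=15):
    # Fraction of the top k_old programs in the earlier rating that still
    # appear among the top k_new programs in the later rating.
    top_old = sorted(old_ratings, key=old_ratings.get, reverse=True)[:k_old]
    top_new = set(sorted(new_ratings, key=new_ratings.get, reverse=True)[:k_new])
    return sum(1 for program in top_old if program in top_new) / k_old

# Invented toy data: twenty programs, with later scores equal to the earlier
# ones plus modest noise, which is roughly what a high intertemporal
# correlation looks like in practice.
random.seed(0)
early = {"program_%02d" % i: 20.0 - i for i in range(20)}
late = {name: score + random.gauss(0, 1.5) for name, score in early.items()}

print(top_stratum_persistence(early, late))  # close to 1.0 for a stable hierarchy

With noise this small relative to the spread of the scores, essentially all of the original top ten remain inside the later top fifteen, which is the pattern described above; only much larger year-to-year perturbations would open the top stratum.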
From loet at LEYDESDORFF.NET Tue Apr 4 17:15:40 2006
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Tue, 4 Apr 2006 23:15:40 +0200
Subject: RAE Questions
In-Reply-To:
Message-ID:

Yes, Stephen, I meant your mobility issue. Thanks for the correction. My points were also mainly questions in response to the plea for a metrics program as a replacement for the RAE (without defending the latter in any sense). The idea of a multivariate regression is attractive, but there are some unsolved problems which Stevan Harnad thinks can easily be solved or dismissed.

Best,
Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681;
loet at leydesdorff.net ; http://www.leydesdorff.net/
> > For example, I am myself misplaced in a unit of communication studies. In other cases, universities will have set up "interdisciplinary units" on purpose while individual scholars continue to affiliate themselves with their original disciplines. We know that publication and citation practices vary among disciplines. Thus, one should not compare apples with oranges.
>
> It sounds worth remedying, but the question is orthogonal to the question of whether to retain wasteful re-review or to rely on metrics that give the same outcome at a fraction of the cost in lost time and money (that could have been devoted to funding research instead of just rating it).
>
> > I would be inclined to advise against embarking on this research project before one has an idea of how to handle these two problems. Fortunately, I was not the reviewer :-).
>
> I am not sure which research project you are talking about. (I was just funded for a metrics project in Canada, but it has nothing to do with the RAE. The RAE, in contrast, has elected to scrap re-review in favour of the metrics that already yield the same outcome, but that has nothing to do with my research project.)
>
> Stevan Harnad
> American Scientist Open Access Forum
> http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
>
> Chaire de recherche du Canada, Ctr. de neuroscience de la cognition, Université du Québec à Montréal, Montréal, Québec, Canada H3C 3P8, http://www.crsc.uqam.ca/
> Professor of Cognitive Science, Dpt. Electronics & Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom, http://www.ecs.soton.ac.uk/~harnad/

From notsjb at LSU.EDU Tue Apr 4 21:43:43 2006
From: notsjb at LSU.EDU (Stephen J Bensman)
Date: Tue, 4 Apr 2006 20:43:43 -0500
Subject: RAE Questions
Message-ID:

I have given lectures to faculty on the use of citation analysis for purposes of faculty evaluation. I have always prefaced these lectures with the comment that I feel guilty in that I may be passing out hand grenades to kindergarten students. Citation analysis can be extraordinarily destructive if misapplied. If you take just one department, you can see the problem. Even in a department covering a relatively homogeneous field, you have professors engaged in different specialities of differing size. Then you have the problem of differing professional age. Due to these factors you cannot use raw citation counts, but must compare professors to an outside set of the same subject specialty and the same professional age. Then you have to standardize the scores for comparative purposes. Just defining the subject set can cause horrendous difficulties, as certain professors may consider a given professor's subject set insignificant in the first place and unworthy of even being pursued. I mean, you should already see the difficulties. I have always come back from these experiences clawed to pieces and seeking a hole in which to hide. It is more politics and art form than science. The trouble with a thing like the NRC ratings is that it works on gross parameters and misses certain strengths. For example, LSU is not highly rated in history and English, but change the sets to Southern history and Southern literature, and it suddenly comes out on top. I am sure that you can invent even more difficulties. It is good to study these things, but it is best to analyze people as individuals rather than in aggregate.
Use of citation analysis is so provocative that I have advised the person in charge of serials cancellations not to use the impact factor in any way to analyze journals, lest he be killed by the faculty, and, if he does use it, to hide the fact that he is doing so. Faculty do not like outsiders with measures they consider questionable sticking their noses in what they consider their business.

SB
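The procedure sketched in the message above -- compare each professor to an outside reference set of the same subject specialty and professional age, then standardize -- can be illustrated in a few lines. The specialty label, career ages, citation counts, and the two-year age window below are all invented for the example:

```python
# Illustrative sketch only: z-scoring a professor's citation count against an
# invented reference cohort of researchers in the same specialty and of
# similar professional age, rather than comparing raw counts across a department.

from statistics import mean, stdev

# (specialty, years since PhD, citations) for a hypothetical reference set
reference_set = [
    ("southern history", 12, 110), ("southern history", 11, 95),
    ("southern history", 13, 140), ("southern history", 12, 80),
    ("southern history", 10, 60),
]

def standardized_score(specialty, career_age, citations, refs, age_window=2):
    cohort = [c for (s, a, c) in refs
              if s == specialty and abs(a - career_age) <= age_window]
    if len(cohort) < 2:
        raise ValueError("cohort too small to standardize against")
    return (citations - mean(cohort)) / stdev(cohort)

# A raw count of 105 says little by itself; the z-score says how it sits
# relative to peers of the same specialty and professional age.
print(round(standardized_score("southern history", 12, 105, reference_set), 2))
```

The raw count only acquires meaning relative to the cohort; change the set definition and the z-score moves with it, which is exactly the difficulty described above.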
From loet at LEYDESDORFF.NET Wed Apr 5 02:25:58 2006
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Wed, 5 Apr 2006 08:25:58 +0200
Subject: RAE Questions
In-Reply-To: Message-ID:

Dear Stephen and colleagues,

There are legitimate uses of these measures like, for example, learning faculty how to consider their own position in the literature. This may enable them to improve the quality and visibility of their contributions, to reorganize units, etc.
The other legitimate use, of course, is our scholarly communication about how to study these bibliometric tools and how to use them as variables in a model of how the sciences (and technologies) develop.

A number of problems of these measures in policy processes have now been listed. I want to add one: as long as we are not able to rank document sets (e.g., journals) clearly, it remains tricky to make an inference to authors and institutions. OA will not help solve these problems. :-)

With best wishes,

Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/

From notsjb at LSU.EDU Wed Apr 5 09:08:51 2006
From: notsjb at LSU.EDU (Stephen J Bensman)
Date: Wed, 5 Apr 2006 08:08:51 -0500
Subject: RAE Questions
Message-ID:

Loet,
The one reason I am so fascinated with your work is that you deal with the fundamental problem--proper set definition for the analysis. This is the most difficult and, for me, the most subjective, value-laden part of the analysis. Once people can agree on these, the rest is proper statistical technique, provided you use multiple measures that can cross-check each other--expert ratings, citations, library use, Internet use, etc. It is a fundamental mistake to use citation analysis by itself, particularly since it seems that ISI data are dominated by the citation patterns of the US academic social stratification system. This may make it invalid for other areas. It is also necessary to be aware that there is not just one answer but multiple ones, depending on your objectives, etc. The US NRC ratings were a broad-brush effort to determine the importance of programs in disciplines as a whole and not in specific subsets, which can be crucial. For example, one specific subset that was not well covered by the NRC ratings was how to deal with wetlands, river delta areas, flood control, coastal zone areas, etc. I think that you would find that perhaps the Netherlands would rank the highest in this subset, and this is of the utmost interest to Louisiana now.

I still think that the questions I raised about the RAEs are valid ones.

SB
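The point about multiple measures that cross-check each other can also be made concrete. In the sketch below the five programs, the four indicators, and every value are invented; the mechanics are simply pairwise rank correlations among the indicators:

```python
# Illustrative only: invented scores for five programs on four indicators.
# Cross-checking means looking at how well the indicators' rankings agree;
# pairs that diverge sharply flag indicators (or set definitions) to revisit.
from itertools import combinations
from scipy.stats import spearmanr

indicators = {
    "peer_rating":   [4.8, 4.1, 3.6, 2.9, 2.2],
    "citations":     [950, 610, 400, 180, 90],
    "library_use":   [300, 280, 150, 160, 40],
    "web_downloads": [12000, 9000, 2500, 4000, 800],
}

for a, b in combinations(indicators, 2):
    rho, _ = spearmanr(indicators[a], indicators[b])
    print(f"{a:13s} vs {b:13s}  rho = {rho:+.2f}")
```

Pairs that agree lend support to relying on the cheaper indicator; a pair that diverges is a signal that a set definition, or one of the measures, needs a closer look.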
From loet at LEYDESDORFF.NET Wed Apr 5 11:18:42 2006
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Wed, 5 Apr 2006 17:18:42 +0200
Subject: RAE Questions
In-Reply-To: Message-ID:

Dear Stephen:

Thank you so much for your nice words. With best wishes and in friendship,

Loet
The US NRC ratings were a broad brush > effort to determine the importance of programs in disciplines > as whole and not in specific subsets, which can be crucial. > For example, one specific subset that was not well covered by > the NRC ratings was how to deal with wetlands, river delta > areas, flood control, coastal zone areas, etc. I think that > you would find that perhaps the Netherlands would rank the > highest in this subset, and this is of the utmost interest to > Louisiana now. > > I still think that the questions I raised about the RAEs are > valid ones. > > SB > > > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 04/05/2006 > 01:25:58 AM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] RAE Questions > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Stephen and colleagues, > > There are legitimate uses of these measures like, for > example, learning faculty how to consider their own position > in the literature. This may enable them to improve the > quality and visibility of their contributions, to reorganize > units, etc. The other legitimate use, of course, is our > scholarly communication about how to study these bibliometric > tools and how to use them as variables in a model of how the > sciences (and technologies) develop. > > A number of problems of these measures in policy processes > have now been listed. I want to add one: As long as we are > not able to rank document sets (e.g., journals) clearly, it > remains tricky to make an inference to authors and > institutions. OA will not help solving these problems. :-) > > With best wishes, > > > Loet > > ________________________________ > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > > Sent: Wednesday, April 05, 2006 3:44 AM > > To: SIGMETRICS at LISTSERV.UTK.EDU > > Subject: Re: [SIGMETRICS] RAE Questions > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > I have given lectures to faculty on the use of citation > analysis for > > purposes of faculty evaluation. I have always prefaced > these lectures > > with the comment that I feel guilty in that I may be > passing out hand > > grenades to kindergarten students. Citation analysis can be > > extraordinarily destructive if misapplied. If you take just one > > department, you can see the problem. Even in a department > covering a > > relatively homogeneous field, you have professors engaged > in different > > specialities of differing size. > > Then you have the problem of differing professional age. > Due to these > > factors you cannot use raw citation counts, but must compare > > professors to an outside set of the same subject specialty and same > > professional age. > > Then you have to standardize the scores for comparative purposes. 
> > Just defining the subject set can cause horrendous difficulties, as > > certain professors may consider a given professor's subject set > > insignificant in the first place and unworthy of even being > pursued. > > I mean, you should already see the difficulties. I have > always come > > back from these experiences clawed to pieces and seeking a hole in > > which to hide. It is more politics and art form than a > science. The > > trouble with a thing like the NRC ratings is that it works on gross > > parameters and misses certain strengths. For example, LSU is not > > highly rated in history and English, but change the sets to > Southern > > history and Southern literature, and it suddenly comes out > on top. I > > am sure that you can invent even more difficulties. It is good to > > study these things, but it is best to analyze people as individuals > > rather than in aggregate. Use of citation analysis is so > provocative, > > that I have advised the person in charge of serials > cancellations not > > to use impact factor in any way to analyze journals lest he > be killed > > by the faculty and, if he does, to hide the fact that he is > doing so. > > Facutly do not like outsiders with measures they consider > questionable > > sticking their noses in what they consider their business. > > > > SB > > > > > > > > > > > > > > Loet Leydesdorff @listserv.utk.edu> on > > 04/04/2006 04:15:40 PM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at listserv.utk.edu > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] RAE Questions > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Yes, Stephen, I meant your mobility issue. Thanks for the > correction. > > My points were also mainly questions in response to the plea for a > > metrics program as a replacement for the RAE (without defending the > > latter in any sense). The idea of a multi-variate regression is > > attractive, but there are some unsolved problems which > Steven Harnad > > thinks that can easily be solved or dismissed. > > > > Best, Loet > > > > ________________________________ > > Loet Leydesdorff > > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal > > 48, 1012 CX Amsterdam. > > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; > > http://www.leydesdorff.net/ > > > > > > > > > -----Original Message----- > > > From: ASIS&T Special Interest Group on Metrics > > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen > J Bensman > > > Sent: Tuesday, April 04, 2006 10:29 PM > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > Subject: Re: [SIGMETRICS] RAE Questions > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > It seems that I have walked into the middle of a discussion > > that I do > > > not understand. I just want to correct one thing. > > > The following comment was made below: > > > > > > "Stephen Bensman also mentioned the instability of these > > skewed curves > > > over time. I would anyhow be worried about the comparisons > > over time > > > because of auto-correlation > > > auto-covariance) effects." > > > > > > I did not state that. 
I stated that these skewed curves > are highly > > > stable over time with high intertemporal correlations and > the same > > > programs comprising the top stratum for decades. > > > For example, the ten chemistry programs most highly > rated in 1910 > > > were still among the top 15 programs most highly rated in > > 1993. It is > > > interesting to note that Garfield found the same phenomenon > > in respect > > > to journals, which have the same high distributional > stability over > > > time. This is probably due to the cumulative advantage process > > > underlying both phenomena. I suppose it leads to high > > > auto-correlation also. > > > From this perspective RAEs every four years seem somewhat of a > > > redundancy. You might as well give the money to the same > > departments > > > you found at the top in the previous rating without any > > analysis. My > > > main concern was not about the stability of the > > hierarchy--which is a > > > given--but about mobility of individuals within the > > hierarchy. There > > > is nothing more self-destructive than a closed hierarchy. > > > It leads to class war of the worse kind. > > > > > > Really I was only speculating, musing about negative > possibilities > > > that I perceived while reading about the RAEs. > > > I was really raising questions more than answering them. > > > > > > SB > > > > > > > > > > > > > > > Stevan Harnad @listserv.utk.edu> on > > 04/04/2006 > > > 02:34:21 PM > > > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > > > > > To: SIGMETRICS at listserv.utk.edu > > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > > > Subject: Re: [SIGMETRICS] RAE Questions > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > On Tue, 4 Apr 2006, Loet Leydesdorff wrote: > > > > > > > Now that we have amply discussed the political side of the > > > RAE, let us > > > turn > > > > to your research program of replacing the RAE with a metrics. > > > > > > Loet, we can discuss my research progam if you like, but we > > were not > > > discussing that. We were discussing the UK government's proposed > > > policy of replacing the RAE with metrics. That has nothing > > to do with > > > my research program. > > > They decided to switch from the present hybrid system (of > > re-reviewing > > > published articles plus some metrics) to metrics alone > > because metrics > > > alone are already so highly correlated with the current RAE > > outcomes > > > (in many, though not necessarily all fields). No critique > > of metrics > > > over-rides that decision where the two are already so highly > > > correlated. > > > It would be pure superstition to continue going through the > > > ergonomically and eocnomically wasteful motions of the > > re-review when > > > the outcome is already there in the metrics. > > > > > > > Two problems have been mentioned which cannot easily be solved: > > > > > > > > 1. the skewness of the distributions > > > > > > I think there are ways to adjust for this. > > > > > > > 2. the heterogeneity of department as units of analysis > > > > > > That is a separate matter. The proposal to swap metrics > alone for a > > > redundant, expensive, time-consuming hybrid process that > yields the > > > same outcome was based on the units of analysis as they now > > are. 
The > > > units too could be revised, and perhaps should be, but that is an > > > independent question. > > > > > > > The first problem can be solved by using non-parametric > regression > > > analysis > > > > (probit or logit) instead of multi-variate regression > > > analysis of the > > > LISREL > > > > type. However, will this provide you with a ranking? I > > > cannot oversee > > > > it because I never did it myself. > > > > > > The present RAE outcome (rankings) is highly correlated > > with metrics > > > already. If we correct the metrics for skewness, this may > > continue to > > > give the same highly correlated outcome, or another one. > > RAE can then > > > decide which one it wants to trust more, and why, but > > either way, it > > > has no bearing on the validity of the decision to scrap > > re-reviews for > > > metrics when they give almost the same outcome anyway. > > > > > > > Stephen Bensman also mentioned the instability of these skewed > > > > curves over time. I would anyhow be worried about the > comparisons > > > > over time because of > > auto-correlation > > > > (auto-covariance) effects. > > > > > > Whatever their skewness, temporal variability and > auto-correlation, > > > the ranking based on metrics are very similar to the > > rankings based on > > > re-review. The starting point is to have a metric that does > > *at least > > > as > > > well* as the re-review did, and then to start work on > > optimizing it. > > > Let us not forget the real alternatives at issue. As I > > said, it would > > > be superstitious and absurd to go back from cheap metrics to > > > profligate re-reviews because of putative blemishes in > the metrics > > > *when both yield the same outcome*. > > > > > > > I have run into these problems before, and therefore I am a > > > big fan of > > > > entropy statistics. But policy makers tend not to > understand the > > > > results > > > if > > > > one can teach them something about "reduction of the > uncertainty". > > > > They > > > will > > > > wish firm numbers to legitimate decisions. > > > > > > If policy makers have been content to rank the departments > > and shell > > > out the money in proportion with the ranks for two > decades now, and > > > those ranks are derivable from cheap metrics instead of costly > > > re-reviews, they will understand enough to know they > should go with > > > metrics. Then you can give them a course on how to > improve on their > > > metrics with "entropy statistics". > > > > > > > The second problem is generated because you will have > > institutional > > > > units > > > of > > > > analysis which may be composed of different disciplinary > > > affiliations > > > > and > > > to > > > > a variable extent. > > > > > > That is already true, and it is true regardless of > whether the RAE > > > does or does not do the re-review over and above the > > metrics which are > > > already highly correlated with the outcome. > > > If rejuggling units improves the equity and predictivity of the > > > rankings, by all means rejuggle them. But in and of > itself that has > > > nothing to do with the obvious good sense of scrapping profligate > > > re-review in favour of parsimonious metrics when they yield > > the same > > > outcome -- even with the present unit structure. > > > > > > > For example, I am myself misplaced in a unit of > > > communication studies. 
> > > > In other cases, universities will have set up > "interdisciplinary > > > > units" on purpose while individual scholars continue > > > to > > > > affiliate themselves with their original disciplines. We > > know that > > > > publication and citation practices vary among > > disciplines. Thus, one > > > should > > > > not compare apples with oranges. > > > > > > It sounds worth remedying, but the question is orthogonal to the > > > question of whether to retain wasteful re-review or to rely > > on metrics > > > that give the same outcome at a fraction of the cost in > > lost time and > > > money (that could have been devoted to funding research > instead of > > > just rating it). > > > > > > > I would be inclined to disadvise to embark on this > > research project > > > before > > > > one has an idea of how to handle these two problems. > > Fortunately, I > > > > was > > > not > > > > the reviewer :-). > > > > > > I am not sure which research project you are talking about. > > > (I was just funded for a metrics project in Canada, but it > > has nothing > > > to do with the RAE. The RAE, in contrast, has elected to scrap > > > re-review in favour of the metrics that already yield the same > > > outcome, but that has nothing to do with my research project.) > > > > > > Stevan Harnad > > > American Scientist Open Access Forum > > > http://amsci-forum.amsci.org/archives/American-Scientist-Open- > > > Access-Forum.html > > > > > > > > > Chaire de recherche du Canada Professor of > > Cognitive Science > > > Ctr. de neuroscience de la cognition Dpt. Electronics & > > > Computer Science > > > Universit? du Qu?bec ? Montr?al University of Southampton > > > Montr?al, Qu?bec Highfield, Southampton > > > Canada H3C 3P8 SO17 1BJ United Kingdom > > > http://www.crsc.uqam.ca/ > > > http://www.ecs.soton.ac.uk/~harnad/ > > > > > > From notsjb at LSU.EDU Wed Apr 5 17:16:04 2006 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Wed, 5 Apr 2006 16:16:04 -0500 Subject: Future UK RAEs to be Metrics-Based Message-ID: Loet, The principle in real estate is "location, location, location." The principle in program evaluation is "set definition, set definition, set definition." I pointed out in another posting that a major discovery of the 1993 NRC rating was that all previous ratings in the biosciences were incorrect due to an incorrect method of classification resulting in non-comparable sets. I am somewhat proud that I was able to show to the NRC people how a change in classification method had an enormous impact on the ratings of LSU, turning us from a nonentity into something quite respectable and more in line with Louisiana's pioneering role in medicine through mainly the Ochsner Clinic and the first attempt at a charity hospital system. SB Loet Leydesdorff @LISTSERV.UTK.EDU> on 03/31/2006 11:57:46 AM Please respond to ASIS&T Special Interest Group on Metrics Sent by: ASIS&T Special Interest Group on Metrics To: SIGMETRICS at LISTSERV.UTK.EDU cc: (bcc: Stephen J Bensman/notsjb/LSU) Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based Dear Stephen, Although I am politically at the other end of the spectrum, I fully agree with your critique of the RAE. But the critique would equally hold for a "metric" that would rate departments against each other as proposed by some of our colleagues. The problem is to take departments as units of analysis. 
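The empirical claim carrying Harnad's side of the exchange above, that a metrics-based ranking and the panel-based RAE ranking are already highly correlated, can be made concrete with a rank-order check; working with ranks rather than raw counts also sidesteps the skewness objection, since only the ordering matters. A minimal Python sketch with invented department scores and grades (none of these numbers come from any actual RAE):

# Minimal sketch: rank correlation between a metrics-based score and
# panel grades for a handful of departments. All numbers are invented.
from scipy.stats import spearmanr

departments  = ["A", "B", "C", "D", "E", "F"]
metric_score = [412.0, 350.5, 90.2, 55.0, 30.1, 12.4]   # e.g. citations per staff member (skewed)
panel_grade  = [5.0, 5.0, 4.0, 3.5, 3.0, 2.0]           # e.g. panel-style grades

rho, p = spearmanr(metric_score, panel_grade)
print(f"Spearman rank correlation: {rho:.2f} (p = {p:.3f})")
# A rho close to 1 is the kind of result the "metrics already predict the
# RAE outcome" argument rests on; whether that warrants dropping the panel
# re-review is the policy question debated above.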
With best wishes, Loet > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stephen J Bensman > Sent: Thursday, March 30, 2006 10:32 PM > To: SIGMETRICS at listserv.utk.edu > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Gee, I consider myself anything but a cultural elitist. > After all, I work at LSU. The basic problem of the RAE is > that it is biased against an institution like LSU. At least > under the American system, good researchers at a place like > LSU have an even chance to obtain research funding, and many > take advantage of this system. That way a good researcher > maintains his independence and advance his career. This way > LSU plays a major role as a launch pad for up and coming > scientists. The British RAE always reminded me of the > Tsarist system of krugovaia poruka, where all the peasants of > a commune were held liable for communal taxes. This was the > taxation system of serfdom, causing peasants to be chained to > the commune, stifling individual initiative, thereby causing > agricultural stagnation, and ultimately a violent revolution. > If this makes me a cultural elitist, then so be it. > > SB > > > > > Phil Davis @LISTSERV.UTK.EDU> on 03/30/2006 > 02:09:28 PM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Stephen, I wouldn't call you a "capitalist pig" but a > willfully blind, cultural elitist. In countries where > education is wholly (or mostly) funded by the government -- > not just the UK and Europe, but Canada and others -- the > government is concerned about making sure that everyone gets > some modicum of funding. That does not mean a completely > equitable rationing system, but it ensures a base-level of > funding. In the United States, this base-level funding often > comes from one's own department or college. Granted, the > capitalist-approach you speak of does reward the best and > greatest, and this Winner-takes-all approach does result in > pioneering research, yet it only rewards the few. > > --Phil Davis > > > > Stephen Bensman wrote: > > >Speaking as a capitalist pig, the entire RAE system is just another > example > >of socialists hoisting themselves on their own petards. > Point 1 below > >contains the essence of the problem. The US has done > pioneering work > >on the evaluation of research-doctorate programs but was never silly > >enough > to > >allocate research resources on the basis of it. Luckily > because these > >evaluations were usually screwed up in some way. Allocation of > >research resources was always done on a project-by-project > basis by the > >NSF, NIH, and others, with experts in the fields evaluating > individual > >research proposals. The Europeans have a tendency to overplan > >everything with disastrous consequences--the disaster in > Eastern Europe > >just being the latest example of it. > > > >SB > From notsjb at LSU.EDU Wed Apr 5 17:18:06 2006 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Wed, 5 Apr 2006 16:18:06 -0500 Subject: RAE Questions Message-ID: No sweat. 
I say so many bad things that I make it a principle never to skip a chance to say something nice. SB Loet Leydesdorff @LISTSERV.UTK.EDU> on 04/05/2006 10:18:42 AM Please respond to ASIS&T Special Interest Group on Metrics Sent by: ASIS&T Special Interest Group on Metrics To: SIGMETRICS at LISTSERV.UTK.EDU cc: (bcc: Stephen J Bensman/notsjb/LSU) Subject: Re: [SIGMETRICS] RAE Questions Dear Stephen: Thank you so much for your nice words. With best wishes and in friendship, Loet > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Wednesday, April 05, 2006 3:09 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] RAE Questions > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, > The one reason I am so fascinated with your work is that you > deal with the > fundamental problem--proper set definition for the analysis. > This is the > most difficult and, for me, the most subjective, value-laden > part of the analysis. Once people can agree on these, the > rest is proper statistical technique, provided you use > multiple measures that can cross-check each other--expert > ratings, citations, library use, Internet use, etc. It is a > fundmental mistake to use citation analysis by itself, > particularly since it seems that ISI data are dominated by > the citation patterns of the US academic social > stratification system. This may make it invalid for other > areas. It is also necessary to be aware that there is not > just one answer but multiple ones depending on your > objectives, etc. The US NRC ratings were a broad brush > effort to determine the importance of programs in disciplines > as whole and not in specific subsets, which can be crucial. > For example, one specific subset that was not well covered by > the NRC ratings was how to deal with wetlands, river delta > areas, flood control, coastal zone areas, etc. I think that > you would find that perhaps the Netherlands would rank the > highest in this subset, and this is of the utmost interest to > Louisiana now. > > I still think that the questions I raised about the RAEs are > valid ones. > > SB > > > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 04/05/2006 > 01:25:58 AM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] RAE Questions > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Stephen and colleagues, > > There are legitimate uses of these measures like, for > example, learning faculty how to consider their own position > in the literature. This may enable them to improve the > quality and visibility of their contributions, to reorganize > units, etc. The other legitimate use, of course, is our > scholarly communication about how to study these bibliometric > tools and how to use them as variables in a model of how the > sciences (and technologies) develop. > > A number of problems of these measures in policy processes > have now been listed. I want to add one: As long as we are > not able to rank document sets (e.g., journals) clearly, it > remains tricky to make an inference to authors and > institutions. OA will not help solving these problems. 
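Leydesdorff's caution just above, that inferences from document sets such as journals to authors and institutions are only as firm as the underlying journal ranking, can be illustrated with a toy calculation in which two authors trade places under two plausible journal rankings. A minimal Python sketch; the journals, ranks and publication lists are all invented:

# Minimal sketch: an author-level comparison flips depending on which
# journal ranking is used. All journals, ranks and publication lists
# are invented.
rank_by_impact   = {"J. Alpha": 1, "J. Beta": 2, "J. Gamma": 3, "J. Delta": 4}
rank_by_download = {"J. Gamma": 1, "J. Alpha": 2, "J. Delta": 3, "J. Beta": 4}

author_a = ["J. Alpha", "J. Beta", "J. Alpha"]
author_b = ["J. Gamma", "J. Delta", "J. Gamma"]

def mean_rank(pubs, ranking):
    return sum(ranking[j] for j in pubs) / len(pubs)

for ranking, label in [(rank_by_impact, "impact-style ranking"),
                       (rank_by_download, "download-style ranking")]:
    a, b = mean_rank(author_a, ranking), mean_rank(author_b, ranking)
    print(f"{label}: author A = {a:.2f}, author B = {b:.2f} "
          f"-> {'A' if a < b else 'B'} looks stronger")
# The ordering of the two authors reverses with the journal ranking, which
# is the sense in which author- and institution-level inference inherits
# the uncertainty of the journal-level ranking.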
:-) > > With best wishes, > > > Loet > > ________________________________ > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > > Sent: Wednesday, April 05, 2006 3:44 AM > > To: SIGMETRICS at LISTSERV.UTK.EDU > > Subject: Re: [SIGMETRICS] RAE Questions > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > I have given lectures to faculty on the use of citation > analysis for > > purposes of faculty evaluation. I have always prefaced > these lectures > > with the comment that I feel guilty in that I may be > passing out hand > > grenades to kindergarten students. Citation analysis can be > > extraordinarily destructive if misapplied. If you take just one > > department, you can see the problem. Even in a department > covering a > > relatively homogeneous field, you have professors engaged > in different > > specialities of differing size. > > Then you have the problem of differing professional age. > Due to these > > factors you cannot use raw citation counts, but must compare > > professors to an outside set of the same subject specialty and same > > professional age. > > Then you have to standardize the scores for comparative purposes. > > Just defining the subject set can cause horrendous difficulties, as > > certain professors may consider a given professor's subject set > > insignificant in the first place and unworthy of even being > pursued. > > I mean, you should already see the difficulties. I have > always come > > back from these experiences clawed to pieces and seeking a hole in > > which to hide. It is more politics and art form than a > science. The > > trouble with a thing like the NRC ratings is that it works on gross > > parameters and misses certain strengths. For example, LSU is not > > highly rated in history and English, but change the sets to > Southern > > history and Southern literature, and it suddenly comes out > on top. I > > am sure that you can invent even more difficulties. It is good to > > study these things, but it is best to analyze people as individuals > > rather than in aggregate. Use of citation analysis is so > provocative, > > that I have advised the person in charge of serials > cancellations not > > to use impact factor in any way to analyze journals lest he > be killed > > by the faculty and, if he does, to hide the fact that he is > doing so. > > Facutly do not like outsiders with measures they consider > questionable > > sticking their noses in what they consider their business. > > > > SB > > > > > > > > > > > > > > Loet Leydesdorff @listserv.utk.edu> on > > 04/04/2006 04:15:40 PM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at listserv.utk.edu > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] RAE Questions > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Yes, Stephen, I meant your mobility issue. Thanks for the > correction. 
> > My points were also mainly questions in response to the plea for a > > metrics program as a replacement for the RAE (without defending the > > latter in any sense). The idea of a multi-variate regression is > > attractive, but there are some unsolved problems which > Steven Harnad > > thinks that can easily be solved or dismissed. > > > > Best, Loet > > > > ________________________________ > > Loet Leydesdorff > > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal > > 48, 1012 CX Amsterdam. > > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; > > http://www.leydesdorff.net/ > > > > > > > > > -----Original Message----- > > > From: ASIS&T Special Interest Group on Metrics > > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen > J Bensman > > > Sent: Tuesday, April 04, 2006 10:29 PM > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > Subject: Re: [SIGMETRICS] RAE Questions > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > It seems that I have walked into the middle of a discussion > > that I do > > > not understand. I just want to correct one thing. > > > The following comment was made below: > > > > > > "Stephen Bensman also mentioned the instability of these > > skewed curves > > > over time. I would anyhow be worried about the comparisons > > over time > > > because of auto-correlation > > > auto-covariance) effects." > > > > > > I did not state that. I stated that these skewed curves > are highly > > > stable over time with high intertemporal correlations and > the same > > > programs comprising the top stratum for decades. > > > For example, the ten chemistry programs most highly > rated in 1910 > > > were still among the top 15 programs most highly rated in > > 1993. It is > > > interesting to note that Garfield found the same phenomenon > > in respect > > > to journals, which have the same high distributional > stability over > > > time. This is probably due to the cumulative advantage process > > > underlying both phenomena. I suppose it leads to high > > > auto-correlation also. > > > From this perspective RAEs every four years seem somewhat of a > > > redundancy. You might as well give the money to the same > > departments > > > you found at the top in the previous rating without any > > analysis. My > > > main concern was not about the stability of the > > hierarchy--which is a > > > given--but about mobility of individuals within the > > hierarchy. There > > > is nothing more self-destructive than a closed hierarchy. > > > It leads to class war of the worse kind. > > > > > > Really I was only speculating, musing about negative > possibilities > > > that I perceived while reading about the RAEs. > > > I was really raising questions more than answering them. 
> > > > > > Stevan Harnad > > > American Scientist Open Access Forum > > > http://amsci-forum.amsci.org/archives/American-Scientist-Open- > > > Access-Forum.html > > > > > > > > > Chaire de recherche du Canada Professor of > > Cognitive Science > > > Ctr. de neuroscience de la cognition Dpt. Electronics & > > > Computer Science > > > Universit? du Qu?bec ? Montr?al University of Southampton > > > Montr?al, Qu?bec Highfield, Southampton > > > Canada H3C 3P8 SO17 1BJ United Kingdom > > > http://www.crsc.uqam.ca/ > > > http://www.ecs.soton.ac.uk/~harnad/ > > > > > > From loet at LEYDESDORFF.NET Thu Apr 6 01:13:37 2006 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Thu, 6 Apr 2006 07:13:37 +0200 Subject: Future UK RAEs to be Metrics-Based In-Reply-To: Message-ID: If I correctly understand, you wish to say that any ranking of authors or institutions is ultimately dependent on how the sets are defined. A definition of sets on the basis of institutions would thus make the RAE operation circular. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Wednesday, April 05, 2006 11:16 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, > The principle in real estate is "location, location, > location." The principle in program evaluation is "set > definition, set definition, set definition." I pointed out > in another posting that a major discovery of the 1993 NRC > rating was that all previous ratings in the biosciences were > incorrect due to an incorrect method of classification > resulting in non-comparable sets. I am somewhat proud that I > was able to show to the NRC people how a change in > classification method had an enormous impact on the ratings > of LSU, turning us from a nonentity into something quite > respectable and more in line with Louisiana's pioneering role > in medicine through mainly the Ochsner Clinic and the first > attempt at a charity hospital system. > > SB > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 03/31/2006 > 11:57:46 AM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Stephen, > > Although I am politically at the other end of the spectrum, I > fully agree with your critique of the RAE. But the critique > would equally hold for a "metric" that would rate departments > against each other as proposed by some of our colleagues. The > problem is to take departments as units of analysis. 
> > With best wishes, > > > Loet > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stephen J Bensman > > Sent: Thursday, March 30, 2006 10:32 PM > > To: SIGMETRICS at listserv.utk.edu > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Gee, I consider myself anything but a cultural elitist. > > After all, I work at LSU. The basic problem of the RAE is > that it is > > biased against an institution like LSU. At least under the > American > > system, good researchers at a place like LSU have an even chance to > > obtain research funding, and many take advantage of this > system. That > > way a good researcher maintains his independence and advance his > > career. This way LSU plays a major role as a launch pad for up and > > coming scientists. The British RAE always reminded me of > the Tsarist > > system of krugovaia poruka, where all the peasants of a > commune were > > held liable for communal taxes. This was the taxation system of > > serfdom, causing peasants to be chained to the commune, stifling > > individual initiative, thereby causing agricultural stagnation, and > > ultimately a violent revolution. > > If this makes me a cultural elitist, then so be it. > > > > SB > > > > > > > > > > Phil Davis @LISTSERV.UTK.EDU> on 03/30/2006 > > 02:09:28 PM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Stephen, I wouldn't call you a "capitalist pig" but a > willfully blind, > > cultural elitist. In countries where education is wholly > (or mostly) > > funded by the government -- not just the UK and Europe, but > Canada and > > others -- the government is concerned about making sure > that everyone > > gets some modicum of funding. That does not mean a completely > > equitable rationing system, but it ensures a base-level of > funding. > > In the United States, this base-level funding often comes > from one's > > own department or college. Granted, the > capitalist-approach you speak > > of does reward the best and greatest, and this Winner-takes-all > > approach does result in pioneering research, yet it only > rewards the > > few. > > > > --Phil Davis > > > > > > > > Stephen Bensman wrote: > > > > >Speaking as a capitalist pig, the entire RAE system is just another > > example > > >of socialists hoisting themselves on their own petards. > > Point 1 below > > >contains the essence of the problem. The US has done > > pioneering work > > >on the evaluation of research-doctorate programs but was > never silly > > >enough > > to > > >allocate research resources on the basis of it. Luckily > > because these > > >evaluations were usually screwed up in some way. Allocation of > > >research resources was always done on a project-by-project > > basis by the > > >NSF, NIH, and others, with experts in the fields evaluating > > individual > > >research proposals. 
The Europeans have a tendency to overplan > > >everything with disastrous consequences--the disaster in > > Eastern Europe > > >just being the latest example of it. > > > > > >SB > > > From harnad at ECS.SOTON.AC.UK Thu Apr 6 07:27:45 2006 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Thu, 6 Apr 2006 12:27:45 +0100 Subject: Future UK RAEs to be Metrics-Based In-Reply-To: <1E93EA79FECAE149808363138FAE5BE10439F1B4@exchange03.unv.wlv.ac.uk> Message-ID: On Thu, 6 Apr 2006, Zuccala, Alesia wrote: > I recently stumbled across an article online that was published in > the Chronicle of Higher Education - October 14th, 2005 and I think > it bears an interesting relationship to the discussion thread on > metric-based RAEs. I wonder if some of you had seen this article: > http://chronicle.com/free/v52/i08/08a01201.htm and if anyone has any > thoughts about what the Princeton researcher has said: "The impact > factor may be a pox upon the land because of the abuse of that number" > > Open access to research is definitely valuable. I am interested in > all of its developments etc, but I wonder if we should be concerned > about how citation impact factors are going to be used in the future - > should we be wary of potential abuses? Will it become more important > to introduce complementary qualitative evaluations? Richard Monastersky's article in CHE was responded to in this Forum on Oct. 10 2005. The response is reproduced below. (To minimize redundancy and non-sequiturs, I urge AmSci posters to do a google search on "amsci" plus the keywords of their proposed posting to check whether the topic has already been treated.) The answer to Alesia Zuccala's query is: No, the whole point of scrapping RAE's wasteful "complementary qualitative [re]-evaluation" in favour of metrics -- which are already so highly correlated with, hence predictive of the RAE's outcome rankings -- is to stop wasting time and money that could have been spent on research itself. Published articles have already been "qualitatively evaluated," and that process is called *peer review.* There is no sense in repeating, with a local, inexpert UK panel, what has already been done by each individual journal by purpose-picked, qualified experts. Journals vary in quality; so weight them by their known track-record (of which their impact factor is but one partial correlate) and "complement" them with other metrics, such as the article's and author's own exact citation counts (and download counts, and recursive CiteRank weights, and fan-on/fan-out co-citation weights, and hub/authority counts, and endogamy/exogamy weights, and profile uniqueness scores, and latency/longevity scores, and latent semantic co-text weights, etc. etc. -- the full, rich panoply of present and future metrics afforded by a digital, online, full-text, open-access research database), always comparing like with like, and customizing the metric equation to the field. Abuses will be ever more detectable and deterrable by digital detection algorithms and naming-and-shaming in an open digital corpus. Move forward, not backward. A-priori Peer Review remains the most heavily weighted and important determinant of research quality, but open-access-based metrics provide a rich and diverse coterie of a-posteriori complements to Peer Review -- all part of the collective, cumulative and self-corrective nature of learned inquiry. 
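The point above, that a journal's impact factor is only a partial correlate of an article's quality and should be complemented by the article's own citation count, can be shown with a toy calculation. A minimal Python sketch, assuming an impact-factor-style average over a two-year publication window and using invented citation counts:

# Minimal sketch: a journal-level average (impact-factor style) versus the
# article-level citation counts it summarizes. All counts are invented.

# citations received this year by the articles the journal published in
# the two preceding years
article_citations = {"art1": 120, "art2": 3, "art3": 0, "art4": 7, "art5": 10}

journal_average = sum(article_citations.values()) / len(article_citations)
print(f"journal-level average: {journal_average:.1f}")

for art, c in sorted(article_citations.items()):
    relation = "above" if c > journal_average else "at or below"
    print(f"{art}: {c} citations ({relation} the journal average)")
# The average (28.0 here) is dominated by one highly cited paper; most
# articles sit far below it, which is why the article's own count is the
# more direct measure of its usefulness.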
---- "Chronicle of Higher Education Impact Factor Article" (Oct 2005) http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/4848.html Comment on: Richard Monastersky, The Number That's Devouring Science, Chronicle of Higher Education, October 1, 2005 http://chronicle.com/weekly/v52/i08/08a01201.htm [text appended at the end of the comment] Impact Analysis in the PostGutenberg Era Although Richard Monasterky describes a real problem -- the abuse of journal impact factors -- its solution is so obvious one hardly required so many words on the subject: A journal's citation impact factor (CIF) is the average number of citations received by articles in that journal (ISI -- somewhat arbitrarily -- calculates CIFs on the basis of the preceding two years, although other time-windows may also be informative; see http://citebase.eprints.org/analysis/correlation.php ) There is an undeniable relationship between the usefulness of an article and how many other articles use and hence cite it. Hence CIF does measure the average usefulness of the articles in a journal. But there are three problems with the way CIF itself is used, each of them readily correctable: (1) A measure of the average usefulness of the articles in the journal in which a given article appears is no substitute for the actual usefulness of each article itself: In other words, the journal CIF is merely a crude and indirect measure of usefulness; each article's own citation count is the far more direct and accurate measure. (Using the CIF instead of an article's own citation count [or the average citation count for the author] for evaluation and comparison is like using the average marks for the school from which a candidate graduated, rather than the actual marks of the candidate.) (2) Whether comparing CIFs or direct article/author citation counts, one must always compare like with like. There is no point comparing either CIFs between journals in different fields, or citation counts for articles/authors in different fields. (No one has yet bothered to develop a normalised citation count, adjusting for different baseline citation levels and variability in different fields. It could easily be done, but it has not been -- or if it has been done, it was in an obscure scholarly article, but not applied by the actual daily users of CIFs or citation counts today.) (3) Both CIFs and citation counts can be distorted and abused. Authors can self-cite, or cite their friends; some journal editors can and do encourage self-citing their journal. These malpractices are deplorable, but most are also detectable, and then name-and-shame-able and correctable. ISI could do a better job policing them, but soon the playing field will widen, for as authors make their articles open access online, other harvesters -- such as citebase and citeseer and even google scholar -- will be able to harvest and calculate citation counts, and average, compare, expose, enrich and correct them in powerful ways that were in the inconceivable in the Gutenberg era:. 
http://citebase.eprints.org/ http://citebase.eprints.org/ http://scholar.google.com/ So, yes, CIFs are being misused and abused currently, but the cure is already obvious -- and wealth of powerful new resources are on the way for measuring and analyzing research usage and impact online, including (1) download counts, (2) co-citation counts (co-cited with, co-cited by), (3) hub/authority ranks (authorities are highly cited papers cited by many highly cited papers; hubs cite many authorities), (4) download/citation correlations and other time-series analyses, (5) download growth-curve and peak latency scores, (6) citation growth-curve and peak-latency scores, (7) download/citation longevity scores, (8) co-text analysis (comparing similar texts, extrapolating directional trends), and much more. It will no longer be just CIFs and citation counts but a rich multiple regression equation, with many weighted predictor variables based on these new measures. And they will be available both for navigators and evaluators online, and based not just on the current ISI database but on all of the peer-reviewed research literature. Meanwhile, use the direct citation counts, not the CIFs. Some self-citations follow (and then the CHE article's text): Brody, T. (2003) Citebase Search: Autonomous Citation Database for e-print Archives, sinn03 Conference on Worldwide Coherent Workforce, Satisfied Users - New Services For Scientific Information, Oldenburg, Germany, September 2003 http://eprints.ecs.soton.ac.uk/10677/ Brody, T. (2004) Citation Analysis in the Open Access World Interactive Media International http://eprints.ecs.soton.ac.uk/10000/ Brody, T. , Harnad, S. and Carr, L. (2005) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Association for Information Science and Technology (JASIST, in press). http://eprints.ecs.soton.ac.uk/10713/ Hajjem, C., Gingras, Y., Brody, T., Carr, L. & Harnad, S. (2005) Across Disciplines, Open Access Increases Citation Impact. (manuscript in preparation). http://www.ecs.soton.ac.uk/~harnad/Temp/chawki1.doc Hajjem, C. (2005) Analyse de la variation de pourcentages d'articles en acc?s libre en fonction de taux de citations http://www.crsc.uqam.ca/lab/chawki/ch.htm Harnad, S. and Brody, T. (2004a) Comparing the Impact of Open Access (OA) vs. Non-OA Articles in the Same Journals. D-Lib Magazine, Vol. 10 No. 6 http://eprints.ecs.soton.ac.uk/10207/ Harnad, S. and Brody, T. (2004) Prior evidence that downloads predict citations. British Medical Journal online. http://eprints.ecs.soton.ac.uk/10206/ Harnad, S. and Carr, L. (2000) Integrating, Navigating and Analyzing Eprint Archives Through Open Citation Linking (the OpCit Project). Current Science 79(5):pp. 629-638. http://eprints.ecs.soton.ac.uk/5940/ Harnad, S. , Brody, T. , Vallieres, F. , Carr, L. , Hitchcock, S. , Gingras, Y. , Oppenheim, C. , Stamerjohanns, H. and Hilf, E. (2004) The Access/Impact Problem and the Green and Gold Roads to Open Access. Serials Review, Vol. 30, No. 4, 310-314 http://eprints.ecs.soton.ac.uk/10209/ Hitchcock, S. , Brody, T. , Gutteridge, C. , Carr, L. , Hall, W. , Harnad, S. , Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10). http://eprints.ecs.soton.ac.uk/7717/ Hitchcock, S. , Carr, L. , Jiao, Z. , Bergmark, D. , Hall, W. , Lagoze, C. and Harnad, S. (2000) Developing services for open eprint archives: globalisation, integration and the impact of links. 
In Proceedings of the 5th ACM Conference on Digital Libraries, San Antonio, Texas, June 2000. , pages pp. 143-151. http://eprints.ecs.soton.ac.uk/2860/ Hitchcock, S. , Woukeu, A. , Brody, T. , Carr, L. , Hall, W. and Harnad, S. (2003) Evaluating Citebase, an open access Web-based citation-ranked search and impact discovery service. Technical Report ECSTR-IAM03-005, School of Electronics and Computer Science, University of Southampton http://eprints.ecs.soton.ac.uk/8204/ AMERICAN SCIENTIST OPEN ACCESS FORUM: A complete Hypermail archive of the ongoing discussion of providing open access to the peer-reviewed research literature online (1998-2005) is available at: http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/ To join or leave the Forum or change your subscription address: http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html Post discussion to: american-scientist-open-access-forum at amsci.org UNIVERSITIES: If you have adopted or plan to adopt an institutional policy of providing Open Access to your own research article output, please describe your policy at: http://www.eprints.org/signup/sign.php UNIFIED DUAL OPEN-ACCESS-PROVISION POLICY: BOAI-1 ("green"): Publish your article in a suitable toll-access journal http://romeo.eprints.org/ OR BOAI-2 ("gold"): Publish your article in a open-access journal if/when a suitable one exists. http://www.doaj.org/ AND in BOTH cases self-archive a supplementary version of your article in your institutional repository. http://www.eprints.org/self-faq/ http://archives.eprints.org/ http://openaccess.eprints.org/ From notsjb at LSU.EDU Thu Apr 6 10:08:34 2006 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Thu, 6 Apr 2006 09:08:34 -0500 Subject: Future UK RAEs to be Metrics-Based Message-ID: Loet, We go back to the frequency theory of probability, which was best set forth by Richard Von Mises in his Wahr, Wahrheit, und Statistik (Truth, Probability, and Statistics). Von Mises states that no probability can calculated until a proper set--or "kollektiv" in his language--has been defined. Karl Pearson operated within the frequency theory, and he stated in his Grammar of Science that classification is the basis of all science, and he dismisses any study that is not based on classification as not science. So if you are going to rate programs, then you are going to have to establish precisely what it is you are going to rate and then select your data and measures accordingly. Take history for an example. You can rate history as a whole or select a historical specialty as southern history. Your selection of variables and bibliometric data will vary according to your purpose. By this very fact you are in a sense predetermining the outcome of who is going to come to the top. Now I know that the NRC was after history as a whole from a close up analysis of what they did. For bibliometric data they used the entire SSCI. Moreover, I know which professors at LSU were selected for the ratings. LSU has its strongest faculty in Southern history but these were not selected. Instead the professors in Russian and Chinese were selected for the ratings. This seems illogical, but then it has to be remembered that LSU has one of the few programs big enough to hate specialists in Russian and Chinese history, and these were needed for an adequate rating sample. Now I do not understand exactly how the Brits go about their RAEs. From what I understand, a prmgram has to volunteer to be rated, or otherwise it is not rated. 
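Bensman's "set definition" argument, that the same departments come out very differently depending on whether one rates history as a whole or a specialty such as Southern history, is easy to reproduce with toy data. A minimal Python sketch in which the departments, specialties and citation counts are all invented:

# Minimal sketch: the ranking of the same two departments changes with the
# set definition (whole discipline vs. a specialty subset). All invented.
papers = [
    # (department, specialty, citations)
    ("Dept X", "southern", 30), ("Dept X", "southern", 25), ("Dept X", "russian",  2),
    ("Dept Y", "russian",  60), ("Dept Y", "chinese",  45), ("Dept Y", "southern", 1),
]

def ranking(paper_set):
    totals = {}
    for dept, _, cites in paper_set:
        totals[dept] = totals.get(dept, 0) + cites
    return sorted(totals, key=totals.get, reverse=True)

print("discipline as a whole :", ranking(papers))
print("specialty subset only :", ranking([p for p in papers if p[1] == "southern"]))
# Dept Y leads when the set is the discipline as a whole; Dept X leads once
# the set is restricted to the specialty. The outcome is largely decided by
# how the set is defined before any measurement is made.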
So the sample seems to be self-selecting. The programs then submit publications to a committee, which uses them as a basis for ratings. But these programs may have different strengths and agendas, and this should affect the ratings. But do the ratings have the purpose of selecting some research areas as more important than other research areas? This I do not know, and this would definitely affect the ratings. Is this what you mean by "circular." What I would mean by "circular" is that, due to the stability of distributions, the same programs would be selected again and again for funding, reinforcing the hierarchy, and blocking either lesser programs or better faculty at the lesser programs from advancing their agendas. Did you make any sense out of all of this confusion? SB Loet Leydesdorff @LISTSERV.UTK.EDU> on 04/06/2006 12:13:37 AM Please respond to ASIS&T Special Interest Group on Metrics Sent by: ASIS&T Special Interest Group on Metrics To: SIGMETRICS at LISTSERV.UTK.EDU cc: (bcc: Stephen J Bensman/notsjb/LSU) Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based If I correctly understand, you wish to say that any ranking of authors or institutions is ultimately dependent on how the sets are defined. A definition of sets on the basis of institutions would thus make the RAE operation circular. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Wednesday, April 05, 2006 11:16 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, > The principle in real estate is "location, location, > location." The principle in program evaluation is "set > definition, set definition, set definition." I pointed out > in another posting that a major discovery of the 1993 NRC > rating was that all previous ratings in the biosciences were > incorrect due to an incorrect method of classification > resulting in non-comparable sets. I am somewhat proud that I > was able to show to the NRC people how a change in > classification method had an enormous impact on the ratings > of LSU, turning us from a nonentity into something quite > respectable and more in line with Louisiana's pioneering role > in medicine through mainly the Ochsner Clinic and the first > attempt at a charity hospital system. > > SB > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 03/31/2006 > 11:57:46 AM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Stephen, > > Although I am politically at the other end of the spectrum, I > fully agree with your critique of the RAE. But the critique > would equally hold for a "metric" that would rate departments > against each other as proposed by some of our colleagues. 
The > problem is to take departments as units of analysis. > > With best wishes, > > > Loet > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stephen J Bensman > > Sent: Thursday, March 30, 2006 10:32 PM > > To: SIGMETRICS at listserv.utk.edu > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Gee, I consider myself anything but a cultural elitist. > > After all, I work at LSU. The basic problem of the RAE is > that it is > > biased against an institution like LSU. At least under the > American > > system, good researchers at a place like LSU have an even chance to > > obtain research funding, and many take advantage of this > system. That > > way a good researcher maintains his independence and advance his > > career. This way LSU plays a major role as a launch pad for up and > > coming scientists. The British RAE always reminded me of > the Tsarist > > system of krugovaia poruka, where all the peasants of a > commune were > > held liable for communal taxes. This was the taxation system of > > serfdom, causing peasants to be chained to the commune, stifling > > individual initiative, thereby causing agricultural stagnation, and > > ultimately a violent revolution. > > If this makes me a cultural elitist, then so be it. > > > > SB > > > > > > > > > > Phil Davis @LISTSERV.UTK.EDU> on 03/30/2006 > > 02:09:28 PM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Stephen, I wouldn't call you a "capitalist pig" but a > willfully blind, > > cultural elitist. In countries where education is wholly > (or mostly) > > funded by the government -- not just the UK and Europe, but > Canada and > > others -- the government is concerned about making sure > that everyone > > gets some modicum of funding. That does not mean a completely > > equitable rationing system, but it ensures a base-level of > funding. > > In the United States, this base-level funding often comes > from one's > > own department or college. Granted, the > capitalist-approach you speak > > of does reward the best and greatest, and this Winner-takes-all > > approach does result in pioneering research, yet it only > rewards the > > few. > > > > --Phil Davis > > > > > > > > Stephen Bensman wrote: > > > > >Speaking as a capitalist pig, the entire RAE system is just another > > example > > >of socialists hoisting themselves on their own petards. > > Point 1 below > > >contains the essence of the problem. The US has done > > pioneering work > > >on the evaluation of research-doctorate programs but was > never silly > > >enough > > to > > >allocate research resources on the basis of it. Luckily > > because these > > >evaluations were usually screwed up in some way. Allocation of > > >research resources was always done on a project-by-project > > basis by the > > >NSF, NIH, and others, with experts in the fields evaluating > > individual > > >research proposals. 
The Europeans have a tendency to overplan > > >everything with disastrous consequences--the disaster in > > Eastern Europe > > >just being the latest example of it. > > > > > >SB > > > From garfield at CODEX.CIS.UPENN.EDU Thu Apr 6 16:56:18 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 6 Apr 2006 16:56:18 -0400 Subject: Aksnes D."Citation rates and perceptions of scientific contribution " JASIST 57 (2): 169-185 JAN 15 2006 Message-ID: Dag W. Aksnes : E-mail Addresses: Dag.W.Aksnes at nifustep.no Title: Citation rates and perceptions of scientific contribution Author(s): Aksnes DW Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 57 (2): 169-185 JAN 15 2006 Document Type: Article Language: English Cited References: 47 Times Cited: 0 Abstract: In this study scientists were asked about their own publication history and their citation counts. The study shows that the citation counts of the publications correspond reasonably well with the authors' own assessments of scientific contribution. Generally, citations proved to have the highest accuracy in identifying either major or minor contributions. Nevertheless, according to these judgments, citations are not a reliable indicator of scientific contribution at the level of the individual article. In the construction of relative citation indicators, the average citation rate of the subfield appears to be slightly more appropriate as a reference standard than the journal citation rate. The study confirms that review articles are cited more frequently than other publication types. Compared to the significance authors attach to these articles they appear to be considerably "overcited." However, there were only marginal differences in the citation rates between empirical, methods, and theoretical contributions. Addresses: Aksnes DW (reprint author), Norwegian Inst Studies Res & Higher Educ, NIFU, STEP, Wergelandsv 7, Oslo, N-0167 Norway Norwegian Inst Studies Res & Higher Educ, NIFU, STEP, Oslo, N-0167 Norway E-mail Addresses: Dag.W.Aksnes at nifustep.no Publisher: JOHN WILEY & SONS INC, 111 RIVER ST, HOBOKEN, NJ 07030 USA Subject Category: COMPUTER SCIENCE, INFORMATION SYSTEMS; INFORMATION SCIENCE & LIBRARY SCIENCE IDS Number: 000VM ISSN: 1532-2882 REFERENCES : *EUR COMM KEY FIG 2001 : 2001 ABT HA Do important papers produce high citation counts? 
SCIENTOMETRICS 48 : 65 2000 AKSNES DW Peer reviews and bibliometric indicators: a comparative study at a Norwegian university RESEARCH EVALUATION 13 : 33 2004 AKSNES DW Characteristics of highly cited papers RESEARCH EVALUATION 12 : 159 2003 AKSNES DW The effect of highly cited papers on national citation indicators SCIENTOMETRICS 59 : 213 2004 COLE JR SOCIAL STRATIFICATIO : 1973 COLE S WEB KNOWLEDGE FESTSC : 2000 CRONIN B CITATION-BASED AUDITING OF ACADEMIC-PERFORMANCE JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 45 : 61 1994 CRONIN B Rates of return to citation JOURNAL OF DOCUMENTATION 52 : 188 1996 GARFIELD E Dispelling a few common myths about journal citation impacts SCIENTIST 11 : 11 1997 GARFIELD E IS CITATION ANALYSIS A LEGITIMATE EVALUATION TOOL SCIENTOMETRICS 1 : 359 1979 GARFIELD E OF NOBEL CLASS - A CITATION PERSPECTIVE ON HIGH-IMPACT RESEARCH AUTHORS THEORETICAL MEDICINE 13 : 117 1992 GILBERT GN REFERENCING AS PERSUASION SOCIAL STUDIES OF SCIENCE 7 : 113 1977 GILLMOR CS CITATION CHARACTERISTICS OF JATP LITERATURE JOURNAL OF ATMOSPHERIC AND TERRESTRIAL PHYSICS 37 : 1401 1975 GOTTFREDSON SD EVALUATING PSYCHOLOGICAL-RESEARCH REPORTS - DIMENSIONS, RELIABILITY, AND CORRELATES OF QUALITY JUDGMENTS AMERICAN PSYCHOLOGIST 33 : 920 1978 LAWANI SM VALIDITY OF CITATION CRITERIA FOR ASSESSING THE INFLUENCE OF SCIENTIFIC PUBLICATIONS - NEW EVIDENCE WITH PEER ASSESSMENT JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 34 : 59 1983 LOWRY OH PROTEIN MEASUREMENT WITH THE FOLIN PHENOL REAGENT JOURNAL OF BIOLOGICAL CHEMISTRY 193 : 265 1951 LUUKKONEN T RES EVALUAT 1 : 21 1991 MACROBERTS M SCIENTOMETRICS 36 : 3 1996 MACROBERTS MH PROBLEMS OF CITATION ANALYSIS - A CRITICAL-REVIEW JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 40 : 342 1989 MARTIN BR ASSESSING BASIC RESEARCH - SOME PARTIAL INDICATORS OF SCIENTIFIC PROGRESS IN RADIO ASTRONOMY RESEARCH POLICY 12 : 61 1983 MOED HF IMPROVING THE ACCURACY OF INSTITUTE FOR SCIENTIFIC INFORMATIONS JOURNAL IMPACT FACTORS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 46 : 461 1995 MOED HF A critical analysis of the journal impact factors of Angewandte Chemie and the Journal of the American Chemical Society - Inaccuracies in published impact factors based on overall citations only SCIENTOMETRICS 37 : 105 1996 MORAVCSIK MJ CITATION CONTEXT CLASSIFICATION OF A CITATION CLASSIC CONCERNING CITATION CONTEXT CLASSIFICATION SOCIAL STUDIES OF SCIENCE 18 : 515 1988 OPPENHEIM C HIGHLY CITED OLD PAPERS AND REASONS WHY THEY CONTINUE TO BE CITED JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 29 : 225 1978 OPPENHEIM C The correlation between citation counts and the 1992 research assessment exercise ratings for British research in genetics, anatomy and archaeology JOURNAL OF DOCUMENTATION 53 : 477 1997 PERITZ BC ARE METHODOLOGICAL PAPERS MORE CITED THAN THEORETICAL OR EMPIRICAL ONES - THE CASE OF SOCIOLOGY SCIENTOMETRICS 5 : 211 1983 PETERS HPF ON DETERMINANTS OF CITATION SCORES - A CASE-STUDY IN CHEMICAL-ENGINEERING JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 45 : 39 1994 PLOMP R THE HIGHLY CITED PAPERS OF PROFESSORS AS AN INDICATOR OF A RESEARCH GROUPS SCIENTIFIC PERFORMANCE SCIENTOMETRICS 29 : 377 1994 PORTER AL CITATIONS AND SCIENTIFIC PROGRESS - COMPARING BIBLIOMETRIC MEASURES WITH SCIENTIST JUDGMENTS SCIENTOMETRICS 13 : 103 1988 RINIA EJ Comparative analysis of a set of bibliometric indicators and central peer review criteria - Evaluation of condensed matter physics in the Netherlands RESEARCH POLICY 27 : 95 
1998 SCHUBERT A HDB QUANTITATIVE STU : 1988 SCHUBERT A RELATIVE INDICATORS AND RELATIONAL CHARTS FOR COMPARATIVE-ASSESSMENT OF PUBLICATION OUTPUT AND CITATION IMPACT SCIENTOMETRICS 9 : 281 1986 SEGLEN PO Citations and journal impact factors: questionable indicators of research quality ALLERGY 52 : 1050 1997 SEGLEN PO THE SKEWNESS OF SCIENCE JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 43 : 628 1992 SMALL HG CITED DOCUMENTS AS CONCEPT SYMBOLS SOCIAL STUDIES OF SCIENCE 8 : 327 1978 THOMSON ISI NATL CITATION REPORT : 2003 THOMSON ISI NATL SCI INDICATORS : 2003 TIJSSEN RJW Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference? SCIENTOMETRICS 54 : 381 2002 VANLEEUWEN TN 9609 CWTS : 1996 VANRAAN AFJ WEB KNOWLEDGE FESTSC : 301 2000 VINKLER P Relations of relative scientometric impact indicators. The relative publication strategy index SCIENTOMETRICS 40 : 163 1997 VINKLER P EVALUATION OF SOME METHODS FOR THE RELATIVE ASSESSMENT OF SCIENTIFIC PUBLICATIONS SCIENTOMETRICS 10 : 157 1986 VIRGO JA STATISTICAL PROCEDURE FOR EVALUATING IMPORTANCE OF SCIENTIFIC PAPERS LIBRARY QUARTERLY 47 : 415 1977 WEINBERG AM CRITERIA FOR SCIENTIFIC CHOICE MINERVA 1 : 159 1963 WEINGART P Impact of bibliometrics upon the science system: Inadvertent consequences? SCIENTOMETRICS 62 : 117 2005 WOUTERS P THESIS U AMSTERDAM A : 1999 From garfield at CODEX.CIS.UPENN.EDU Thu Apr 6 17:17:44 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 6 Apr 2006 17:17:44 -0400 Subject: Klavans R, Boyack KW "Identifying a better measure of relatedness for mapping science " JASIST 57 (2): 251-263 JAN 15 2006 Message-ID: E-mail Addresses: Richard Klavans : rklavans at mapofscience.com Kevin W. Boyack : kboyack at sandia.gov Title: Identifying a better measure of relatedness for mapping science Author(s): Klavans R, Boyack KW Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 57 (2): 251-263 JAN 15 2006 Document Type: Article Language: English Cited References: 41 Times Cited: 0 Abstract: Measuring the relatedness between bibliometric units (journals, documents, authors, or words) is a central task in bibliometric analysis. Relatedness measures are used for many different tasks, among them the generating of maps, or visual pictures, showing the relationship between all items from these data. Despite the importance of these tasks, there has been little written on how to quantitatively evaluate the accuracy of relatedness measures or the resulting maps. The authors propose a new framework for assessing the performance of relatedness measures and visualization algorithms that contains four factors: accuracy, coverage, scalability, and robustness. This method was applied to 10 measures of journal-journal relatedness to determine the best measure. The 10 relatedness measures were then used as inputs to a visualization algorithm to create an additional 10 measures of journal-journal relatedness based on the distances between pairs of journals in two-dimensional space. This second step determines robustness (i.e., which measure remains best after dimension reduction). Results show that, for low coverage (under 50%) the Pearson correlation is the most accurate raw relatedness measure. However, the best overall measure, both at high coverage, and after dimension reduction, is the cosine index or a modified cosine index. Results also showed that the visualization algorithm increased local accuracy for most measures. 
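The two relatedness measures singled out in the abstract above, the Pearson correlation and the cosine index between journal citation vectors, differ essentially in whether the vectors are mean-centred before taking the angle between them. A minimal Python sketch with two invented journal citation vectors:

# Minimal sketch: Pearson vs. cosine relatedness between two journals'
# citation vectors (citations given to the same list of target journals).
# The vectors are invented.
import numpy as np

j1 = np.array([120, 30, 0, 5, 60], dtype=float)   # journal 1's citations to targets A..E
j2 = np.array([100, 25, 2, 0, 80], dtype=float)   # journal 2's citations to the same targets

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

pearson = np.corrcoef(j1, j2)[0, 1]   # equals the cosine of the mean-centred vectors
print(f"cosine index:        {cosine(j1, j2):.3f}")
print(f"Pearson correlation: {pearson:.3f}")
# Which of the two is the more accurate relatedness measure is, per the
# abstract, an empirical question that depends on coverage.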
Possible reasons for this counterintuitive finding are discussed. Addresses: Klavans R (reprint author), SciTech Strategies Inc, 2405 White Horse Rd, Berwyn, PA 19312 USA SciTech Strategies Inc, Berwyn, PA 19312 USA Sandia Natl Labs, Albuquerque, NM 87185 USA E-mail Addresses: rklavans at mapofscience.com, kboyack at sandia.gov Publisher: JOHN WILEY & SONS INC, 111 RIVER ST, HOBOKEN, NJ 07030 USA Subject Category: COMPUTER SCIENCE, INFORMATION SYSTEMS; INFORMATION SCIENCE & LIBRARY SCIENCE IDS Number: 000VM ISSN: 1532-2882 CITED REFERENCES : THOMPS ISI SCI CIT IND EXP : 2001 *THOMPS ISI SOC SCI CIT IND : 2001 BASSECOULARD E Indicators in a research institute: A multi-level classification of scientific journals SCIENTOMETRICS 44 : 323 1999 BATAGELJ V CONNECTIONS 21 : 47 1998 BOYACK KW Domain visualization using VxInsight (R) for science and technology management JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 53 : 764 2002 CHEN C MAPPING SCI FRONTIER : 2003 CHEN CM Visualizing and tracking the growth of competing paradigms: Two case studies JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 53 : 678 2002 CHEN GT Constrained Steiner trees in Halin graphs RAIRO-OPERATIONS RESEARCH 37 : 179 2003 DAVIDSON GS P IEEE INF VIS 2001 : 23 2001 DECHAZAL P title not available P ANN INT IEEE E 1-6 20 : 1422 1998 DING Y Journal as markers of intellectual space: Journal co-citation analysis of information Retrieval area, 1987-1997 SCIENTOMETRICS 47 : 55 2000 FILLIATREAU G BIBLIO ANAL RES GENO : 2003 GMUR M Co-citation analysis and the search for invisible colleges: A methodological evaluation SCIENTOMETRICS 57 : 27 2003 JONES WP PICTURES OF RELEVANCE - A GEOMETRIC ANALYSIS OF SIMILARITY MEASURES JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 38 : 420 1987 KESSLER MM BIBLIOGRAPHIC COUPLING BETWEEN SCIENTIFIC PAPERS AMERICAN DOCUMENTATION 14 : 10 1963 KIM SK A gene expression map for Caenorhabditis elegans SCIENCE 293 : 2087 2001 KOHONEN T Self organization of a massive document collection IEEE TRANSACTIONS ON NEURAL NETWORKS 11 : 574 2000 KOHONEN T SELF ORG MAPS : 1995 LEYDESDORFF L INFORMETRICS 87 : 105 1988 LEYDESDORFF L Clusters and maps of science journals based on bi-connected graphs in Journal Citation Reports JOURNAL OF DOCUMENTATION 60 : 371 2004 LEYDESDORFF L Top-down decomposition of the Journal Citation Report of the Social Science Citation Index: Graph- and factor-analytical approaches SCIENTOMETRICS 60 : 159 2004 LEYDESDORFF L Indicators of structural change in the dynamics of science: Entropy statistics of the SCI Journal Citation Reports SCIENTOMETRICS 53 : 131 2002 MCCAIN KW MAPPING ECONOMICS THROUGH THE JOURNAL LITERATURE - AN EXPERIMENT IN JOURNAL COCITATION ANALYSIS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 42 : 290 1991 MCCAIN KW COCITED AUTHOR MAPPING AS A VALID REPRESENTATION OF INTELLECTUAL STRUCTURE JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 37 : 111 1986 MCCAIN KW CORE JOURNAL NETWORKS AND COCITATION MAPS IN THE MARINE SCIENCES - TOOLS FOR INFORMATION MANAGEMENT IN INTERDISCIPLINARY RESEARCH PROCEEDINGS OF THE ASIS ANNUAL MEETING 29 : 3 1992 MCCAIN KW Neural networks research in context: A longitudinal journal cogitation analysis of an emerging interdisciplinary field SCIENTOMETRICS 41 : 389 1998 MCGILL M EVALUATION FACTORS A : 1979 MORILLO F Interdisciplinarity in science: A tentative typology of disciplines and research areas JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 54 : 
1237 2003 MORRIS TA The structure of medical informatics journal literature JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION 5 : 448 1998 PERRY CA Scholarly communication in developmental dyslexia: Influence of network structure on change in a hybrid problem area JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 49 : 151 1998 PUDOVKIN AI Algorithmic procedure for finding semantically related journals JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 53 : 1113 2002 PUDOVKIN AI INDEXES OF JOURNAL CITATION RELATEDNESS AND CITATION RELATIONSHIPS AMONG AQUATIC BIOLOGY JOURNALS SCIENTOMETRICS 32 : 227 1995 SCHWECHHEIMER H Mapping interdisciplinary research fronts in neuroscience: A bibliometric view to retrograde amnesia SCIENTOMETRICS 51 : 311 2001 SMALL H Visualizing science by citation mapping JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 50 : 799 1999 SMALL H Update on science mapping: Creating large document spaces SCIENTOMETRICS 38 : 275 1997 SMALL H CLUSTERING THE SCIENCE CITATION INDEX USING CO-CITATIONS .2. MAPPING SCIENCE SCIENTOMETRICS 8 : 321 1985 TIJSSEN RJW ON GENERALIZING SCIENTOMETRIC JOURNAL MAPPING BEYOND ISIS JOURNAL AND CITATION DATABASES SCIENTOMETRICS 33 : 93 1995 TIJSSEN RJW A SCIENTOMETRIC COGNITIVE STUDY OF NEURAL-NETWORK RESEARCH - EXPERT MENTAL MAPS VERSUS BIBLIOMETRIC MAPS SCIENTOMETRICS 28 : 111 1993 TSAY MY Journal co-citation analysis of semiconductor literature SCIENTOMETRICS 57 : 7 2003 WHITE HD Author cocitation analysis and Pearson's r JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 54 : 1250 2003 WHITE HD Visualizing a discipline: An author co-citation analysis of information science, 1972-1995 JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 49 : 327 1998 From garfield at CODEX.CIS.UPENN.EDU Thu Apr 6 17:33:42 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 6 Apr 2006 17:33:42 -0400 Subject: Leydesdorff L, "Can scientific journals be classified in terms of aggregated journal-journal citation relations using the Journal Citation Reports? " JASIST 57 (5): 601-613 MAR 2006 Message-ID: Loet Leydesdorff : loet at leydesdorff.net Title: Can scientific journals be classified in terms of aggregated journal- journal citation relations using the Journal Citation Reports? Author(s): Leydesdorff T Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 57 (5): 601-613 MAR 2006 Document Type: Article Language: English Cited References: 54 Times Cited: 0 Abstract: The aggregated citation relations among journals included in the Science Citation Index provide us with a huge matrix, which can be analyzed in various ways. By using principal component analysis or factor analysis, the factor scores can be employed as indicators of the position of the cited journals in the citing dimensions of the database. Unrotated factor scores are exact, and the extraction of principal components can be made stepwise because the principal components are independent. Rotation may be needed for the designation, but in the rotated solution a model is assumed. This assumption can be legitimated on pragmatic or theoretical grounds. Because the resulting outcomes remain sensitive to the assumptions in the model, an unambiguous classification is no longer possible in this case. 
However, the factor-analytic solutions allow us to test classifications against the structures contained in the database; in this article the process will be demonstrated for the delineation of a set of biochemistry journals. Addresses: Leydesdorff T (reprint author), Univ Lausanne, Sch Econ, Lausanne, Switzerland Univ Lausanne, Sch Econ, Lausanne, Switzerland Univ Amsterdam, Amsterdam Sch Commun Res, Amsterdam, NL-1018 CE Netherlands E-mail Addresses: loet at leydesdorff.net Publisher: JOHN WILEY & SONS INC, 111 RIVER ST, HOBOKEN, NJ 07030 USA IDS Number: 019WY ISSN: 1532-2882 CITED REFERENCES : BENSMAN SJ IN PRESS J AM SOC IN BENSMAN SJ Pearson's r and author cocitation analysis: A commentary on the controversy JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 55 : 935 2004 BOYACK KW Mapping the backbone of science SCIENTOMETRICS 64 : 351 2005 BURT RS STRUCTURAL THEORY AC : 1982 CRANE D INVISIBLE COLL : 1972 DOREIAN P STRUCTURAL EQUIVALENCE IN A JOURNAL NETWORK JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 36 : 28 1985 DUPRE J DISORDER THINGS META : 1993 GALISON PL DISUNITY SCI BOUNDAR : 1995 GARFIELD E CITATION INDEXING IT : 1979 GARFIELD E SYSTEM FOR AUTOMATIC CLASSIFICATION OF SCIENTIFIC LITERATURE JOURNAL OF THE INDIAN INSTITUTE OF SCIENCE 57 : 61 1975 GARSON GD QUANTITATIVE RES PUB : 2004 GLANZEL W A new classification scheme of science fields and subfields designed for Scientometric evaluation purposes SCIENTOMETRICS 56 : 357 2003 GRANOVETTER MS AM J SOCIOL 78 : 6 1973 HEMPEL CG PHILOS SCI 15 : 135 1948 HULL DL SCI SELECTION ESSAYS : 2001 KANT I KRITIK REINEN VERNUN : 1956 KIM JO FACTOR ANAL STAT MET : 1978 KIM JO STAT PACKAGE SOCIAL : 468 1975 KNASTER B FUND MATH 2 : 206 1921 KOCHEN M SCI SCI 3 : 199 1983 LEYDESDORFF L ANN M SOC SOC STUD S : 2005 LEYDESDORFF L IN PESS J AM SOC INF LEYDESDORFF L Mapping the Chinese Science Citation Database in terms of aggregated journal-journal citation relations JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 56 : 1469 2005 LEYDESDORFF L Dynamic and evolutionary updates of classificatory schemes in scientific journal structures JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 53 : 987 2002 LEYDESDORFF L Clusters and maps of science journals based on bi-connected graphs in Journal Citation Reports JOURNAL OF DOCUMENTATION 60 : 371 2004 LEYDESDORFF L STRUCTURE ACTION CONTINGENCIES AND THE MODEL OF PARALLEL DISTRIBUTED- PROCESSING JOURNAL FOR THE THEORY OF SOCIAL BEHAVIOUR 23 : 47 1993 LEYDESDORFF L TRACKING AREAS OF STRATEGIC IMPORTANCE USING SCIENTOMETRIC JOURNAL MAPPINGS RESEARCH POLICY 23 : 217 1994 LEYDESDORFF L SCI TECHNOLOGY POLIC : 289 1993 LEYDESDORFF L Top-down decomposition of the Journal Citation Report of the Social Science Citation Index: Graph- and factor-analytical approaches SCIENTOMETRICS 60 : 159 2004 LEYDESDORFF L Indicators of structural change in the dynamics of science: Entropy Statistics of the SCI Journal Citation Reports SCIENTOMETRICS 53 : 131 2002 LEYDESDORFF L SCIENTOMETRICS 26 : 133 1993 LEYDESDORFF L SCIENTOMETRICS 11 : 291 1987 LEYDESDORFF L THE DEVELOPMENT OF FRAMES OF REFERENCES SCIENTOMETRICS 9 : 103 1986 MARSHAKOVA IV NAUCHNO TEKHNICHESKA 2 : 3 1973 MOODY J Structural cohesion and embeddedness: A hierarchical concept of social Groups AMERICAN SOCIOLOGICAL REVIEW 68 : 103 2003 NAGEL E STRUCTURE SCI : 1961 NARIN F EVALUATIVE BIBLIOMET : 1976 NARIN F NATIONAL PUBLICATION AND CITATION COMPARISONS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION 
SCIENCE 26 : 80 1975 NARIN F INTERRELATIONSHIPS OF SCIENTIFIC JOURNALS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 23 : 323 1972 NETWON I I NEWTONS MATH PRINC : 1687 NEURATH O WISSENSCHAFTLICHE WE : 1929 NOMA E AN IMPROVED METHOD FOR ANALYZING SQUARE SCIENTOMETRIC TRANSACTION MATRICES SCIENTOMETRICS 4 : 297 1982 OTTE E J INFORMATION SCI 28 : 443 2002 POPPER KR LOGIC SCI DISCOVERY : 1959 PRICE DD THE ANALYSIS OF SQUARE MATRICES OF SCIENTOMETRIC TRANSACTIONS SCIENTOMETRICS 3 : 55 1981 PRICE DJ LITTLE SCI BIG SCI : 1963 PRICE DJD NETWORKS OF SCIENTIFIC PAPERS SCIENCE 149 : 510 1965 PUDOVKIN AI Algorithmic procedure for finding semantically related journals JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 53 : 1113 2002 SMALL H THE GEOGRAPHY OF SCIENCE - DISCIPLINARY AND NATIONAL MAPPINGS JOURNAL OF INFORMATION SCIENCE 11 : 147 1985 SMALL H STRUCTURE OF SCIENTIFIC LITERATURES .1. IDENTIFYING AND GRAPHING SPECIALTIES SCIENCE STUDIES 4 : 17 1974 TIJSSEN R SCIENTOMETRICS 11 : 347 1987 VANDENBESSELAAR P 8 INT C SCI INF ISSI : 2001 VANDENBESSELAAR P Mapping change in scientific specialties: A scientometric reconstruction of The development of artificial intelligence JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 47 : 415 1996 WOUTERS P THESIS U AMSTERDAM A : 1999 From loet at LEYDESDORFF.NET Thu Apr 6 17:34:56 2006 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Thu, 6 Apr 2006 23:34:56 +0200 Subject: Future UK RAEs to be Metrics-Based In-Reply-To: Message-ID: Dear Stephen, Thanks for all this. It seems to me that we have exhaustively discussed the RAE and the problems of replacing it with a metrics. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Thursday, April 06, 2006 4:09 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, > We go back to the frequency theory of probability, which was > best set forth by Richard Von Mises in his Wahr, Wahrheit, > und Statistik (Truth, Probability, and Statistics). Von > Mises states that no probability can calculated until a > proper set--or "kollektiv" in his language--has been defined. > Karl Pearson operated within the frequency theory, and he > stated in his Grammar of Science that classification is the > basis of all science, and he dismisses any study that is not > based on classification as not science. So if you are going > to rate programs, then you are going to have to establish > precisely what it is you are going to rate and then select > your data and measures accordingly. Take history for an > example. You can rate history as a whole or select a > historical specialty as southern history. Your selection of > variables and bibliometric data will vary according to your > purpose. By this very fact you are in a sense predetermining > the outcome of who is going to come to the top. Now I know > that the NRC was after history as a whole from a close up > analysis of what they did. For bibliometric data they used > the entire SSCI. 
Moreover, I know which professors at LSU > were selected for the ratings. LSU has its strongest faculty > in Southern history but these were not selected. Instead the > professors in Russian and Chinese were selected for the > ratings. This seems illogical, but then it has to be > remembered that LSU has one of the few programs big enough to > hate specialists in Russian and Chinese history, and these > were needed for an adequate rating sample. > > Now I do not understand exactly how the Brits go about their > RAEs. From what I understand, a prmgram has to volunteer to > be rated, or otherwise it is not rated. So the sample seems > to be self-selecting. The programs then submit publications > to a committee, which uses them as a basis for ratings. > But these programs may have different strengths and agendas, > and this should affect the ratings. But do the ratings have > the purpose of selecting some research areas as more > important than other research areas? > This I do not know, and this would definitely affect the > ratings. Is this what you mean by "circular." > > What I would mean by "circular" is that, due to the stability > of distributions, the same programs would be selected again > and again for funding, reinforcing the hierarchy, and > blocking either lesser programs or better faculty at the > lesser programs from advancing their agendas. > > Did you make any sense out of all of this confusion? > > SB > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 04/06/2006 > 12:13:37 AM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > If I correctly understand, you wish to say that any ranking > of authors or institutions is ultimately dependent on how the > sets are defined. A definition of sets on the basis of > institutions would thus make the RAE operation circular. > > With best wishes, Loet > > ________________________________ > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > > Sent: Wednesday, April 05, 2006 11:16 PM > > To: SIGMETRICS at LISTSERV.UTK.EDU > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Loet, > > The principle in real estate is "location, location, > location." The > > principle in program evaluation is "set definition, set definition, > > set definition." I pointed out in another posting that a major > > discovery of the 1993 NRC rating was that all previous > ratings in the > > biosciences were incorrect due to an incorrect method of > > classification resulting in non-comparable sets. 
I am > somewhat proud > > that I was able to show to the NRC people how a change in > > classification method had an enormous impact on the ratings of LSU, > > turning us from a nonentity into something quite > respectable and more > > in line with Louisiana's pioneering role in medicine through mainly > > the Ochsner Clinic and the first attempt at a charity > hospital system. > > > > SB > > > > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > > 03/31/2006 > > 11:57:46 AM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Dear Stephen, > > > > Although I am politically at the other end of the spectrum, I fully > > agree with your critique of the RAE. But the critique would equally > > hold for a "metric" that would rate departments against > each other as > > proposed by some of our colleagues. The problem is to take > departments > > as units of analysis. > > > > With best wishes, > > > > > > Loet > > > > > -----Original Message----- > > > From: ASIS&T Special Interest Group on Metrics > > > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stephen > J Bensman > > > Sent: Thursday, March 30, 2006 10:32 PM > > > To: SIGMETRICS at listserv.utk.edu > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > Gee, I consider myself anything but a cultural elitist. > > > After all, I work at LSU. The basic problem of the RAE is > > that it is > > > biased against an institution like LSU. At least under the > > American > > > system, good researchers at a place like LSU have an even > chance to > > > obtain research funding, and many take advantage of this > > system. That > > > way a good researcher maintains his independence and advance his > > > career. This way LSU plays a major role as a launch pad > for up and > > > coming scientists. The British RAE always reminded me of > > the Tsarist > > > system of krugovaia poruka, where all the peasants of a > > commune were > > > held liable for communal taxes. This was the taxation system of > > > serfdom, causing peasants to be chained to the commune, stifling > > > individual initiative, thereby causing agricultural > stagnation, and > > > ultimately a violent revolution. > > > If this makes me a cultural elitist, then so be it. > > > > > > SB > > > > > > > > > > > > > > > Phil Davis @LISTSERV.UTK.EDU> on 03/30/2006 > > > 02:09:28 PM > > > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > Stephen, I wouldn't call you a "capitalist pig" but a > > willfully blind, > > > cultural elitist. 
In countries where education is wholly > > (or mostly) > > > funded by the government -- not just the UK and Europe, but > > Canada and > > > others -- the government is concerned about making sure > > that everyone > > > gets some modicum of funding. That does not mean a completely > > > equitable rationing system, but it ensures a base-level of > > funding. > > > In the United States, this base-level funding often comes > > from one's > > > own department or college. Granted, the > > capitalist-approach you speak > > > of does reward the best and greatest, and this Winner-takes-all > > > approach does result in pioneering research, yet it only > > rewards the > > > few. > > > > > > --Phil Davis > > > > > > > > > > > > Stephen Bensman wrote: > > > > > > >Speaking as a capitalist pig, the entire RAE system is > just another > > > example > > > >of socialists hoisting themselves on their own petards. > > > Point 1 below > > > >contains the essence of the problem. The US has done > > > pioneering work > > > >on the evaluation of research-doctorate programs but was > > never silly > > > >enough > > > to > > > >allocate research resources on the basis of it. Luckily > > > because these > > > >evaluations were usually screwed up in some way. Allocation of > > > >research resources was always done on a project-by-project > > > basis by the > > > >NSF, NIH, and others, with experts in the fields evaluating > > > individual > > > >research proposals. The Europeans have a tendency to overplan > > > >everything with disastrous consequences--the disaster in > > > Eastern Europe > > > >just being the latest example of it. > > > > > > > >SB > > > > > > From harnad at ECS.SOTON.AC.UK Thu Apr 6 17:56:44 2006 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Thu, 6 Apr 2006 22:56:44 +0100 Subject: Future UK RAEs to be Metrics-Based In-Reply-To: Message-ID: The following article is excellent and accurate overall. Cliffe, Rebeca (2006) Research Assessment Exercise: Bowing out in Favour of Metrics. EPS Insights: 3 April 2006 http://technology.guardian.co.uk/weekly/story/0,,1747334,00.html One can hardly quarrel with the following face-valid summary from this article: > "The move to a new metrics based system [for RAE] will no doubt please > those who see a role for institutional repositories in monitoring > research quality. The online environment has thrown up new metrics, > which could be used alongside traditional measures such as citations. > Usage can be measured at the point of consumption -- the number > of "hits" on a particular article can indicate the uptake of the > research. Web usage would be expected to be an early indicator of > how often the article is later cited. Some believe that institutional > repositories should be used as the basis for ongoing assessment of all > UK peer-reviewed research output by mandating that researchers should > place material in repositories. They argue that this would allow > usage to be measured earlier, through downloads of both pre-prints > and post-prints. Of course, this course of action would also advance > the cause of open access by making this research available free." But there are a few points of detail on which this otherwise accurate report could be made even more useful: (1) peer review has already been done for published articles, so the issue is not (i) peer review vs. metrics but (ii) peer review plus metrics vs. peer review plus metrics plus "peer re-review" (by the RAE panels). 
It is the re-review of already peer-reviewed publications that is the wasteful practice that needs to be scrapped, given that peer review has already been done, and that metrics are already highly correlated with the RAE ranking outcome anyway. (2) For the fields in which the current RAE outcome is not already highly correlated with metrics, further work is needed; obviously works other than peer-reviewed articles or books (e.g., artwork, multimedia) will have to be evaluated in other ways, but for science, engineering, biomedicine, social science, and most fields of humanities, books and articles are the form that research output takes, and they will be amenable to the increasingly powerful and diverse forms of metrics that are being devised and tested. (Many will be tested in parallel with the 2008 RAE, which will still be conducted the old, wasteful way; some of the metrics may also be testable retrospectively against prior RAE outcomes.) > "Proponents of a metrics based system point to studies that show > how average citation frequencies of articles can closely predict > the scores given by the RAE for departmental quality, even though > the RAE does not currently count these." True, but the highest metric correlate of the present RAE outcome is reportedly prior research funding (0.98). Yet it would be a big mistake to scrap all other metrics and base the RAE rank on just prior funding. That would just generate a massive Matthew Effect and essentially make top-sliced RAE funding redundant with direct competitive research project funding (thereby essentially "bowing out" of the dual RAE/RCUK funding system altogether, reducing it to just research project funding). What is remarkable about the high correlation between citation counts and RAE ranks (0.7 - 0.9), even though the correlation is not quite as high as with prior funding (0.98), is that citations are not presently counted in the RAE (whereas prior funding is)! Not only are citations a more independent metric of research performance than prior funding, but counting them directly -- along with the many other candidate metrics -- can enrich and diversify the RAE evaluation, rather than just make it into a self-fulfilling prophecy. > "However, metrics tend to work better for the sciences than the > humanities. Whereas the citation of science research is seen as an > indicator of the quality and impact of the research, in the humanities > this is not the case. Humanities research is based around critical > discourse and an author may be citing an article simply to disagree > with its argument." I don't think this is quite accurate. It might be true that humanities research makes less explicit use of citation counts today than science research. It might even be true that the correlation between citation counts and research productivity and importance is lower in the humanities than in the sciences (though I am not aware of studies to that effect). And it may also be true that citation counts in the humanities are less correlated with RAE rankings than they are in the sciences. But the familiar canard about articles being cited, not because they are valid and important, but in order to disagree with them, has too much the flavour of the a-priori dismissiveness of citation analysis that we hear in *all* disciplines from those who have not really investigated it, or the evidence for/against it, but are simply expressing their own personal prejudices on the subject. 
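The kind of correlation check being argued for here is not hard to run once departmental citation counts and RAE grades are tabulated. A minimal sketch, using SciPy's rank correlation on purely hypothetical departmental figures (nothing below is drawn from the RAE or from the studies cited):

```python
# Hypothetical illustration of the correlation checks discussed above;
# none of these numbers come from the RAE or from this posting.
from scipy.stats import spearmanr

# Per-department averages: citations per submitted output, prior research
# funding (arbitrary units), and RAE grade mapped onto an ordinal 1-7 scale.
citations = [2.1, 4.8, 9.5, 3.0, 14.2, 6.7, 1.4, 11.9]
funding   = [0.3, 1.1, 3.2, 0.9,  5.8, 2.0, 0.2,  4.5]
rae_grade = [2,   4,   6,   3,    7,   5,   1,    6]

rho_cit, p_cit = spearmanr(citations, rae_grade)
rho_fund, p_fund = spearmanr(funding, rae_grade)

print(f"citations vs RAE grade: rho = {rho_cit:.2f} (p = {p_cit:.3f})")
print(f"funding   vs RAE grade: rho = {rho_fund:.2f} (p = {p_fund:.3f})")
```

Spearman's rho is used because RAE grades are ordinal; with real data the question is simply whether the citation correlation in a given field approaches the 0.7 - 0.9 range reported for the sciences.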
Let's see the citation counts for humanities articles and books, and their correlation with other performance indicators as well as RAE rankings rather than dismissing them a-priori on the basis of anecdotes. > "Also, an analysis of RAE 2001 submissions revealed that while some > 90% of research outputs listed by British researchers in the fields > of Physics and Chemistry were mapped by ISI data, in Law the figure > was below 10%, according to Ian Diamond of the Economic and Social > Research Council (ESRC) (Oxford Workshop on the use of Metrics in > Research Assessment)." I am not sure what "mapped by ISI data" means, but if it means that ISI does not cover enough of the pertinent journals in Law, then the empirical question is: what are the pertinent journals? can citation counts be derived from their online versions, using the publishers' websites and/or subscribing institutions' online versions? how well does this augmented citation count correlate with the ISI subsample (<10%)? and how well do both correlate with RAE ranking? (Surely ISI coverage should not be the determinant of whether or not a metric is valid.) > "Ultimately, a combination of qualitative and quantitative indicators > would seem to be the best approach." What is a "qualitative indicator"? A peer judgment of quality? But that quality judgment has already been made by the peer-reviewers of the journal in which the article was published -- and every field has a hierarchy of quality among journals that is known (and may even sometimes be correlated with the journal's impact factor, if one compares like with like in terms of subject matter). What is the point of repeating the peer review exercise? And especially if here too it turns out to be correlated with metrics? Is it? > "While metrics are likely to be used to simplify the research > assessment process, the merits of a qualitative element would be to > ensure that over-reliance on quantitative factors does not unfairly > discriminate against research which is of good quality but has not > been cited as highly as other research due to factors such as its > local impact." Why not ask the panels first to make quality judgments on the journals in which the papers were published, and then see whether those rankings correlate with the author/article citation metrics? and whether they correlate with the RAE rankings based on the present time-consuming qualitative re-evaluations? If the correlations prove lower than in the other fields (even when augmented by prior funding and other metrics) *then* there may be a case for special treatment of the humanities. Otherwise, the special pleading on behalf of uncited research sounds as anecdotal, arbitrary and ad hoc as the claim that high citations in humanities betoken disagreement rather than usage and importance, as in other fields. > "Unless more appropriate metrics can be developed for the humanities, > it would seem that an element of expert peer review must remain in > whatever metrics based system emerges from the ashes of the RAE." It has not yet been shown whether the same metrics that correlate highly with RAE outcome in other fields (funding, citations) truly fail to do so in the humanities. If they do fail to correlate sufficiently, there are still many candidate metrics to try out (co-citations, downloads, book citations, CiteRank, latency/longevity, exogamy/endogamy, hubs/authorities, co-text, etc.) 
before having to fall back on repeating, badly, the peer evaluation that should have already have been done, properly, by the journals in which the research first appeared. Stevan Harnad > Research Assessment Exercise: http://www.rae.co.uk > Economic and Social Research Council: http://www.eserc.ac.uk > > Open access: practical matters become the key focus, EPS Insights, 10 > March 2005 > http://www.epsltd.com/accessArticles.asp?articleType=1&updateNoteID=1538 > > Citation Analysis in the Open Access World, imi, September 2004 > http://www.epsltd.com/accessArticles.asp?articleType=2&articleID=236&imiID=294 From garfield at CODEX.CIS.UPENN.EDU Thu Apr 6 17:57:45 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 6 Apr 2006 17:57:45 -0400 Subject: Della Sala S, Crawford JR "Impact factor as we know it handicaps neuropsychology and neuropsychologists " CORTEX 42 (1): 1-2 JAN 2006 Message-ID: FOR FULL TEXT GO TO : http://www.cortex-online.org/ftpNavigator.asp? myAction=browse&page=1&orderby=3DS&id=175 CLICK ON FIRST ARTICLE Title: Impact factor as we know it handicaps neuropsychology and neuropsychologists Author(s): Della Sala S, Crawford JR Source: CORTEX 42 (1): 1-2 JAN 2006 Document Type: Editorial Material Language: English Cited References: 2 Times Cited: 0 Addresses: Della Sala S (reprint author), Univ Edinburgh, Human Cognit Neurosci Psychol, Edinburgh, Midlothian EH8 9YL Scotland Univ Edinburgh, Human Cognit Neurosci Psychol, Edinburgh, Midlothian EH8 9YL Scotland Univ Aberdeen, Sch Psychol, Aberdeen, AB9 1FX Scotland Publisher: MASSON DIVISIONE PERIODICI, VIA MUZIO ATTENDOLO DETTO SFORZA 79, 20141 MILANO, ITALY Subject Category: BEHAVIORAL SCIENCES; NEUROSCIENCES IDS Number: 015WZ ISSN: 0010-9452 CITED REFERENCES : COHEN J STAT POWER ANAL BEHA : 1988 GARFIELD E CORTEX 37 : 575 2001 From gwhitney at UTK.EDU Thu Apr 6 20:14:44 2006 From: gwhitney at UTK.EDU (Gretchen Whitney) Date: Thu, 6 Apr 2006 20:14:44 -0400 Subject: Race, Facebook, and Community Analysis Message-ID: The point here is not the data revealed, but the questions asked. Better methodologies, anyone? --gw <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><> Gretchen Whitney, PhD tel 865.974.7919 School of Information Sciences fax 865.974.4967 University of Tennessee, Knoxville TN 37996 USA gwhitney at utk.edu http://web.utk.edu/~gwhitney/ jESSE:http://web.utk.edu/~gwhitney/jesse.html SIGMETRICS:http://web.utk.edu/~gwhitney/sigmetrics.html <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><> ---------- Forwarded message ---------- Date: Sun, 26 Mar 2006 11:57:15 -0500 (EST) From: Fred Stutzman Reply-To: air-l at listserv.aoir.org To: air-l at listserv.aoir.org Subject: Re: [Air-l] myspace and race Nicolas Hamatake, David Lifson and Saket Navlakha prepared a paper for a graduate level class that deals heavily with race. The paper is entitled: The Facebook: Analysis of a Cornell Community Social Network. The paper can be downloaded here: http://www.people.cornell.edu/pages/nh39/papers/cs685.pdf The authors acknowledge the methodological shortcomings of the study, and it has not been peer-reviewed. And also, its not about Myspace. That said, it should be interested to a few of us who haven't come across the paper yet. -Fred On Sat, 25 Mar 2006, danah boyd wrote: > I know of none but would be very interested in this. I am paying > attention to the (lack of) diversity in youth networks on these > sites. 
For example, even in schools that are racially mixed, the > profiles people connect to on MySpace are very homogeneous. > > On Mar 24, 2006, at 7:36 PM, Greg Wise wrote: > >> Hi folks, >> Just following up on the myspace thread that bounced around here >> last month: anyone know of any work regarding myspace and race? In >> particular representation, self-presentation, etc. of race on >> myspace (or similar sites). The article in the NYT a few weeks ago >> on online self-portraits got me thinking about identity performance >> and negotiation on such sites. >> >> cheers, >> >> Greg Wise >> _______________________________________________ >> The air-l at listserv.aoir.org mailing list >> is provided by the Association of Internet Researchers http://aoir.org >> Subscribe, change options or unsubscribe at: http:// >> listserv.aoir.org/listinfo.cgi/air-l-aoir.org >> >> Join the Association of Internet Researchers: >> http://www.aoir.org/ > > - - - - - - - - - - d a n a h ( d o t ) o r g - - - - - - - - - - > "taken out of context i must seem so strange" > > musings :: http://www.zephoria.org/thoughts > > > > > _______________________________________________ > The air-l at listserv.aoir.org mailing list > is provided by the Association of Internet Researchers http://aoir.org > Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org > > Join the Association of Internet Researchers: > http://www.aoir.org/ > -- Fred Stutzman http://claimID.com AIM: chimprawk 919-260-8508 _______________________________________________ The air-l at listserv.aoir.org mailing list is provided by the Association of Internet Researchers http://aoir.org Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org Join the Association of Internet Researchers: http://www.aoir.org/ From notsjb at LSU.EDU Fri Apr 7 10:09:11 2006 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Fri, 7 Apr 2006 09:09:11 -0500 Subject: HistCite and RAEs Message-ID: Loet, There is one point that I want to make that I should have made long ago but somehow did not make the necessary connections. There is excellent software that can perform very sophisticated program evaluations for RAEs. It has all the necessary "metrics." It is HistCite that has been developed by Sasha Pudovkin and Gene Garfield. Sasha gave me a lengthy demonstration of this software at the ASIST Conference in Providence, RI, in 2004, and I was mightly impressed. The program allows you first to define very specific subject sets; it then picks out the key articles in these subject sets and maps them with very informative graphics; it then allows you to do institutional and national rankings. It compensates for all the faults of the NRC meat axe approach. Moreover, since the UK and the US act as a cultural unit, the correlations of ISI citations with British expert ratings are very high, and I have proven the strong association of ISI citations with British supralibrary use. Therefore, HistCite analyses should conform to British concepts of program importance. From your perspective, I suppose, the main fault is that it works on ISI subject sets, but, in my opinion, these subject sets are about as good as can be expected. One interesting experiment that HistCite might make possible to test how a subject structure matches institutional structures. I am sure that Gene Garfield would allow the UK government to have access to the program for a reasonable price. That said, there still remains one problem. 
Even with such a sophisticated method of program evaluation, should the UK government allocate research money on the basis of it, collectively punishing scientists not part of the selected programs, or should the UK government remain neutral and allocate money on the basis of the evaluation of individual projects? In either case most of the money will go to the same programs, but at least the others have a chance. Therefore, ideologically, I still favor the latter approach. SB Loet Leydesdorff @LISTSERV.UTK.EDU> on 04/06/2006 04:34:56 PM Please respond to ASIS&T Special Interest Group on Metrics Sent by: ASIS&T Special Interest Group on Metrics To: SIGMETRICS at LISTSERV.UTK.EDU cc: (bcc: Stephen J Bensman/notsjb/LSU) Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based Dear Stephen, Thanks for all this. It seems to me that we have exhaustively discussed the RAE and the problems of replacing it with a metrics. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Thursday, April 06, 2006 4:09 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, > We go back to the frequency theory of probability, which was > best set forth by Richard Von Mises in his Wahr, Wahrheit, > und Statistik (Truth, Probability, and Statistics). Von > Mises states that no probability can calculated until a > proper set--or "kollektiv" in his language--has been defined. > Karl Pearson operated within the frequency theory, and he > stated in his Grammar of Science that classification is the > basis of all science, and he dismisses any study that is not > based on classification as not science. So if you are going > to rate programs, then you are going to have to establish > precisely what it is you are going to rate and then select > your data and measures accordingly. Take history for an > example. You can rate history as a whole or select a > historical specialty as southern history. Your selection of > variables and bibliometric data will vary according to your > purpose. By this very fact you are in a sense predetermining > the outcome of who is going to come to the top. Now I know > that the NRC was after history as a whole from a close up > analysis of what they did. For bibliometric data they used > the entire SSCI. Moreover, I know which professors at LSU > were selected for the ratings. LSU has its strongest faculty > in Southern history but these were not selected. Instead the > professors in Russian and Chinese were selected for the > ratings. This seems illogical, but then it has to be > remembered that LSU has one of the few programs big enough to > hate specialists in Russian and Chinese history, and these > were needed for an adequate rating sample. > > Now I do not understand exactly how the Brits go about their > RAEs. From what I understand, a prmgram has to volunteer to > be rated, or otherwise it is not rated. So the sample seems > to be self-selecting. The programs then submit publications > to a committee, which uses them as a basis for ratings. 
> But these programs may have different strengths and agendas, > and this should affect the ratings. But do the ratings have > the purpose of selecting some research areas as more > important than other research areas? > This I do not know, and this would definitely affect the > ratings. Is this what you mean by "circular." > > What I would mean by "circular" is that, due to the stability > of distributions, the same programs would be selected again > and again for funding, reinforcing the hierarchy, and > blocking either lesser programs or better faculty at the > lesser programs from advancing their agendas. > > Did you make any sense out of all of this confusion? > > SB > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 04/06/2006 > 12:13:37 AM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > If I correctly understand, you wish to say that any ranking > of authors or institutions is ultimately dependent on how the > sets are defined. A definition of sets on the basis of > institutions would thus make the RAE operation circular. > > With best wishes, Loet > > ________________________________ > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > > Sent: Wednesday, April 05, 2006 11:16 PM > > To: SIGMETRICS at LISTSERV.UTK.EDU > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Loet, > > The principle in real estate is "location, location, > location." The > > principle in program evaluation is "set definition, set definition, > > set definition." I pointed out in another posting that a major > > discovery of the 1993 NRC rating was that all previous > ratings in the > > biosciences were incorrect due to an incorrect method of > > classification resulting in non-comparable sets. I am > somewhat proud > > that I was able to show to the NRC people how a change in > > classification method had an enormous impact on the ratings of LSU, > > turning us from a nonentity into something quite > respectable and more > > in line with Louisiana's pioneering role in medicine through mainly > > the Ochsner Clinic and the first attempt at a charity > hospital system. > > > > SB > > > > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > > 03/31/2006 > > 11:57:46 AM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Dear Stephen, > > > > Although I am politically at the other end of the spectrum, I fully > > agree with your critique of the RAE. 
But the critique would equally > > hold for a "metric" that would rate departments against > each other as > > proposed by some of our colleagues. The problem is to take > departments > > as units of analysis. > > > > With best wishes, > > > > > > Loet > > > > > -----Original Message----- > > > From: ASIS&T Special Interest Group on Metrics > > > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stephen > J Bensman > > > Sent: Thursday, March 30, 2006 10:32 PM > > > To: SIGMETRICS at listserv.utk.edu > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > Gee, I consider myself anything but a cultural elitist. > > > After all, I work at LSU. The basic problem of the RAE is > > that it is > > > biased against an institution like LSU. At least under the > > American > > > system, good researchers at a place like LSU have an even > chance to > > > obtain research funding, and many take advantage of this > > system. That > > > way a good researcher maintains his independence and advance his > > > career. This way LSU plays a major role as a launch pad > for up and > > > coming scientists. The British RAE always reminded me of > > the Tsarist > > > system of krugovaia poruka, where all the peasants of a > > commune were > > > held liable for communal taxes. This was the taxation system of > > > serfdom, causing peasants to be chained to the commune, stifling > > > individual initiative, thereby causing agricultural > stagnation, and > > > ultimately a violent revolution. > > > If this makes me a cultural elitist, then so be it. > > > > > > SB > > > > > > > > > > > > > > > Phil Davis @LISTSERV.UTK.EDU> on 03/30/2006 > > > 02:09:28 PM > > > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > Stephen, I wouldn't call you a "capitalist pig" but a > > willfully blind, > > > cultural elitist. In countries where education is wholly > > (or mostly) > > > funded by the government -- not just the UK and Europe, but > > Canada and > > > others -- the government is concerned about making sure > > that everyone > > > gets some modicum of funding. That does not mean a completely > > > equitable rationing system, but it ensures a base-level of > > funding. > > > In the United States, this base-level funding often comes > > from one's > > > own department or college. Granted, the > > capitalist-approach you speak > > > of does reward the best and greatest, and this Winner-takes-all > > > approach does result in pioneering research, yet it only > > rewards the > > > few. > > > > > > --Phil Davis > > > > > > > > > > > > Stephen Bensman wrote: > > > > > > >Speaking as a capitalist pig, the entire RAE system is > just another > > > example > > > >of socialists hoisting themselves on their own petards. > > > Point 1 below > > > >contains the essence of the problem. The US has done > > > pioneering work > > > >on the evaluation of research-doctorate programs but was > > never silly > > > >enough > > > to > > > >allocate research resources on the basis of it. 
Luckily > > > because these > > > >evaluations were usually screwed up in some way. Allocation of > > > >research resources was always done on a project-by-project > > > basis by the > > > >NSF, NIH, and others, with experts in the fields evaluating > > > individual > > > >research proposals. The Europeans have a tendency to overplan > > > >everything with disastrous consequences--the disaster in > > > Eastern Europe > > > >just being the latest example of it. > > > > > > > >SB > > > > > > From loet at LEYDESDORFF.NET Fri Apr 7 10:53:16 2006 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 7 Apr 2006 16:53:16 +0200 Subject: HistCite and RAEs In-Reply-To: Message-ID: So, there we are! Some of us seem to have the metrics on the shelf. Congratulation to Sasha Pudovkin, Eugene Garfield, and Steven Harnad! Unless, Stephen, you happen to have shares in the ISI? :-) I was at the meeting in Prudence and got the same demonstration, but I obviously did not see what you saw. Is there a publication which you would advice, to understand these correlations? With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Friday, April 07, 2006 4:09 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] HistCite and RAEs > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, > There is one point that I want to make that I should have > made long ago but somehow did not make the necessary > connections. There is excellent software that can perform > very sophisticated program evaluations for RAEs. > It has all the necessary "metrics." It is HistCite that has > been developed by Sasha Pudovkin and Gene Garfield. Sasha > gave me a lengthy demonstration of this software at the ASIST > Conference in Providence, RI, in 2004, and I was mightly > impressed. The program allows you first to define very > specific subject sets; it then picks out the key articles in > these subject sets and maps them with very informative > graphics; it then allows you to do institutional and national > rankings. It compensates for all the faults of the NRC meat > axe approach. Moreover, since the UK and the US act as a > cultural unit, the correlations of ISI citations with British > expert ratings are very high, and I have proven the strong > association of ISI citations with British supralibrary use. > Therefore, HistCite analyses should conform to British > concepts of program importance. From your perspective, I > suppose, the main fault is that it works on ISI subject sets, > but, in my opinion, these subject sets are about as good as > can be expected. One interesting experiment that HistCite > might make possible to test how a subject structure matches > institutional structures. I am sure that Gene Garfield would > allow the UK government to have access to the program for a > reasonable price. > > That said, there still remains one problem. 
Even with such a > sophisticated method of program evaluation, should the UK > government allocate research money on the basis of it, > collectively punishing scientists not part of the selected > programs, or should the UK government remain neutral and > allocate money on the basis of the evaluation of individual > projects? In either case most of the money will go to the > same programs, but at least the others have a chance. > Therefore, ideologically, I still favor the latter approach. > > SB > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > 04/06/2006 > 04:34:56 PM > > Please respond to ASIS&T Special Interest Group on Metrics > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Stephen, > > Thanks for all this. It seems to me that we have exhaustively > discussed the RAE and the problems of replacing it with a metrics. > > With best wishes, > > > Loet > > ________________________________ > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > > -----Original Message----- > > From: ASIS&T Special Interest Group on Metrics > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > > Sent: Thursday, April 06, 2006 4:09 PM > > To: SIGMETRICS at LISTSERV.UTK.EDU > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > Loet, > > We go back to the frequency theory of probability, which > was best set > > forth by Richard Von Mises in his Wahr, Wahrheit, und Statistik > > (Truth, Probability, and Statistics). Von Mises states that no > > probability can calculated until a proper set--or > "kollektiv" in his > > language--has been defined. > > Karl Pearson operated within the frequency theory, and he > stated in > > his Grammar of Science that classification is the basis of all > > science, and he dismisses any study that is not based on > > classification as not science. So if you are going to rate > programs, > > then you are going to have to establish precisely what it > is you are > > going to rate and then select your data and measures accordingly. > > Take history for an example. You can rate history as a whole or > > select a historical specialty as southern history. Your > selection of > > variables and bibliometric data will vary according to your > purpose. > > By this very fact you are in a sense predetermining the > outcome of who > > is going to come to the top. Now I know that the NRC was after > > history as a whole from a close up analysis of what they did. For > > bibliometric data they used the entire SSCI. Moreover, I > know which > > professors at LSU were selected for the ratings. LSU has its > > strongest faculty in Southern history but these were not selected. > > Instead the professors in Russian and Chinese were selected for the > > ratings. This seems illogical, but then it has to be > remembered that > > LSU has one of the few programs big enough to hate specialists in > > Russian and Chinese history, and these were needed for an adequate > > rating sample. 
> > > > Now I do not understand exactly how the Brits go about their RAEs. > > From what I understand, a prmgram has to volunteer to be rated, or > > otherwise it is not rated. So the sample seems to be > self-selecting. > > The programs then submit publications to a committee, which > uses them > > as a basis for ratings. > > But these programs may have different strengths and > agendas, and this > > should affect the ratings. But do the ratings have the purpose of > > selecting some research areas as more important than other research > > areas? > > This I do not know, and this would definitely affect the > ratings. Is > > this what you mean by "circular." > > > > What I would mean by "circular" is that, due to the stability of > > distributions, the same programs would be selected again > and again for > > funding, reinforcing the hierarchy, and blocking either lesser > > programs or better faculty at the lesser programs from > advancing their > > agendas. > > > > Did you make any sense out of all of this confusion? > > > > SB > > > > > > > > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > > 04/06/2006 > > 12:13:37 AM > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > If I correctly understand, you wish to say that any ranking > of authors > > or institutions is ultimately dependent on how the sets are > defined. A > > definition of sets on the basis of institutions would thus make the > > RAE operation circular. > > > > With best wishes, Loet > > > > ________________________________ > > Loet Leydesdorff > > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal > > 48, 1012 CX Amsterdam. > > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; > loet at leydesdorff.net ; > > http://www.leydesdorff.net/ > > > > > > > > > -----Original Message----- > > > From: ASIS&T Special Interest Group on Metrics > > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen > J Bensman > > > Sent: Wednesday, April 05, 2006 11:16 PM > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > Loet, > > > The principle in real estate is "location, location, > > location." The > > > principle in program evaluation is "set definition, set > definition, > > > set definition." I pointed out in another posting that a major > > > discovery of the 1993 NRC rating was that all previous > > ratings in the > > > biosciences were incorrect due to an incorrect method of > > > classification resulting in non-comparable sets. I am > > somewhat proud > > > that I was able to show to the NRC people how a change in > > > classification method had an enormous impact on the > ratings of LSU, > > > turning us from a nonentity into something quite > > respectable and more > > > in line with Louisiana's pioneering role in medicine > through mainly > > > the Ochsner Clinic and the first attempt at a charity > > hospital system. 
> > > > > > SB > > > > > > > > > > > > > > > Loet Leydesdorff @LISTSERV.UTK.EDU> on > > > 03/31/2006 > > > 11:57:46 AM > > > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > Dear Stephen, > > > > > > Although I am politically at the other end of the > spectrum, I fully > > > agree with your critique of the RAE. But the critique > would equally > > > hold for a "metric" that would rate departments against > > each other as > > > proposed by some of our colleagues. The problem is to take > > departments > > > as units of analysis. > > > > > > With best wishes, > > > > > > > > > Loet > > > > > > > -----Original Message----- > > > > From: ASIS&T Special Interest Group on Metrics > > > > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stephen > > J Bensman > > > > Sent: Thursday, March 30, 2006 10:32 PM > > > > To: SIGMETRICS at listserv.utk.edu > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > > > Gee, I consider myself anything but a cultural elitist. > > > > After all, I work at LSU. The basic problem of the RAE is > > > that it is > > > > biased against an institution like LSU. At least under the > > > American > > > > system, good researchers at a place like LSU have an even > > chance to > > > > obtain research funding, and many take advantage of this > > > system. That > > > > way a good researcher maintains his independence and > advance his > > > > career. This way LSU plays a major role as a launch pad > > for up and > > > > coming scientists. The British RAE always reminded me of > > > the Tsarist > > > > system of krugovaia poruka, where all the peasants of a > > > commune were > > > > held liable for communal taxes. This was the taxation > system of > > > > serfdom, causing peasants to be chained to the commune, > stifling > > > > individual initiative, thereby causing agricultural > > stagnation, and > > > > ultimately a violent revolution. > > > > If this makes me a cultural elitist, then so be it. > > > > > > > > SB > > > > > > > > > > > > > > > > > > > > Phil Davis @LISTSERV.UTK.EDU> on 03/30/2006 > > > > 02:09:28 PM > > > > > > > > Please respond to ASIS&T Special Interest Group on Metrics > > > > > > > > > > > > Sent by: ASIS&T Special Interest Group on Metrics > > > > > > > > > > > > > > > > To: SIGMETRICS at LISTSERV.UTK.EDU > > > > cc: (bcc: Stephen J Bensman/notsjb/LSU) > > > > > > > > Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > > > > > > > > Adminstrative info for SIGMETRICS (for example unsubscribe): > > > > http://web.utk.edu/~gwhitney/sigmetrics.html > > > > > > > > Stephen, I wouldn't call you a "capitalist pig" but a > > > willfully blind, > > > > cultural elitist. In countries where education is wholly > > > (or mostly) > > > > funded by the government -- not just the UK and Europe, but > > > Canada and > > > > others -- the government is concerned about making sure > > > that everyone > > > > gets some modicum of funding. 
That does not mean a completely > > > > equitable rationing system, but it ensures a base level of > > > > funding. > > > > In the United States, this base-level funding often comes > > > > from one's own department or college. Granted, the > > > > capitalist approach you speak of does reward the best and greatest, and this winner-takes-all > > > > approach does result in pioneering research, yet it only > > > > rewards the few. > > > > > > > > --Phil Davis > > > > > > > > Stephen Bensman wrote: > > > > > > > > >Speaking as a capitalist pig, the entire RAE system is just another example > > > > >of socialists hoisting themselves on their own petards. Point 1 below > > > > >contains the essence of the problem. The US has done pioneering work > > > > >on the evaluation of research-doctorate programs but was never silly enough > > > > >to allocate research resources on the basis of it. Luckily so, because these > > > > >evaluations were usually screwed up in some way. Allocation of > > > > >research resources was always done on a project-by-project basis by the > > > > >NSF, NIH, and others, with experts in the fields evaluating individual > > > > >research proposals. The Europeans have a tendency to overplan > > > > >everything with disastrous consequences--the disaster in Eastern Europe > > > > >just being the latest example of it. > > > > > > > > > >SB > > > > > From notsjb at LSU.EDU Fri Apr 7 11:23:04 2006 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Fri, 7 Apr 2006 10:23:04 -0500 Subject: HistCite and RAEs Message-ID: Loet, I do not have shares in Thomson Scientific. You did not see this aspect of HistCite, because there are many aspects, and you were not interested in this one. Given my experience with the NRC, I was, and I specifically asked for demonstrations of this aspect of it. The demonstration specifically concerned "Population Genetics," and the top institution in this subfield was the University of Edinburgh, where, I think, Dolly the Sheep was first successfully cloned. It showed British universities to be extremely strong in this field, and this fits in with the history of Britain, where most major breakthroughs in the biosciences were made due to the Darwinian Revolution. That was enough to convince me. There are many studies confirming the high correlation of ISI citations with British RAE ratings. As for my proofs of the strong association of ISI citations with British supralibrary use, go to Gene Garfield's web site, read the third part of the article "Urquhart's Law" and the article entitled "Urquhart's and Garfield's Laws." SB Loet Leydesdorff @LISTSERV.UTK.EDU> on 04/07/2006 09:53:16 AM Please respond to ASIS&T Special Interest Group on Metrics Sent by: ASIS&T Special Interest Group on Metrics To: SIGMETRICS at LISTSERV.UTK.EDU cc: (bcc: Stephen J Bensman/notsjb/LSU) Subject: Re: [SIGMETRICS] HistCite and RAEs So, there we are! Some of us seem to have the metrics on the shelf. Congratulations to Sasha Pudovkin, Eugene Garfield, and Stevan Harnad! Unless, Stephen, you happen to have shares in the ISI? :-) I was at the meeting in Providence and got the same demonstration, but I obviously did not see what you saw. Is there a publication which you would advise, to understand these correlations?
With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman > Sent: Friday, April 07, 2006 4:09 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] HistCite and RAEs > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Loet, There is one point that I should have made long ago, but somehow I did not make the necessary connections. There is excellent software that can perform very sophisticated program evaluations for RAEs. It has all the necessary "metrics." It is HistCite, which has been developed by Sasha Pudovkin and Gene Garfield. Sasha gave me a lengthy demonstration of this software at the ASIST Conference in Providence, RI, in 2004, and I was mightily impressed. The program allows you first to define very specific subject sets; it then picks out the key articles in these subject sets and maps them with very informative graphics; it then allows you to do institutional and national rankings. It compensates for all the faults of the NRC meat-axe approach. Moreover, since the UK and the US act as a cultural unit, the correlations of ISI citations with British expert ratings are very high, and I have proven the strong association of ISI citations with British supralibrary use. Therefore, HistCite analyses should conform to British concepts of program importance. From your perspective, I suppose, the main fault is that it works on ISI subject sets, but, in my opinion, these subject sets are about as good as can be expected. One interesting experiment that HistCite might make possible is to test how a subject structure matches institutional structures. I am sure that Gene Garfield would allow the UK government to have access to the program for a reasonable price. > > That said, there still remains one problem. Even with such a sophisticated method of program evaluation, should the UK government allocate research money on the basis of it, collectively punishing scientists not part of the selected programs, or should the UK government remain neutral and allocate money on the basis of the evaluation of individual projects? In either case most of the money will go to the same programs, but at least the others have a chance. Therefore, ideologically, I still favor the latter approach. > > SB
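The workflow described above -- define a tight subject set, then rank institutions by the citations their papers collect within that set -- reduces to a simple aggregation once the records are in hand. A minimal sketch in Python, using a few invented toy records rather than ISI data, and making no claim about how HistCite itself is implemented:

    from collections import defaultdict

    # Invented toy records: each paper has an id, an institution, and the
    # papers it cites within the same pre-defined subject set.
    papers = [
        {"id": "p1", "inst": "Univ A", "cites": []},
        {"id": "p2", "inst": "Univ B", "cites": ["p1"]},
        {"id": "p3", "inst": "Univ A", "cites": ["p1", "p2"]},
        {"id": "p4", "inst": "Univ C", "cites": ["p1", "p3"]},
    ]

    # Citations received by each paper from other papers in the set.
    received = defaultdict(int)
    for p in papers:
        for cited in p["cites"]:
            received[cited] += 1

    # Aggregate to the institutional level and rank.
    by_inst = defaultdict(int)
    for p in papers:
        by_inst[p["inst"]] += received[p["id"]]

    for inst, total in sorted(by_inst.items(), key=lambda kv: kv[1], reverse=True):
        print(inst, total)

The same aggregation could be run at the national level by replacing the institution field with a country field; the hard part, as the surrounding discussion makes clear, is the definition of the subject set, not the counting.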
From eugene.garfield at THOMSON.COM Fri Apr 7 17:53:02 2006 From: eugene.garfield at THOMSON.COM (Eugene Garfield) Date: Fri, 7 Apr 2006 17:53:02 -0400 Subject: FW: [CHMINF-L] GENERAL: "ghosts" of retracted papers Message-ID: Retractions are linked by the Web of Science to original reports. 23 March 2006 Eugene Garfield, Ph.D. Thomson ISI, Marie McVeigh and Marion Muff Send rapid response to journal: Re: Retractions are linked by the Web of Science to original reports. In response to: Harold C. Sox and Drummond Rennie "Research Misconduct, Retraction, and Cleansing the Medical Literature: Lessons from the Poehlman Case" Annals of Internal Medicine (18 April 2006 Volume 144 Issue 8) Dear Editor: The recent article about the importance of integrating retraction notices with their original reports noted their treatment in PubMed, but failed to take into account the procedures followed in the Science Citation Index (SCI), the electronic version of which is included in the Web of Science. Ever since the SCI was launched in the sixties, published retractions have been indexed by SCI. In each case a citation link was established between the retraction, that is the "correction," and the original source article. To find retractions, like all other corrections, all one had to do was conduct a cited reference search based on the author, journal, and year of the retracted paper. You would then see a list of all items that cited the original work, including the retractions, which, like all other corrections, would be coded as such. However, since 1996 the treatment of retractions has been amplified by including the notation for the retraction together with the bibliographic citation for the source item. If one does a search on a subject or an author and finds a paper which has been retracted, the retraction can be seen immediately adjacent to the source entry. Thus, the retraction entry for WS Hwang's paper on "Patient-specific embryonic stem cells derived from human SCNT blastocyst (Retraction of vol 308, pg 1777, 2005)" is followed by SCIENCE 311 (5759): 335-335 JAN 20 2006. When you conduct a cited reference search on the original paper at Hwang WS, Science, 2005, you immediately see the statement that "this article was retracted; see Science 311, 335, Jan. 20, 2006." In previous generations authors often unwittingly cited retracted research because they did not or could not check citation indexes. Today there is no excuse, since access to PubMed and Web of Science is widely available.
Eugene Garfield, Chairman Emeritus Marie McVeigh, Senior Manager, JCR & Bibliographic Policy Marion Muff, Bibliographic Policy Manager Institute for Scientific Information Thomson Scientific 3501 Market Street Philadelphia, PA 19104 -----Original Message----- From: CHEMICAL INFORMATION SOURCES DISCUSSION LIST [mailto:CHMINF-L at LISTSERV.INDIANA.EDU] On Behalf Of Robert Michaelson Sent: Friday, April 07, 2006 11:56 AM To: CHMINF-L at LISTSERV.INDIANA.EDU Subject: [CHMINF-L] GENERAL: "ghosts" of retracted papers Please excuse duplicate posting. The current issue of Science (7 April 2006) has two "News Focus" items regarding retracted papers: on pages 38-43 titled "Cleaning up the paper trail" by Jennifer Couzin and Katherine Unger, the subtitle reads "Once an investigation is completed and the publicity dies down, what happens to fraudulent or suspect papers? In many cases, not much." One paragraph says "An examination by Science of more than a dozen fraud or suspected fraud cases spanning 20 years reveals uneven and often chaotic efforts to correct the scientific literature. Every case has its own peculiarities. Whether wayward authors confess to fraud; whether investigations are launched at all, and if they are, whether their scope is broad or narrow; whether fraud findings are clearly communicated to journals--each of these helps determine how thorough a mop-up ensues." A side-bar "News Focus" item on pages 40-41, "Even retracted papers endure" by the same two authors (but in reverse order) says in part: "Like ghosts riffling the pages of journals, retracted papers live on. Using Thomson Scientific's ISI Web of Knowledge and Google Scholar, Science found dozens of citations of retracted papers in fields from physics to cancer research to plant biology. "Seventeen of 19 retracted papers co-authored by German cancer researcher Friedhelm Herrmann have been cited since being retracted, in some cases nearly a decade after they were pulled. Together, two of those papers were cited roughly 60 times. Examination of one Nature paper by former Bell Labs physicist Jan Hendrik Schön, published in 2000 and retracted in 2003, revealed that it's been noted in research papers 17 times since, although the drop-off after retraction was steep: Prior to being pulled, the paper was cited 153 times. "It's "quite embarrassing," says Richard Smith, former editor of the British Medical Journal, of references to retracted publications. "If people cite fraudulent articles, then either their research is going to be thrown off or something will be wasted," says Paul Friedman, a former dean at the University of California, San Diego, who oversaw an investigation into papers by radiologist Robert Slutsky in the mid-1980s. "In some cases, citations are "negative": The paper is cited precisely because it was retracted, and the retraction duly noted in the text. But those familiar with postretraction citation consider that rare..." See this issue of Science for the rest. Bob Michaelson Northwestern University Library Evanston, Illinois 60208 USA rmichael at northwestern.edu CHMINF-L Archives (also to join or leave CHMINF-L, etc.) http://listserv.indiana.edu/archives/chminf-l.html Search the CHMINF-L archives at: https://listserv.indiana.edu/cgi-bin/wa-iub.exe?S1=chminf-l Sponsors of CHMINF-L: http://www.indiana.edu/~libchem/chminfsupport.htm
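Both the Garfield letter and the Science news items come down to the same bookkeeping: link each retraction notice to its original record, then count how often the original is still cited after the retraction date. A minimal sketch, with invented record identifiers and years standing in for real Web of Science data:

    # Paper id -> year of retraction (invented for illustration).
    retracted = {"paperA": 2003, "paperB": 2001}

    # Citing records: publication year plus the papers they reference (also invented).
    citing_records = [
        {"id": "c1", "year": 2002, "refs": ["paperA"]},
        {"id": "c2", "year": 2005, "refs": ["paperA", "paperC"]},
        {"id": "c3", "year": 2006, "refs": ["paperB"]},
        {"id": "c4", "year": 2000, "refs": ["paperB"]},
    ]

    # Count citations made after the cited paper was retracted.
    post_retraction = {pid: 0 for pid in retracted}
    for rec in citing_records:
        for ref in rec["refs"]:
            if ref in retracted and rec["year"] > retracted[ref]:
                post_retraction[ref] += 1

    print(post_retraction)  # {'paperA': 1, 'paperB': 1}

A "negative" citation, in the sense used above, would need the citing text itself to be inspected; the counts alone cannot distinguish it.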
From jonathan at LEVITT.NET Sun Apr 9 10:02:37 2006 From: jonathan at LEVITT.NET (Jonathan Levitt) Date: Sun, 9 Apr 2006 15:02:37 +0100 Subject: Future UK RAEs: Peer review or Metrics-Based Message-ID: Hi, Stevan wrote "1) peer review has already been done for published articles, so the issue is not (i) peer review vs. metrics but (ii) peer review plus metrics vs. peer review plus metrics plus 'peer re-review' (by the RAE panels)." To me the issue is not so much peer review vs. metrics, but finding a combination of peer review and citation/usage metrics that seems particularly likely to be effective at measuring research quality. In principle, the sequel to the RAE could take into account the accrual reviews from the journal that published the article (rather than conduct their own reviews), but in my view there would need to be stringent checks on their authenticity. Best regards, Jonathan. ----- Original Message ----- From: "Stevan Harnad" To: Sent: Thursday, April 06, 2006 10:56 PM Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > The following article is excellent and accurate overall. > > Cliffe, Rebeca (2006) Research Assessment Exercise: Bowing out in Favour of Metrics. EPS Insights: 3 April 2006 > http://technology.guardian.co.uk/weekly/story/0,,1747334,00.html > > One can hardly quarrel with the following face-valid summary from this article: > >> "The move to a new metrics based system [for RAE] will no doubt please those who see a role for institutional repositories in monitoring research quality. The online environment has thrown up new metrics, which could be used alongside traditional measures such as citations. Usage can be measured at the point of consumption -- the number of "hits" on a particular article can indicate the uptake of the research. Web usage would be expected to be an early indicator of how often the article is later cited. Some believe that institutional repositories should be used as the basis for ongoing assessment of all UK peer-reviewed research output by mandating that researchers should place material in repositories. They argue that this would allow usage to be measured earlier, through downloads of both pre-prints and post-prints. Of course, this course of action would also advance the cause of open access by making this research available free." > > But there are a few points of detail on which this otherwise accurate report could be made even more useful: > > (1) peer review has already been done for published articles, so the issue is not (i) peer review vs. metrics but (ii) peer review plus metrics vs. peer review plus metrics plus "peer re-review" (by the RAE panels). It is the re-review of already peer-reviewed publications that is the wasteful practice that needs to be scrapped, given that peer review has already been done, and that metrics are already highly correlated with the RAE ranking outcome anyway.
> > (2) For the fields in which the current RAE outcome is not already > highly correlated with metrics, further work is needed; obviously > works other than peer-reviewed article or books (e.g., artwork, > multimedia) will have to be evaluated in other ways, but for > science, engineering, biomedicine, social science, and most fields > of humanities, books and articles are the form that research output > takes, and they will be amenable to the increasingly powerful and > diverse forms of metrics that are being devised and tested. (Many > will be tested in parallel with the 2008 RAE, which will still be > conducted the old, wasteful way; some of the metrics may also be > testable retrospectively against prior RAE outcomes.) > >> "Proponents of a metrics based system point to studies that show >> how average citation frequencies of articles can closely predict >> the scores given by the RAE for departmental quality, even though >> the RAE does not currently count these." > > True, but the highest metric correlate of the present RAE outcome > is reportedly prior research funding (0.98). Yet it would be a big > mistake to scrap all other metrics and base the RAE rank on just > prior funding. That would just generate a massive Matthew Effect and > essentially make top-sliced RAE funding redundant with direct competitive > research project funding (thereby essentially "bowing out" of the dual > RAE/RCUK funding system altogether, reducing it to just research project > funding). What is remarkable about the high correlation between citation > counts and RAE ranks (0.7 - 0.9), even though the correlation is not quite > as high as with prior funding (0.98), is that citations are not presently > counted in the RAE (whereas prior funding is)! Not only are citations a > more independent metric of research performance than prior funding, but > counting them directly -- along with the many other candidate metrics -- > can enrich and diversify the RAE evaluation, rather than just make it > into a self-fulfilling prophecy. > >> "However, metrics tend to work better for the sciences than the >> humanities. Whereas the citation of science research is seen as an >> indicator of the quality and impact of the research, in the humanities >> this is not the case. Humanities research is based around critical >> discourse and an author may be citing an article simply to disagree >> with its argument." > > I don't think this is quite accurate. It might be true that humanities > research makes less explicit use of citation counts today than science > research. It might even be true that the correlation between citation > counts and research productivity and importance is lower in the > humanities than in the sciences (though I am not aware of studies to that > effect). And it may also be true that citation counts in the humanities > are less correlated with RAE rankings than they are in the sciences. But > the familiar canard about articles being cited, not because they are > valid but important, but in order to disagree with them, has too much > the flavour of the a-priori dismissiveness of citation analysis that we > hear in *all* disciplines from those who have not really investigated > it, or the evidence for/against it, but are simply expressing their own > personal prejudices on the subject. > > Let's see the citation counts for humanities articles and books, and > their correlation with other performance indicators as well as RAE > rankings rather than dismissing them a-priori on the basis of anecdotes. 
> >> "Also, an analysis of RAE 2001 submissions revealed that while some >> 90% of research outputs listed by British researchers in the fields >> of Physics and Chemistry were mapped by ISI data, in Law the figure >> was below 10%, according to Ian Diamond of the Economic and Social >> Research Council (ESRC) (Oxford Workshop on the use of Metrics in >> Research Assessment)." > > I am not sure what "mapped by ISI data" means, but if it means that ISI > does not > cover enough of the pertinent journals in Law, then the empirical question > is: > what are the pertinent journals? can citation counts be derived from their > online > versions, using the publishers' websites and/or subscribing institutions' > online > versions? how well does this augmented citation count correlate with the > ISI > subsample (<10%)? and how well do both correlate with RAE ranking? (Surely > ISI > coverage should not be the determinant of whether or not a metric is > valid.) > >> "Ultimately, a combination of qualitative and quantitative indicators >> would seem to be the best approach." > > What is a "qualitative indicator"? A peer judgment of quality? But > that quality judgment has already been made by the peer-reviewers of > the journal in which the article was published -- and every field has a > hierarchy of quality among journals that is known (and may even sometimes > be correlated with the journal's impact factor, if one compares like > with like in terms of subject matter). What is the point of repeating > the peer review exercise? And especially if here too it turns out to be > correlated with metrics? Is it? > >> "While metrics are likely to be used to simplify the research >> assessment process, the merits of a qualitative element would be to >> ensure that over-reliance on quantitative factors does not unfairly >> discriminate against research which is of good quality but has not >> been cited as highly as other research due to factors such as its >> local impact." > > Why not ask the panels first to make quality judgments on the journals in > which > the papers were published, and then see whether those rankings correlate > with the > author/article citation metrics? and whether they correlate with the RAE > rankings > based on the present time-consuming qualitative re-evaluations? If the > correlations prove lower than in the other fields (even when augmented by > prior > funding and other metrics) *then* there may be a case for special > treatment of > the humanities. Otherwise, the special pleading on behalf of uncited > research > sounds as anecdotal, arbitrary and ad hoc as the claim that high citations > in humanities betoken disagreement rather than usage and importance, > as in other fields. > >> "Unless more appropriate metrics can be developed for the humanities, >> it would seem that an element of expert peer review must remain in >> whatever metrics based system emerges from the ashes of the RAE." > > It has not yet been shown whether the same metrics that correlate highly > with RAE outcome in other fields (funding, citations) truly fail to > do so in the humanities. If they do fail to correlate sufficiently, > there are still many candidate metrics to try out (co-citations, > downloads, book citations, CiteRank, latency/longevity, exogamy/endogamy, > hubs/authorities, co-text, etc.) before having to fall back on repeating, > badly, the peer evaluation that should have already have been done, > properly, by the journals in which the research first appeared. 
> > Stevan Harnad > >> Research Assessment Exercise: http://www.rae.co.uk >> Economic and Social Research Council: http://www.eserc.ac.uk >> >> Open access: practical matters become the key focus, EPS Insights, 10 >> March 2005 >> http://www.epsltd.com/accessArticles.asp?articleType=1&updateNoteID=1538 >> >> Citation Analysis in the Open Access World, imi, September 2004 >> http://www.epsltd.com/accessArticles.asp?articleType=2&articleID=236&imiID=294 From harnad at ECS.SOTON.AC.UK Sun Apr 9 10:56:47 2006 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Sun, 9 Apr 2006 10:56:47 -0400 Subject: Future UK RAEs: Peer review or Metrics-Based In-Reply-To: <007001c65bde$42c8c8c0$0302a8c0@DELL> Message-ID: On Sun, 9 Apr 2006, Jonathan Levitt wrote: > In principle, the sequel to the RAE could take into account the > accrual reviews from the journal that published the article > (rather than conduct their own reviews), but in my view there > would need to be stringent checks on their authenticity. Submit the referee reports from the journals in which the articles were published? No harm in that, I suppose, but what on earth for? The journals did the peer review, and the attestation to that fact is the published article, the journal name, and the journal's established track record for quality. The rest is down to metrics (journal impact factors, author/article citation counts, downloads, co-citation fan-in/out quality, recursively weighted authority CiteRank, latency/longevity, co-text, etc. etc.). After 25 years of editing a very high impact peer-reviewed journal, Behavioral and Brain Sciences (BBS) -- http://www.bbsonline.org/ -- I cannot see any added benefit (though no harm) from forwarding the referee reports -- and, presumably, the editorial disposition letter -- to a tenure/ promotion committee or RAE panel. The peer-review has already performed its function in getting the article suitably revised, accepted, and tagged with the journal's established quality-standard. The referee reports are informative to the editor, but not to others. On the other hand, the next phase -- which my own journal, BBS, pursued very actively and explicitly, namely open peer commentary -- would be extremely informative for those with the time, patience and expertise to read and weigh it. Otherwise, there too, commentary metrics, including commentary-content metrics (+/-/=) will without the slightest doubt emerge from an Open Access full-text database. http://www.ecs.soton.ac.uk/%7Eharnad/Temp/Kata/bbs.editorial.html http://www.ecs.soton.ac.uk/%7Eharnad/Temp/bbs.valedict.html Stevan Harnad On 9-Apr-06, at 10:02 AM, Jonathan Levitt wrote: > > Steven wrote "1) peer review has already been done for published > articles, > so the issue is not (i) peer review vs. metrics but (ii) peer > review plus > metrics vs. peer review plus metrics plus 'peer re-review' (by the RAE > panels)." . To me the issue is not so much peer review vs. > metrics, but > finding a combination of peer review and citation/usage metrics > that seems > particularly likely to be effective at measuring research quality. > > In principle, the sequel to the RAE could take into account the > accrual > reviews from the journal that published the article (rather than > conduct > their own reviews), but in my view there would need to be stringent > checks > on their authenticity. > > Best regards, > Jonathan. 
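The empirical question running through this exchange -- how closely citation counts track RAE outcomes -- is in the end a rank correlation computed over departments. A minimal sketch of such a check, with invented figures standing in for real departmental citation counts and RAE grades (the 0.7-0.9 and 0.98 correlations quoted above are the reported values and are not reproduced here):

    def rank(values):
        # Average ranks, handling ties.
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        # Spearman rank correlation: Pearson correlation of the rank vectors.
        rx, ry = rank(x), rank(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    citations = [120, 340, 80, 560, 210]   # invented departmental citation counts
    rae_grade = [4, 5, 3, 5, 4]            # invented RAE-style grades
    print(round(spearman(citations, rae_grade), 2))

The same routine could be run per discipline, which is exactly the comparison proposed above for deciding whether the humanities really do need separate treatment.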
>> >>> "Ultimately, a combination of qualitative and quantitative >>> indicators >>> would seem to be the best approach." >> >> What is a "qualitative indicator"? A peer judgment of quality? But >> that quality judgment has already been made by the peer-reviewers of >> the journal in which the article was published -- and every field >> has a >> hierarchy of quality among journals that is known (and may even >> sometimes >> be correlated with the journal's impact factor, if one compares like >> with like in terms of subject matter). What is the point of repeating >> the peer review exercise? And especially if here too it turns out >> to be >> correlated with metrics? Is it? >> >>> "While metrics are likely to be used to simplify the research >>> assessment process, the merits of a qualitative element would >>> be to >>> ensure that over-reliance on quantitative factors does not >>> unfairly >>> discriminate against research which is of good quality but has not >>> been cited as highly as other research due to factors such as its >>> local impact." >> >> Why not ask the panels first to make quality judgments on the >> journals in >> which >> the papers were published, and then see whether those rankings >> correlate >> with the >> author/article citation metrics? and whether they correlate with >> the RAE >> rankings >> based on the present time-consuming qualitative re-evaluations? If >> the >> correlations prove lower than in the other fields (even when >> augmented by >> prior >> funding and other metrics) *then* there may be a case for special >> treatment of >> the humanities. Otherwise, the special pleading on behalf of uncited >> research >> sounds as anecdotal, arbitrary and ad hoc as the claim that high >> citations >> in humanities betoken disagreement rather than usage and importance, >> as in other fields. >> >>> "Unless more appropriate metrics can be developed for the >>> humanities, >>> it would seem that an element of expert peer review must remain in >>> whatever metrics based system emerges from the ashes of the RAE." >> >> It has not yet been shown whether the same metrics that correlate >> highly >> with RAE outcome in other fields (funding, citations) truly fail to >> do so in the humanities. If they do fail to correlate sufficiently, >> there are still many candidate metrics to try out (co-citations, >> downloads, book citations, CiteRank, latency/longevity, exogamy/ >> endogamy, >> hubs/authorities, co-text, etc.) before having to fall back on >> repeating, >> badly, the peer evaluation that should have already have been done, >> properly, by the journals in which the research first appeared. >> >> Stevan Harnad >> >>> Research Assessment Exercise: http://www.rae.co.uk >>> Economic and Social Research Council: http://www.eserc.ac.uk >>> >>> Open access: practical matters become the key focus, EPS >>> Insights, 10 >>> March 2005 >>> http://www.epsltd.com/accessArticles.asp? >>> articleType=1&updateNoteID=1538 >>> >>> Citation Analysis in the Open Access World, imi, September 2004 >>> http://www.epsltd.com/accessArticles.asp? >>> articleType=2&articleID=236&imiID=294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at LEVITT.NET Mon Apr 10 14:30:01 2006 From: jonathan at LEVITT.NET (Jonathan Levitt) Date: Mon, 10 Apr 2006 19:30:01 +0100 Subject: Future UK RAEs: Peer review or Metrics-Based Message-ID: Stavan asked for the rationale for evaluating the referee reports from the journals in which the articles were published. 
I suggest that it could provide additional guidance on the quality of the paper. This is akin to judging university entry not solely on quantitative data such as grades, but also on qualitative items such as references. Best regards, Jonathan.
Cliffe, Rebeca (2006) Research Assessment Exercise: Bowing out in Favour of Metrics. EPS Insights: 3 April 2006 http://technology.guardian.co.uk/weekly/story/0,,1747334,00.html One can hardly quarrel with the following face-valid summary from this article: "The move to a new metrics based system [for RAE] will no doubt please those who see a role for institutional repositories in monitoring research quality. The online environment has thrown up new metrics, which could be used alongside traditional measures such as citations. Usage can be measured at the point of consumption -- the number of "hits" on a particular article can indicate the uptake of the research. Web usage would be expected to be an early indicator of how often the article is later cited. Some believe that institutional repositories should be used as the basis for ongoing assessment of all UK peer-reviewed research output by mandating that researchers should place material in repositories. They argue that this would allow usage to be measured earlier, through downloads of both pre-prints and post-prints. Of course, this course of action would also advance the cause of open access by making this research available free." But there are a few points of detail on which this otherwise accurate report could be made even more useful: (1) peer review has already been done for published articles, so the issue is not (i) peer review vs. metrics but (ii) peer review plus metrics vs. peer review plus metrics plus "peer re-review" (by the RAE panels). It is the re-review of already peer-reviewed publications that is the wasteful practice that needs to be scrapped, given that peer review has already been done, and that metrics are already highly correlated with the RAE ranking outcome anyway. (2) For the fields in which the current RAE outcome is not already highly correlated with metrics, further work is needed; obviously works other than peer-reviewed article or books (e.g., artwork, multimedia) will have to be evaluated in other ways, but for science, engineering, biomedicine, social science, and most fields of humanities, books and articles are the form that research output takes, and they will be amenable to the increasingly powerful and diverse forms of metrics that are being devised and tested. (Many will be tested in parallel with the 2008 RAE, which will still be conducted the old, wasteful way; some of the metrics may also be testable retrospectively against prior RAE outcomes.) "Proponents of a metrics based system point to studies that show how average citation frequencies of articles can closely predict the scores given by the RAE for departmental quality, even though the RAE does not currently count these." True, but the highest metric correlate of the present RAE outcome is reportedly prior research funding (0.98). Yet it would be a big mistake to scrap all other metrics and base the RAE rank on just prior funding. That would just generate a massive Matthew Effect and essentially make top-sliced RAE funding redundant with direct competitive research project funding (thereby essentially "bowing out" of the dual RAE/RCUK funding system altogether, reducing it to just research project funding). What is remarkable about the high correlation between citation counts and RAE ranks (0.7 - 0.9), even though the correlation is not quite as high as with prior funding (0.98), is that citations are not presently counted in the RAE (whereas prior funding is)! 
Not only are citations a more independent metric of research performance than prior funding, but counting them directly -- along with the many other candidate metrics -- can enrich and diversify the RAE evaluation, rather than just make it into a self-fulfilling prophecy. "However, metrics tend to work better for the sciences than the humanities. Whereas the citation of science research is seen as an indicator of the quality and impact of the research, in the humanities this is not the case. Humanities research is based around critical discourse and an author may be citing an article simply to disagree with its argument." I don't think this is quite accurate. It might be true that humanities research makes less explicit use of citation counts today than science research. It might even be true that the correlation between citation counts and research productivity and importance is lower in the humanities than in the sciences (though I am not aware of studies to that effect). And it may also be true that citation counts in the humanities are less correlated with RAE rankings than they are in the sciences. But the familiar canard about articles being cited, not because they are valid but important, but in order to disagree with them, has too much the flavour of the a-priori dismissiveness of citation analysis that we hear in *all* disciplines from those who have not really investigated it, or the evidence for/against it, but are simply expressing their own personal prejudices on the subject. Let's see the citation counts for humanities articles and books, and their correlation with other performance indicators as well as RAE rankings rather than dismissing them a-priori on the basis of anecdotes. "Also, an analysis of RAE 2001 submissions revealed that while some 90% of research outputs listed by British researchers in the fields of Physics and Chemistry were mapped by ISI data, in Law the figure was below 10%, according to Ian Diamond of the Economic and Social Research Council (ESRC) (Oxford Workshop on the use of Metrics in Research Assessment)." I am not sure what "mapped by ISI data" means, but if it means that ISI does not cover enough of the pertinent journals in Law, then the empirical question is: what are the pertinent journals? can citation counts be derived from their online versions, using the publishers' websites and/or subscribing institutions' online versions? how well does this augmented citation count correlate with the ISI subsample (<10%)? and how well do both correlate with RAE ranking? (Surely ISI coverage should not be the determinant of whether or not a metric is valid.) "Ultimately, a combination of qualitative and quantitative indicators would seem to be the best approach." What is a "qualitative indicator"? A peer judgment of quality? But that quality judgment has already been made by the peer-reviewers of the journal in which the article was published -- and every field has a hierarchy of quality among journals that is known (and may even sometimes be correlated with the journal's impact factor, if one compares like with like in terms of subject matter). What is the point of repeating the peer review exercise? And especially if here too it turns out to be correlated with metrics? Is it? 
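The empirical questions just raised are straightforward to set up as a calculation. What follows is a minimal sketch, with invented departmental figures (not real RAE, ISI or funding data) and assuming scipy is available, of how candidate metrics can be rank-correlated with RAE outcomes; an augmented citation count for a poorly covered field such as Law would simply be a further column.

# Illustration only: how well do candidate metrics track RAE ranks?
# All numbers below are invented for the example.
from scipy.stats import spearmanr

rae_rank      = [1, 2, 3, 4, 5, 6]                    # 1 = top-rated department (hypothetical)
citations_fte = [42.0, 35.5, 28.1, 30.2, 12.4, 9.8]   # citations per staff member (invented)
prior_funding = [5.1, 4.3, 2.9, 2.2, 1.0, 0.4]        # prior research income, GBP millions (invented)

for name, metric in [("citations/FTE", citations_fte),
                     ("prior funding", prior_funding)]:
    rho, p = spearmanr(metric, rae_rank)
    # Metrics are "bigger is better" while rank 1 is best, so a strong
    # predictor shows up here as a large *negative* rho.
    print(f"{name:15s} Spearman rho = {rho:+.2f} (p = {p:.3f})")

Run per discipline, a loop of this kind would answer the questions posed above: how well an augmented citation count agrees with the ISI subsample, and how well both agree with the RAE ranking.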
"While metrics are likely to be used to simplify the research assessment process, the merits of a qualitative element would be to ensure that over-reliance on quantitative factors does not unfairly discriminate against research which is of good quality but has not been cited as highly as other research due to factors such as its local impact." Why not ask the panels first to make quality judgments on the journals in which the papers were published, and then see whether those rankings correlate with the author/article citation metrics? and whether they correlate with the RAE rankings based on the present time-consuming qualitative re-evaluations? If the correlations prove lower than in the other fields (even when augmented by prior funding and other metrics) *then* there may be a case for special treatment of the humanities. Otherwise, the special pleading on behalf of uncited research sounds as anecdotal, arbitrary and ad hoc as the claim that high citations in humanities betoken disagreement rather than usage and importance, as in other fields. "Unless more appropriate metrics can be developed for the humanities, it would seem that an element of expert peer review must remain in whatever metrics based system emerges from the ashes of the RAE." It has not yet been shown whether the same metrics that correlate highly with RAE outcome in other fields (funding, citations) truly fail to do so in the humanities. If they do fail to correlate sufficiently, there are still many candidate metrics to try out (co-citations, downloads, book citations, CiteRank, latency/longevity, exogamy/endogamy, hubs/authorities, co-text, etc.) before having to fall back on repeating, badly, the peer evaluation that should have already have been done, properly, by the journals in which the research first appeared. Stevan Harnad Research Assessment Exercise: http://www.rae.co.uk Economic and Social Research Council: http://www.eserc.ac.uk Open access: practical matters become the key focus, EPS Insights, 10 March 2005 http://www.epsltd.com/accessArticles.asp?articleType=1&updateNoteID=1538 Citation Analysis in the Open Access World, imi, September 2004 http://www.epsltd.com/accessArticles.asp?articleType=2&articleID=236&imiID=294 ------------------------------------------------------------------------------ No virus found in this incoming message. Checked by AVG Free Edition. Version: 7.1.385 / Virus Database: 268.4.0/306 - Release Date: 09/04/2006 -------------- next part -------------- An HTML attachment was scrubbed... URL: From David.Watkins at SOLENT.AC.UK Tue Apr 11 05:08:56 2006 From: David.Watkins at SOLENT.AC.UK (David Watkins) Date: Tue, 11 Apr 2006 10:08:56 +0100 Subject: No subject Message-ID: RAE and Efficiency It does not need a switch to metrics-based analysis to generate a Matthew Effect in the UK RAE. It already exists, because prior research funding is considered an 'output' rather than an 'input'; hence one clear reason for the strong correlation between prior funding and RAE rank. The logic of this is baffling (except as a political power play). Any switch to a more metrics-based approach to 'quality', 'impact' etc. opens up the possibility of using the research funding element to arrive at an efficiency measure by the simple expedient of dividing the chosen rating through by the resource input (in particular, by the previous RAE funding, but since all outputs are normally taken into consideration, so should all input funding). 
That would really level up the playing field - which is why it is never done - and a reason why metrics-based approaches are really contested by the winners in the current system. As a taxpayer, I am at least as interested in the relative efficiency with which two similar departments have used my largesse as in the relative peer esteem. One suspects that outside the few areas where there are genuine economies of scale or scope in research ('Big Science'), there is a strong Pareto effect in operation here, with small amounts of funding at the individual, departmental or institutional level producing most of the benefits, and the marginal advantage of increasing concentration being vanishingly small. DW ************************************************ Professor David Watkins Postgraduate Research Centre Southampton Business School East Park Terrace Southampton SO14 0RH David.Watkins at solent.ac.uk 023 80 319610 (Tel) +44 23 80 31 96 10 (Tel) 02380 33 26 27 (fax) +44 23 80 33 26 27 (fax) From loet at LEYDESDORFF.NET Tue Apr 11 05:59:26 2006 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Tue, 11 Apr 2006 11:59:26 +0200 Subject: In-Reply-To: Message-ID: Dear David, Why don't you make the division? We once did it for the major Dutch universities -- unfortunately the paper is in Dutch at http://www.leydesdorff.net/uva -- and to our surprise the Free University came on top for the social sciences and the humanities. I suppose that input figures (fte) are easily available for the UK, at different levels. Output data can easily be gathered at the Web-of-Science using postal codes. If the results are different from the RAE, it is worth publishing. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681; loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David Watkins > Sent: Tuesday, April 11, 2006 11:09 AM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > RAE and Efficiency > > > It does not need a switch to metrics-based analysis to > generate a Matthew Effect in the UK RAE. It already exists, > because prior research funding is considered an 'output' > rather than an 'input'; hence one clear reason for the strong > correlation between prior funding and RAE rank. > > The logic of this is baffling (except as a political power > play). Any switch to a more metrics-based approach to > 'quality', 'impact' etc. opens up the possibility of using > the research funding element to arrive at an efficiency > measure by the simple expedient of dividing the chosen rating > through by the resource input (in particular, by the previous > RAE funding, but since all outputs are normally taken into > consideration, so should all input funding). > > That would really level up the playing field - which is why > it is never done - and a reason why metrics-based approaches > are really contested by the winners in the current system. > > As a taxpayer, I am at least as interested in the relative > efficiency with which two similar departments have used my > largesse as in the relative peer esteem. 
One suspects that
> outside the few areas where there are genuine economies of
> scale or scope in research ('Big Science'), there is a strong
> Pareto effect in operation here, with small amounts of
> funding at the individual, departmental or institutional
> level producing most of the benefits, and the marginal
> advantage of increasing concentration being vanishingly small.
>
> DW
>
> ************************************************
> Professor David Watkins
> Postgraduate Research Centre
> Southampton Business School
> East Park Terrace
> Southampton SO14 0RH
>
> David.Watkins at solent.ac.uk
> 023 80 319610 (Tel)
> +44 23 80 31 96 10 (Tel)
>
> 02380 33 26 27 (fax)
> +44 23 80 33 26 27 (fax)

From malena at IDICT.CU Tue Apr 11 12:24:27 2006 From: malena at IDICT.CU (malena) Date: Tue, 11 Apr 2006 11:24:27 -0500 Subject: FW: [CHMINF-L] GENERAL: "ghosts" of retracted papers Message-ID:

I have changed my e-mail address; it is now yaniris at idict.cu. How can I get messages delivered to my new address from now on?

----- Original Message ----- From: "Eugene Garfield" To: Sent: Friday, April 07, 2006 4:53 PM Subject: [SIGMETRICS] FW: [CHMINF-L] GENERAL: "ghosts" of retracted papers

> Adminstrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Retractions are linked by the Web of Science to original reports. 23 March 2006
>
> Eugene Garfield, Ph.D.
> Thomson ISI,
> Marie McVeigh and Marion Muff
> Send rapid response to journal:
> Re: Retractions are linked by the Web of Science to original reports.
>
> In response to: Harold C. Sox and Drummond Rennie "Research Misconduct, Retraction, and Cleansing the Medical Literature: Lessons from the Poehlman Case" Annals of Internal Medicine (18 April 2006 Volume 144 Issue 8)
>
> Dear Editor:
>
> The recent article about the importance of integrating retraction notices with their original reports noted their treatment in PubMed, but failed to take into account the procedures followed in the Science Citation Index (SCI), the electronic version of which is included in the Web of Science.
>
> Ever since the SCI was launched in the sixties, published retractions have been indexed by SCI. In each case a citation link was established between the retraction, that is "correction," and the original source article. To find retractions, like all other corrections, all one had to do was conduct a cited reference search based on the author, journal and year of the retracted paper. You would then see a list of all items that cited the original work, including the retractions, which like all other corrections would be coded as such. However, since 1996 the treatment of retractions was amplified by including the notation for the retraction together with the bibliographic citation for the source item. If one does a search on a subject or an author and finds a paper which has been retracted, the retraction can be seen immediately adjacent to the source entry.
>
> Thus, the retraction entry for WS Hwang's paper on "Patient-specific embryonic stem cells derived from human SCNT blastocyst" (Retraction of vol 308, pg 1777, 2005) is followed by SCIENCE 311 (5759): 335-335 JAN 20 2006.
>
> When you conduct a cited reference search on the original paper at Hwang WS, Science, 2005, you immediately see the statement that "this article was retracted see Science 311, 335, Jan. 20, 2006".
>
> In previous generations authors often unwittingly cited retracted research because they did not or could not check citation indexes.
Today there is no excuse since access to PubMed and Web of Science is widely available. > > Eugene Garfield, Chairman Emeritus Marie McVeigh, Senior Manager, JCR & Bibliographic Policy Marion Muff, Bibliographic Policy Manager Institute for Scientific Information Thomson Scientific 3501 Market Street Philadelphia, PA 19104 > > > > > > > > > > -----Original Message----- > From: CHEMICAL INFORMATION SOURCES DISCUSSION LIST [mailto:CHMINF-L at LISTSERV.INDIANA.EDU] On Behalf Of Robert Michaelson > Sent: Friday, April 07, 2006 11:56 AM > To: CHMINF-L at LISTSERV.INDIANA.EDU > Subject: [CHMINF-L] GENERAL: "ghosts" of retracted papers > > Please excuse duplicate posting. > > The current issue of Science (7 April 2006) has two "News Focus" items > regarding retracted papers: on pages 38-43 titled "Cleaning up the paper > trail" by Jennifer Couzin and Katherine Unger, the subtitle reads "Once an > investigation is completed and the publicity dies down, what happens to > fraudulent or suspect papers? In many cases, not much." One paragraph says > "An examination by Science of more than a dozen fraud or suspected fraud > cases spanning 20 years reveals uneven and often chaotic efforts to correct > the scientific literature. Every case has its own peculiarities. Whether > wayward authors confess to fraud; whether investigations are launched at > all, and if they are, whether their scope is broad or narrow; whether fraud > findings are clearly communicated to journals--each of these helps > determine how thorough a mop-up ensues." > > A side-bar "News Focus" item on pages 40-41, "Even retracted papers endure" > by the same two authors (but in reverse order) says in part: > "Like ghosts riffling the pages of journals, retracted papers live on. > Using Thomson Scientific's ISI Web of Knowledge and Google Scholar, Science > found dozens of citations of retracted papers in fields from physics to > cancer research to plant biology. > > "Seventeen of 19 retracted papers co-authored by German cancer researcher > Friedhelm Herrmann have been cited since being retracted, in some cases > nearly a decade after they were pulled. Together, two of those papers were > cited roughly 60 times. Examination of one Nature paper by former Bell Labs > physicist Jan Hendrik Sch?n, published in 2000 and retracted in 2003, > revealed that it's been noted in research papers 17 times since, although > the drop-off after retraction was steep: Prior to being pulled, the paper > was cited 153 times. > > "It's "quite embarrassing," says Richard Smith, former editor of the > British Medical Journal, of references to retracted publications. "If > people cite fraudulent articles, then either their research is going to be > thrown off or something will be wasted," says Paul Friedman, a former dean > at the University of California, San Diego, who oversaw an investigation > into papers by radiologist Robert Slutsky in the mid-1980s. > > "In some cases, citations are "negative": The paper is cited precisely > because it was retracted, and the retraction duly noted in the text. But > those familiar with postretraction citation consider that rare..." > > See this issue of Science for the rest. > > Bob Michaelson > Northwestern University Library > Evanston, Illinois 60208 > USA > rmichael at northwestern.edu > > > CHMINF-L Archives (also to join or leave CHMINF-L, etc.) 
> http://listserv.indiana.edu/archives/chminf-l.html > Search the CHMINF-L archives at: > https://listserv.indiana.edu/cgi-bin/wa-iub.exe?S1=chminf-l > Sponsors of CHMINF-L: > http://www.indiana.edu/~libchem/chminfsupport.htm > > CHMINF-L Archives (also to join or leave CHMINF-L, etc.) > http://listserv.indiana.edu/archives/chminf-l.html > Search the CHMINF-L archives at: > https://listserv.indiana.edu/cgi-bin/wa-iub.exe?S1=chminf-l > Sponsors of CHMINF-L: > http://www.indiana.edu/~libchem/chminfsupport.htm > > > __________ NOD32 1.1475 (20060406) Information __________ > > This message was checked by NOD32 antivirus system. > http://www.eset.com > > From malena at IDICT.CU Tue Apr 11 12:27:42 2006 From: malena at IDICT.CU (malena) Date: Tue, 11 Apr 2006 11:27:42 -0500 Subject: No subject Message-ID: Sorry for cross-pasting. Does anyone know if there is a technique or software that can prevent a document from being downloaded? Thanks in advance for providing the information! -------------- next part -------------- An HTML attachment was scrubbed... URL: From harnad at ECS.SOTON.AC.UK Tue Apr 11 12:30:44 2006 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Tue, 11 Apr 2006 17:30:44 +0100 Subject: Uploading and Downloading: Preventing what's up from being got down In-Reply-To: <002401c65d84$dbbbc4e0$1202a8c0@capitolio.cu> Message-ID: On Tue, 11 Apr 2006, malena wrote: > Does anyone know if there is a technique or software that can prevent > a document from being downloaded? It is not clear to me that your query is appropriate for sigmetrics, which is about measuring access and usage, not blocking it.. Nor is it at all clear why one would want to upload a document so that it could not be downloaded! I'd say the surest way to prevent an article from being downloaded is not to upload it onto the web in the first place (just as the surest way to prevent plagiarism or misquotation is not to publish one's text at all!). Having said that, it *is* possible to upload an article into an OAI-compliant Institutional Repository and to set access to its bibliographic metadata as Open Access but access to its full-text as Restricted Access of No Access, so it cannot be downloaded. The free GNU Eprints software (and also soon the OS Dspace software) will allow you to do that. See: https://secure.ecs.soton.ac.uk/notices/publicnotices.php?notice=902 Publishers have proprietary ways of blocking or hobbling access too. See also Open Text Mining: http://hublog.hubmed.org/archives/001345.html Stevan Harnad American Scientist Open Access Forum http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html From garfield at CODEX.CIS.UPENN.EDU Tue Apr 11 17:11:11 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 11 Apr 2006 17:11:11 -0400 Subject: Henige D. "Discouraging verification: Citation practices across the disciplines" Journal of Scholarly Publishing 37(2): 99-118, January 2006. Message-ID: As an advocate for standardized documentation for fifty years.. I thought the following paper by David Henige would be of interest to SIG-Metrics readers. In this connection see my essay on ?Pageless Documentation: or, What a Difference a Page Makes? http://www.garfield.library.upenn.edu/essays/v8p160y1985.pdf which includes a reprint of ?The Implications of Pageless Documentation? by Roy P. 
Fairfield http://www.garfield.library.upenn.edu/essays/v8p162y1985.pdf _______________________________________________________________ David Henige : e-mail : dhenige at library.wisc.edu AUTHOR : Henige, David TITLE : Discouraging verification: Citation practices across the disciplines SOURCE: Journal of Scholarly Publishing 37(2): 99-118, January 2006. University of Toronto Press Inc. Toronto Cited References: 40 Times Cited: 0 Abstract: The purpose of reference notes in scholarly writing is to provide readers with the opportunity to learn more about an issue or to test an author's credibility. As such, they need to include whatever details are necessary to ensure that access be maximally efficient. These data should always include page numbers for both quotes and close paraphrases. Unfortunately, this practice is remarkably uncommon in the sciences and even the social sciences. Failure to include these data is also a failure of good epistemological practice. In the mathematically-predictive hard sciences, citation is usually 2 viewed as a necessary evil(2). When I failed to locate the cited proposition, I read the book again. Still unable to find it, I tracked down the author and begged her to give me the page numbers. With apologies, she admitted that what I sought was not in the book(3). Which system is selected by an editor or publisher should be determined by judgments on its advantages and disadvantages for the expected readership (4). Publisher: UNIV TORONTO PRESS INC, JOURNALS DIVISION, 5201 DUFFERIN ST, DOWNSVIEW, TORONTO, ON M3H 5T8, CANADA IDS Number: 014AD ISSN: 1198-9742 CITED REFERENCES : BRIT MED J 1 : 1334 1978 CHICAGO MANUAL STYLE : 598 2003 J FIELD ARCHAEOLOGY 28 : 477 2001 J INFORMATION ETHICS 3 : 8 1994 *AM PSYCH ASS PUBL MAN : 120 2001 ADAM D Journals under pressure: Publish, and be damned... 
NATURE 419 : 772 2002 ASANO M IMPROVEMENT OF THE ACCURACY OF REFERENCES IN THE CANADIAN-JOURNAL-OF- ANESTHESIA CANADIAN JOURNAL OF ANAESTHESIA-JOURNAL CANADIEN D ANESTHESIE 42 : 370 1995 BROSS IDJ 50 YEARS FOLLY FRAUD : 1994 CHERNIN E BRIT MED J 297 : 1062 1988 GEHANNO JFO Major inaccuracies in articles citing occupational or environmental medicine papers and their implications JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 93 : 118 2005 GIBALDI J MLA HDB WRITERS RES : 211 1999 GILBERT GN REFERENCING AS PERSUASION SOCIAL STUDIES OF SCIENCE 7 : 113 1977 GREGORY J The popularization and excommunication of Fred Hoyle's "life-from-space" theory PUBLIC UNDERSTANDING OF SCIENCE 12 : 25 2003 HENIGE D HIST AFR 28 : 95 2001 HINCHCLIFF KW ACCURACY OF REFERENCES AND QUOTATIONS IN VETERINARY JOURNALS JOURNAL OF THE AMERICAN VETERINARY MEDICAL ASSOCIATION 202 : 397 1993 HOFFER PC PAST IMPERFECT : 141 2004 HOUSTON CS CBE VIEWS 5 : 13 1982 JEUKEN M EARTH LIFE SCI EDITI 10 : 6 1980 JUDSON HF GREAT BETRAYAL FRAUD : 2004 KOCHEN M HOW WELL DO WE ACKNOWLEDGE INTELLECTUAL DEBTS JOURNAL OF DOCUMENTATION 43 : 54 1987 LIEBMANN M Pueblo settlement, architecture, and social change in the Pueblo Revolt era, AD 1680 to 1696 JOURNAL OF FIELD ARCHAEOLOGY 30 : 45 2005 MACINTYRE S HIST WARS : 162 2003 MANNE R WHITEWASH KEITH WIND : 232 2003 MARK EL B MUS COMP ZOOL HARV 6 : 173 1881 MCLELLAN MF TRUST, BUT VERIFY - THE ACCURACY OF REFERENCES IN 4 ANESTHESIA JOURNALS ANESTHESIOLOGY 77 : 185 1992 NISHINA K ACTA ANAESTHSIOLOGIC 5 : 577 1995 NOVICK P NOBLE DREAM OBJECTIV : 612 1988 OATES S MALICE NONE : 1977 OCONNOR M EDITING SCI BOOKS J : 1978 POPE NN ACCURACY OF REFERENCES IN 10 LIBRARY-SCIENCE JOURNALS RQ 32 : 240 1992 From leo.egghe at UHASSELT.BE Wed Apr 12 06:00:23 2006 From: leo.egghe at UHASSELT.BE (Leo Egghe) Date: Wed, 12 Apr 2006 12:00:23 +0200 Subject: Call for papers new "Journal of Informetrics Message-ID: Dear Colleague, please find in attachment the call for papers for the new Elsevier journal: "Journal of Informetrics". Regards, Leo Egghe -------------- next part -------------- A non-text attachment was scrubbed... Name: 2006014.not.doc Type: application/msword Size: 26112 bytes Desc: not available URL: From bernies at UILLINOIS.EDU Wed Apr 12 13:23:38 2006 From: bernies at UILLINOIS.EDU (Sloan, Bernie) Date: Wed, 12 Apr 2006 12:23:38 -0500 Subject: Microsoft Academic Search vs. Google Scholar Message-ID: Microsoft is unveiling their "Academic Search" product, which looks to be a competitor to Google Scholar, which some people compare to Web of Science, CiteSeer, etc. Just interested to see if anyone has any early comments... >From today's online Chronicle of Higher Education (subscription required): Carlson, Scott. Challenging Google, Microsoft Unveils a Search Tool for Online Scholarly Articles. Today's News. Chronicle of Higher Education. April 12, 2005. (Subscription required). http://chronicle.com/daily/2006/04/2006041201t.htm Also: Lombardi, Candace. Microsoft reveals answer to Google Scholar. CNET News. April 12, 2006. http://tinyurl.com/ghrjd And: Sherman, Chris. Microsoft Launches Windows Live Academic Search. Search Engine Watch. April 12, 2006. http://searchenginewatch.com/searchday/article.php/3598376 Bernie Sloan Senior Information Systems Consultant Consortium of Academic & Research Libraries in Illinois 616 E. 
Green Street, Suite 213 Champaign, IL 61820-5752 Phone: (217) 333-4895 Fax: (217) 265-0454 E-mail: bernies at uillinois.edu From garfield at CODEX.CIS.UPENN.EDU Wed Apr 12 13:36:02 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Wed, 12 Apr 2006 13:36:02 -0400 Subject: Cornelius B, Persson O "Who's who in venture capital research " TECHNOVATION 26 (2): 142-150 FEB 2006 Message-ID: E-mail Addresses: Barbara Cornelius : barbara.cornelius at fek.umu.se Olle Persson : olle.person at soc.umu.se Title: Who's who in venture capital research Author(s): Cornelius B, Persson O Source: TECHNOVATION 26 (2): 142-150 FEB 2006 Document Type: Article Language: English Cited References: 16 Times Cited: 0 Abstract: A bibliometric analysis of research papers in venture capital reveals an increasing interest over time by researchers across a broad spectrum of business disciplines. It also reveals the dominance of North American, particularly American researchers who entered the field early. Interestingly, the analysis demonstrates that two schools of entrepreneurial research compete for dominance in the venture capital framework. Much of the core research, the knowledge base, crosses disciplinary lines but is developed, from there-on, in a discipline specific fashion. Researchers whose primary interest is in finance and economics use quantitative, neo-classical models almost exclusively and publish, with the exception of the most cited authors, solely in economics and finance journals. These researchers tend to be more successful at achieving internal university funding for their projects while the second group, publishing in journals dedicated to management and entrepreneurship research, uses a broader array of theoretical techniques, apply both quantitative and qualitative methodologies and are more often funded externally. The core group of researchers, with reputations supported by large numbers of citations, appear to be able to raise funds both internally (through university bodies) and externally. (c) 2005 Elsevier Ltd. All rights reserved. 
Author Keywords: venture capital; bibliometrics; research front; knowledge base; research funding; citation analysis KeyWords Plus: AUTHOR COCITATION ANALYSIS Addresses: Cornelius B (reprint author), Umea Univ, Dept Business Adm, Umea, SE-90187 Sweden Umea Univ, Dept Business Adm, Umea, SE-90187 Sweden Umea Univ, Inforsk Sociolog Inst, Umea, SE-90187 Sweden E-mail Addresses: barbara.cornelius at fek.umu.se, olle.person at soc.umu.se Publisher: ELSEVIER SCI LTD, THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, OXON, ENGLAND Subject Category: ENGINEERING, INDUSTRIAL; OPERATIONS RESEARCH & MANAGEMENT SCIENCE IDS Number: 017YJ ISSN: 0166-4972 CITED REFERENCES : BASCHA A Convertible securities and optimal exit decisions in venture capital finance JOURNAL OF CORPORATE FINANCE 7 : 285 2001 BEATTIE V BRIT ACCOUNTING REV 37 : 85 2005 EOM SB Mapping the intellectual structure of research in decision support systems through author cocitation analysis (1971-1993) DECISION SUPPORT SYSTEMS 16 : 315 1996 GOLDFARB B Bottom-up versus top-down policies towards the commercialization of university intellectual property RESEARCH POLICY 32 : 639 2003 HE YL Mining a Web Citation Database for author co-citation analysis INFORMATION PROCESSING & MANAGEMENT 38 : 491 2002 LANDSTROM H SCANDINAVIAN J MANAG 17 : 225 2001 LAWRENCE S CRIT PERSPECT 13 : 661 2002 LERNER J The illiquidity puzzle: theory and evidence from private equity JOURNAL OF FINANCIAL ECONOMICS 72 : 3 2004 OKUBO Y The changing pattern of industrial scientific research collaboration in Sweden RESEARCH POLICY 29 : 81 2000 PERSSON O THE INTELLECTUAL BASE AND RESEARCH FRONTS OF JASIS 1986-1990 JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 45 : 31 1994 RINIA EJ Influence of interdisciplinarity on peer-review and bibliometric evaluations in physics research RESEARCH POLICY 30 : 357 2001 SCHUMPETER JA THEORY EC DEV : 1934 SWYGARTHOBAUGH AJ A citation analysis of the quantitative/qualitative methods debate's reflection in sociology research: Implications for library collection development LIBRARY COLLECTIONS ACQUISITIONS & TECHNICAL SERVICES 28 : 180 2004 THOMAS PR Institutional research rankings via bibliometric analysis and direct peer review: A comparative case study with policy implications SCIENTOMETRICS 41 : 335 1998 VASTAG G INT J PROD ECON 81 : 115 2003 WHITE HD Visualizing a discipline: An author co-citation analysis of information science, 1972-1995 JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 49 : 327 1998 From garfield at CODEX.CIS.UPENN.EDU Wed Apr 12 13:39:18 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Wed, 12 Apr 2006 13:39:18 -0400 Subject: Egghe L, Rao IKR, Sahoo BB "Proof of a conjecture of Moed and Garfield on authoritative " Scientometrics 66 (3): 537-549 FEB 2006 Message-ID: E-Mail : Leo Egghe : leo.egghe at uhasselt.be Title: Proof of a conjecture of Moed and Garfield on authoritative references and extension to non-authoritative references Author(s): Egghe L, Rao IKR, Sahoo BB Source: SCIENTOMETRICS 66 (3): 537-549 FEB 2006 Document Type: Article Language: English Cited References: 6 Times Cited: 0 Abstract: In a recent paper [H. F. MOED, E. GARFIELD: In basic science the percentage of "authoritative"references decreases as bibliographies become shorter. Scientometrics 60 (3) (2004) 295-303] the authors show, experimentally, the validity of the statement in the title of their paper. In this paper we give a general informetric proof of it, under certain natural conditions. 
The proof is given both in the discrete and the continuous setting. An easy corollary of this result is that the fraction of non-authoritative references increases as bibliographies become shorter. This finding is supported by a set of data of the journal Information Processing and Management (2002 + 2003) with respect to the fraction of conference proceedings articles in reference lists. KeyWords Plus: SCIENCE Addresses: Egghe L (reprint author), Univ Hasselt, Diepenbeek, B-3590 Belgium Univ Hasselt, Diepenbeek, B-3590 Belgium Indian Stat Inst, DRTC, Bangalore, Karnataka India Univ Antwerp, Antwerp, Belgium E-mail Addresses: leo.egghe at uhasselt.be Publisher: SPRINGER, VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS IDS Number: 008TG ISSN: 0138-9130 CITED REFERENCES EGGHE L INTRO INFORM QUANTIT : 1990 EGGHE L THE DUALITY OF INFORMETRIC SYSTEMS WITH APPLICATIONS TO THE EMPIRICAL LAWS JOURNAL OF INFORMATION SCIENCE 16 : 17 1990 EGGHE L POWER LAWS INFORM PR : 2005 MERTON RK THE MATTHEW EFFECT IN SCIENCE .2. CUMULATIVE ADVANTAGE AND THE SYMBOLISM OF INTELLECTUAL PROPERTY ISIS 79 : 606 1988 MOED HF In basic science the percentage of 'authoritative' references decreases as bibliographies become shorter SCIENTOMETRICS 60 : 295 2004 WEINSTOCK M ENCY LIBRARY INFORMA 5 : 16 1971 From garfield at CODEX.CIS.UPENN.EDU Wed Apr 12 14:06:33 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Wed, 12 Apr 2006 14:06:33 -0400 Subject: Franks AL, Simoes EJ, Singh R, Gray BS "Assessing prevention research impact - A bibliometric analysis " AMERICAN JOURNAL OF PREVENTIVE MEDICINE 30 (3): 211-216 MAR 2006 Message-ID: E-mail Addresses: afranks at cdc.gov Title: Assessing prevention research impact - A bibliometric analysis Author(s): Franks AL, Simoes EJ, Singh R, Gray BS Source: AMERICAN JOURNAL OF PREVENTIVE MEDICINE 30 (3): 211-216 MAR 2006 Document Type: Article Language: English Cited References: 13 Times Cited: 0 Abstract: Background: This study was undertaken to explore a bibliometric approach to assessing the impact of selected prevention research center (PRC) peer- reviewed publications. Methods: The 25 eligible PRCs were asked to submit 15 papers that they considered the most important to be published in the decade 1994-2004. journal articles (n =227) were verified in 2004 and categorized: 73% were research reports, 10% discussion articles, 9% dissemination articles, and 7% review articles. Results: Only 189 articles (83%) were searchable via the Institute of Scientific Information (ISI), Web of Science databases for citation tracking in 2004. These 189 articles were published in 76 distinct journals and subsequently, cited 4628 times (range 0 to 1523) in 1013 journals. Articles published before 2001 were cited a median of 14 times each. Publishing journals had a median ISI impact factor of 2.6, and ISI half- life of 7.2. No suitable benchmarks were available for comparison. The PRC influence factor (number of PRCs that considered a journal highly influential) was only weakly correlated with the ISI impact factor and was not correlated with half-life. Conclusions: Conventional bibliometric analysis to assess the scientific impact of public health prevention research is feasible, but of limited utility because of omissions from ISI's databases, and because citation benchmarks for prevention research have not been established: these problems can and should be addressed. 
Assessment of impact on public health practice, policy, or on the health of populations, will require more than a bibliometric approach. Addresses: Franks AL (reprint author), Ctr Dis Control & Prevent, Natl Ctr Chron Dis Prevent & Hlth Promot, Div Adult & Community Hlth, MS K-45,4770 Buford Highway, Atlanta, GA 30341 USA Ctr Dis Control & Prevent, Natl Ctr Chron Dis Prevent & Hlth Promot, Div Adult & Community Hlth, Atlanta, GA 30341 USA Prevent Res Ctr Program, Atlanta, GA USA Off Workforce & Career Dev, Atlanta, GA USA E-mail Addresses: afranks at cdc.gov Publisher: ELSEVIER SCIENCE INC, 360 PARK AVE SOUTH, NEW YORK, NY 10010- 1710 USA IDS Number: 016YN ISSN: 0749-3797 CITED REFERENCES : *COSMOS CORP NAT EV PLAN CDCS PRE : 2003 *I MED LINK RES PUBL HLTH P : 1997 *THOMS SCI GLOSS THOMS SCI TERM *THOMS SCI INF J CIT REP *THOMS SCI ISI J CIT REP ANDERSON LA AM PUBL HLTH ASS 130 : 2002 FOSTER WR IMPACT FACTOR AS THE BEST OPERATIONAL MEASURE OF MEDICAL JOURNALS LANCET 346 : 1301 1995 GARFIELD E WHICH MEDICAL JOURNALS HAVE THE GREATEST IMPACT ANNALS OF INTERNAL MEDICINE 105 : 313 1986 GARFIELD E Journal impact factor: a brief review CANADIAN MEDICAL ASSOCIATION JOURNAL 161 : 979 1999 LIU JLY Quality of impact factors of general medical journals - Research quality can be assessed by using combination of approaches BRITISH MEDICAL JOURNAL 326 : 931 2003 NAKAYAMA T Comparison between impact factors and citations in evidence-based practice guidelines JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 290 : 755 2003 PORTA M Commentary I - The bibliographic "impact factor", the total number of citations and related bibliometric indicators: the need to focus on journals of public health and preventive medicine SOZIAL-UND PRAVENTIVMEDIZIN 49 : 15 2004 SEGLEN PO Why the impact factor of journals should not be used for evaluating research BRITISH MEDICAL JOURNAL 314 : 498 1997 From quentinburrell at MANX.NET Wed Apr 12 15:14:01 2006 From: quentinburrell at MANX.NET (Quentin L. Burrell) Date: Wed, 12 Apr 2006 20:14:01 +0100 Subject: Call for papers new "Journal of Informetrics Message-ID: Leo Congratulations! Do you have any early "information for authors" regarding format, etc? Will it be the same as IP&M, for instance? Best wishes Quentin Dr Quentin L Burrell Isle of Man International Business School The Nunnery Old Castletown Road Douglas Isle of Man IM2 1QB via United Kingdom q.burrell at ibs.ac.im www.ibs.ac.im ----- Original Message ----- From: "Leo Egghe" To: Sent: Wednesday, April 12, 2006 11:00 AM Subject: [SIGMETRICS] Call for papers new "Journal of Informetrics > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear Colleague, > > please find in attachment the call for papers for the new Elsevier > journal: > "Journal of Informetrics". > > Regards, Leo Egghe > From Christina.Pikas at JHUAPL.EDU Wed Apr 12 15:28:36 2006 From: Christina.Pikas at JHUAPL.EDU (Pikas, Christina K.) Date: Wed, 12 Apr 2006 15:28:36 -0400 Subject: Microsoft Academic Search vs. Google Scholar Message-ID: My early comment as related to this forum is that it doesn't attempt citations, I believe. It looks at "authority", but I'm not exactly sure how that works. See this interesting quote from their page: How do you determine relevance? Are you using citation counts in the relevance ranking? 
We are determining relevance based on the following two areas, as determined by a Microsoft algorithm:
# Quality of match of the search term with the content of the paper
# Authoritativeness of the paper.
Currently, we are not using citation count as a factor in determining relevance. Among the many reasons that led us to this decision was the fact that we wanted to have an accurate and credible citation count to be used in the relevance ranking. User trust of the relevance ranking algorithm requires a very credible, trusted citation count, and we will revisit the inclusion of Microsoft derived citation count in the relevance ranking algorithm at a later date as our technology improves further.

It is encouraging, from a librarian standpoint, that they are working with the Open URL resolvers -- even if JHU isn't included yet.

Christina K. Pikas, MLS R.E. Gibson Library & Information Center The Johns Hopkins University Applied Physics Laboratory Voice 240.228.4812 (Washington), 443.778.4812 (Baltimore) Fax 443.778.5353

-----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Sloan, Bernie Sent: Wednesday, April 12, 2006 1:24 PM To: SIGMETRICS at listserv.utk.edu Subject: [SIGMETRICS] Microsoft Academic Search vs. Google Scholar

Microsoft is unveiling their "Academic Search" product, which looks to be a competitor to Google Scholar, which some people compare to Web of Science, CiteSeer, etc. Just interested to see if anyone has any early comments...

From today's online Chronicle of Higher Education (subscription required): Carlson, Scott. Challenging Google, Microsoft Unveils a Search Tool for Online Scholarly Articles. Today's News. Chronicle of Higher Education. April 12, 2005. (Subscription required). http://chronicle.com/daily/2006/04/2006041201t.htm

Also: Lombardi, Candace. Microsoft reveals answer to Google Scholar. CNET News. April 12, 2006. http://tinyurl.com/ghrjd

And: Sherman, Chris. Microsoft Launches Windows Live Academic Search. Search Engine Watch. April 12, 2006. http://searchenginewatch.com/searchday/article.php/3598376

Bernie Sloan Senior Information Systems Consultant Consortium of Academic & Research Libraries in Illinois 616 E. Green Street, Suite 213 Champaign, IL 61820-5752 Phone: (217) 333-4895 Fax: (217) 265-0454 E-mail: bernies at uillinois.edu

From ailin_martinez at YAHOO.ES Thu Apr 13 07:44:33 2006 From: ailin_martinez at YAHOO.ES (Ailín Martínez Rodríguez) Date: Thu, 13 Apr 2006 13:44:33 +0200 Subject: In-Reply-To: <002401c65d84$dbbbc4e0$1202a8c0@capitolio.cu> Message-ID:

Malena, I am subscribed to Sigmetrics; the truth is that so many messages arrive that sometimes I do not have time to read them all, but at least I have them there. Tell me about the venue and the course. I am sorry we did not manage to meet the other day. Kisses, ailin

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From enrique.wulff at ICMAN.CSIC.ES Thu Apr 13 12:11:30 2006 From: enrique.wulff at ICMAN.CSIC.ES (Enrique Wulff. Cádiz (CSIC).) Date: Thu, 13 Apr 2006 18:11:30 +0200 Subject: Microsoft Academic Search vs.
Google Scholar In-Reply-To: Message-ID:

Dear Friends, Also I have a reference that reports on Google Scholar performance (perhaps interesting for those in the marine science information area). Pauly, Daniel & Stergiou, Konstantinos. Equivalence of results from two citation analyses: Thomson ISI's Citation Index and Google's Scholar service [pdf]. Ethics in Science and Environmental Politics. 2005:33-35. Best regards Enrique Wulff.

__________________________________________________________ Marine Sciences Institute from Andalusia (ICMAN) Spanish Council for Scientific Research (CSIC) Campus Univ. Río San Pedro 11510 Puerto Real (Cádiz) Spain Tel: 34-956832612 Fax: 34-956834701 C.Elect.: Bibmar at cica.es URL: http://www.icman.csic.es/ __________________________________________________________ Cadiz: http://www.cadiz-virtual.com/web_ing/home.htm

At 19:23 12/04/2006, you wrote:
>Adminstrative info for SIGMETRICS (for example unsubscribe):
>http://web.utk.edu/~gwhitney/sigmetrics.html
>
>Microsoft is unveiling their "Academic Search" product, which looks to
>be a competitor to Google Scholar, which some people compare to Web of
>Science, CiteSeer, etc.
>
>Just interested to see if anyone has any early comments...
>
> From today's online Chronicle of Higher Education (subscription
>required):
>
>Carlson, Scott. Challenging Google, Microsoft Unveils a Search Tool for
>Online Scholarly Articles. Today's News. Chronicle of Higher Education.
>April 12, 2005. (Subscription required).
>http://chronicle.com/daily/2006/04/2006041201t.htm
>
>Also:
>
>Lombardi, Candace. Microsoft reveals answer to Google Scholar. CNET
>News. April 12, 2006.
>http://tinyurl.com/ghrjd
>
>And:
>
>Sherman, Chris. Microsoft Launches Windows Live Academic Search. Search
>Engine Watch. April 12, 2006.
>http://searchenginewatch.com/searchday/article.php/3598376
>
>Bernie Sloan
>Senior Information Systems Consultant
>Consortium of Academic & Research Libraries in Illinois
>616 E. Green Street, Suite 213
>Champaign, IL 61820-5752
>
>Phone: (217) 333-4895
>Fax: (217) 265-0454
>E-mail: bernies at uillinois.edu

From dgoodman at PRINCETON.EDU Thu Apr 13 14:00:00 2006 From: dgoodman at PRINCETON.EDU (David Goodman) Date: Thu, 13 Apr 2006 14:00:00 -0400 Subject: Microsoft Academic Search vs. Google Scholar In-Reply-To: <934BB0B6D8A02C42BC6099FDE8149CCD3B4941@aplesjustice.dom1.jhuapl.edu> Message-ID:

Preliminary data only: There is substantial online help and information. The link to the specific help for this point is:

They do claim to use citations for ranking, if you mean links from other sites, not references within the paper. (btw, references within the paper do not seem to count in the ranking, but they do seem to be searched.)

"Using results ranking, you can put emphasis on different factors to get a different set of results for the same search. Type your search terms in the search text box, and then click Search Builder. Click Results ranking, and then move the sliders in the direction you want: [3 choices]
Updated recently -- To modify your search to add emphasis to sites that have been recently added to the search index, move the left slider up.
Very popular -- To add emphasis to sites by the number of other sites that link to them, move the middle slider up.
Approximate match -- To put the most emphasis on the match between your exact search words and your results, move the right slider down.
Note: This will de-emphasize the first two slider rankings."
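The quoted help text amounts to a weighted combination of three signals. The toy function below is not the actual Windows Live Academic ranking algorithm (which is not public); it is only a sketch, with invented weights and scores, of how emphasis on recency, inbound-link popularity and exact term match could be traded off against one another.

# Toy illustration of the three "sliders" quoted above; weights and
# scores are invented and this is NOT the product's real algorithm.
def rank_score(recency, popularity, term_match,
               w_recency=0.2, w_popularity=0.3, w_match=0.5):
    """Combine three normalised signals (each in [0, 1]) into one score.
    Moving a 'slider' corresponds to changing one of the weights."""
    return w_recency * recency + w_popularity * popularity + w_match * term_match

# Two hypothetical papers: a recent, lightly linked one and an older, heavily linked one.
recent = rank_score(recency=0.9, popularity=0.2, term_match=0.8)
classic = rank_score(recency=0.1, popularity=0.9, term_match=0.8)
print(f"recent paper: {recent:.2f}, classic paper: {classic:.2f}")

# Raising the "updated recently" weight flips the ordering:
print(rank_score(0.9, 0.2, 0.8, w_recency=0.6, w_popularity=0.2, w_match=0.2) >
      rank_score(0.1, 0.9, 0.8, w_recency=0.6, w_popularity=0.2, w_match=0.2))

The point of the sketch is simply that "popularity" here is a link count rather than a citation count, which is why the distinction drawn above between links from other sites and references within the paper matters for anyone hoping to use the service for citation analysis.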
================== My confidence in their help screens is somewhat shaken by "Query rules"--same page "All searches are "AND" searches. Pages that contain at least one of the terms you type are returned." >From limited experiments, they seem to mean 'AND searches, and only pages with both terms are retrieved' rather than 'OR searches, and only pages with both terms are retireved' I reported it,and will keep track to see if they fix it. Dr. David Goodman Associate Professor Palmer School of Library and Information Science Long Island University and formerly Princeton University Library dgoodman at liu.edu dgoodman at princeton.edu ----- Original Message ----- From: "Pikas, Christina K." Date: Wednesday, April 12, 2006 3:32 pm Subject: Re: [SIGMETRICS] Microsoft Academic Search vs. Google Scholar To: SIGMETRICS at listserv.utk.edu > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > My early comment as related to this forum is that it doesn't attempt > citations, I believe. It looks at "authority", but I'm not exactly > surehow that works. See this interesting quote from their page: > > How do you determine relevance? Are you using citation counts in the > relevance ranking? > We are determining relevance based on the following two areas, as > determined by a Microsoft algorithm: > # Quality of match of the search term with the content of the paper > # Authoritativeness of the paper. > Currently, we are not using citation count as a factor in determining > relevance. Among the many reasons that led us to this decision was the > fact that we wanted to have an accurate and credible citation count to > be used in the relevance ranking. User trust of the relevance ranking > algorithm requires a very credible, trusted citation count, and we > willrevisit the inclusion of Microsoft derived citation count in the > relevance ranking algorithm at a later date as our technology improves > further. > > It is encouraging, from a librarian standpoint, that they are working > with the Open URL resolvers -- even if JHU isn't included yet. > > Christina K. Pikas, MLS > R.E. Gibson Library & Information Center > The Johns Hopkins University Applied Physics Laboratory > Voice 240.228.4812 (Washington), 443.778.4812 (Baltimore) > Fax 443.778.5353 > > > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Sloan, Bernie > Sent: Wednesday, April 12, 2006 1:24 PM > To: SIGMETRICS at listserv.utk.edu > Subject: [SIGMETRICS] Microsoft Academic Search vs. Google Scholar > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Microsoft is unveiling their "Academic Search" product, which looks to > be a competitor to Google Scholar, which some people compare to Web of > Science, CiteSeer, etc. > > Just interested to see if anyone has any early comments... > > From today's online Chronicle of Higher Education (subscription > required): > > Carlson, Scott. Challenging Google, Microsoft Unveils a Search Tool > forOnline Scholarly Articles. Today's News. Chronicle of Higher > Education.April 12, 2005. (Subscription required). > http://chronicle.com/daily/2006/04/2006041201t.htm > > Also: > > Lombardi, Candace. Microsoft reveals answer to Google Scholar. CNET > News. April 12, 2006. > http://tinyurl.com/ghrjd > > And: > > Sherman, Chris. Microsoft Launches Windows Live Academic Search. > SearchEngine Watch. April 12, 2006. 
> http://searchenginewatch.com/searchday/article.php/3598376 > > Bernie Sloan > Senior Information Systems Consultant > Consortium of Academic & Research Libraries in Illinois > 616 E. Green Street, Suite 213 > Champaign, IL 61820-5752 > > Phone: (217) 333-4895 > Fax: (217) 265-0454 > E-mail: bernies at uillinois.edu > From harnad at ECS.SOTON.AC.UK Thu Apr 13 18:37:55 2006 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Thu, 13 Apr 2006 23:37:55 +0100 Subject: Future UK RAEs to be Metrics-Based In-Reply-To: Message-ID: The following is a comment on an article that appeared in today's Independent about the RAE and Metrics (followed by a response to another piece in the Independent about Web Metrics). Re: Hodges, L. (2006) The RAE is dead - long live metrics The Independent April 13 2006 http://education.independent.co.uk/higher/article357343.ece Absolutely no one can justify (on the basis of anything but superstition) holding onto an expensive, time-wasting assessment system such as the RAE, which produces rankings that are almost perfectly correlated with, hence almost exactly predictable from, inexpensive objective metrics such as prior funding, citations and research student counts. http://www.hm-treasury.gov.uk/media/1E1/5E/bud06_science_332.pdf Hence the only two points worth discussing are (1) which metrics to use and (2) how to adapt them to each discipline. The web has opened up a vast and rich universe of potential metrics that can be tested for their validity and predictive power: citations, downloads, co-citations, immediacy, growth-rate, longevity, interdisciplinarity, user tags/commentaries and much, much more. These are all measures of research uptake, usage, impact, progress and influence. They have to be tested and weighted according to the unique profile of each discipline (or even subdiscipline). Prior funding is highly predictive, but it also generates a Matthew Effect: a self-fulfilling, self-perpetuating prophecy. I would not for a moment believe, however, that any (research) discipline lacks predictive metrics of research performance altogether. Even less credible is the superstitious notion that the only way (or the best) to evaluate research is for RAE panels to re-do, needlessly, locally, the peer review that has already been done, once, by the journals in which the research has already been published. The urgent feeling that this human re-review is necessary has nothing to do with the RAE or metrics in particular; it is just a generic human superstition (and irrationality) about population statistics versus my own unique, singular case: http://www.ecs.soton.ac.uk/~harnad/Hypermail/Explaining.Mind96/0221.html http://en.wikipedia.org/wiki/Base_rate_fallacy ----- Re: Diary (13 April 2006, other article, same issue) http://education.independent.co.uk/higher/ > 'A new international university ranking has been launched and > the UK has 25 universities in the world's top 300. The results > are based on the popularity of the content of their websites on > other university campuses. The G Factor is the measure of how > many links exist to each university's website from the sites > of 299 other research-based universities, as measured by 90,000 > google searches. No British university makes it into the Top 10; > Cambridge sits glumly just outside at no 11. Oxford languishes at > n.20. In a shock Southampton University is at no.25 and third in > Britain. Can anyone explain this? Answers on a postcard. 
The rest > of the UK Top 10, is UCL, Kings, Imperial, Sheffield, Edinburgh, > Bristol and Birmingham.' The reasons for the University of Southampton's extremely high overall webmetric rating are four: (1) U. Southampton's university-wide research performance (2) U. Southampton's Electronics and Computer Science (ECS) Department's involvement in many high-profile web projects and activities (among them the semantic web work of the web's inventor, ECS Prof. Tim Berners-Lee, the Advanced Knowledge Technologies (AKT) work of Prof. Nigel Shadbolt, and the pioneering web linking contributions of Prof. Wendy Hall) (3) The fact that since 2001 U. Southampton's ECS has had a mandate requiring that all of its research output be made Open Access on the web, and that Southampton has a university-wide self-archiving policy (soon to become a mandate) too (4) The fact that maximising access to research (by self-archiving it free for all on the web) maximises research usage and impact (and hence web impact) This all makes for an extremely strong Southampton web presence, as reflected in such metrics as the "G factor". http://www.universitymetrics.com/tiki-index.php?page=G-Factor which places Southampton 3rd in the UK and 25th among the world's top 300 universities or http://www.webometrics.info/ which places Southampton 6th in UK, 9th in Europe, and 80th among the top 3000 universities it indexes. Of course, these are extremely crude metrics, but Southampton itself is developing more powerful and diverse metrics for all Universities in preparation for the newly announced metrics-only Research Assessment Exercise. http://openaccess.eprints.org/index.php?/archives/75-guid.html Stevan Harnad American Scientist Open Access Forum http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html ------------------------- Some references: Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16. http://cogprints.org/1683/ Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10). http://eprints.ecs.soton.ac.uk/7717/ Harnad, S. (2003) Why I believe that all UK research output should be online. Times Higher Education Supplement. Friday, June 6 2003. http://eprints.ecs.soton.ac.uk/7728/ Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. http://www.ariadne.ac.uk/issue35/harnad/ Berners-Lee, T., De Roure, D., Harnad, S. and Shadbolt, N. (2005) Journal publishing and author self-archiving: Peaceful Co-Existence and Fruitful Collaboration. http://eprints.ecs.soton.ac.uk/11160/ Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Association for Information Science and Technology (JASIST). http://eprints.ecs.soton.ac.uk/10713/ Shadbolt, N., Brody, T., Carr, L. & Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N., Eds. Open Access: Key Strategic, Technical and Economic Aspects Chandos. 
http://www.ecs.soton.ac.uk/~harnad/Temp/shad-bch.doc Citebase impact ranking engine http://citebase.eprints.org/ Beans and Bean Counters http://www.thes.co.uk/search/story.aspx?story_id=2023828 Bibliography of Findings on the Open Access Impact Advantage http://opcit.eprints.org/oacitation-biblio.html Stevan Harnad AMERICAN SCIENTIST OPEN ACCESS FORUM: A complete Hypermail archive of the ongoing discussion of providing open access to the peer-reviewed research literature online (1998-2005) is available at: http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/ To join or leave the Forum or change your subscription address: http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html Post discussion to: american-scientist-open-access-forum at amsci.org UNIVERSITIES: If you have adopted or plan to adopt an institutional policy of providing Open Access to your own research article output, please describe your policy at: http://www.eprints.org/signup/sign.php UNIFIED DUAL OPEN-ACCESS-PROVISION POLICY: BOAI-1 ("green"): Publish your article in a suitable toll-access journal http://romeo.eprints.org/ OR BOAI-2 ("gold"): Publish your article in a open-access journal if/when a suitable one exists. http://www.doaj.org/ AND in BOTH cases self-archive a supplementary version of your article in your institutional repository. http://www.eprints.org/self-faq/ http://archives.eprints.org/ http://openaccess.eprints.org/ From eugene.garfield at THOMSON.COM Fri Apr 14 13:45:26 2006 From: eugene.garfield at THOMSON.COM (Eugene Garfield) Date: Fri, 14 Apr 2006 13:45:26 -0400 Subject: FW: Ebsco adds citation indexing links Message-ID: company links: EBSCO , , Ipswich, Mass., March 16, 2006 EBSCO has improved access to citation indexing on EBSCOhost databases for literature in the following disciplines: business, communication & mass media, nursing & allied health and sociology. New searchable cited references Business Source(r) Complete and Business Source(r) Premier provides searchable cited references for 1,200 journals, backfiles have also been added to this resource. Communication & Mass Media Complete(tm) (CMMC), scholarly communication and communications database provides searchable cited references for 307 journals. Citation indexing backfiles were also created. All subscribers to CMMC receive this feature at no additional charge. CINAHL(r) Plus with Full Text, nursing and allied health research database contains searchable cited references for 1,174 journals. SocINDEX and SocINDEX with Full Text, sociology journals index provides citation indexing for nearly 2,000 titles, many back to volume 1, issue 1 (1800s). Subscribers to SocINDEX or SocINDEX with Full Text receive this data at no additional charge. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 918 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 1032 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.gif Type: image/gif Size: 993 bytes Desc: image003.gif URL: From eugene.garfield at THOMSON.COM Tue Apr 18 13:53:47 2006 From: eugene.garfield at THOMSON.COM (Eugene Garfield) Date: Tue, 18 Apr 2006 13:53:47 -0400 Subject: Commission study addresses Europe's scientific publication system Message-ID: Subject: ] Commission study addresses Europe's scientific publication system The European Commission has published a study which examines the scientific publication system in Europe. Scientific publication ensures that research results are made known, which is a pre-condition for further research and for turning this knowledge into innovative products and services. Scientific publication is also an important part of certifying the quality of the work done. Given the scarcity of public money to provide access to scientific publications, there is a strong interest in seeing that Europe has an effective and functioning system for scientific publication that speedily delivers results to a wide audience. The report, drawn up for the Commission by a panel of experts, makes a number of recommendations for future action, including improving access to publicly-funded research. All interested parties are invited to send feedback on the report's findings to the Commission, to provide input for a conference on scientific publication to be held in autumn 2006. European Science and Research Commissioner, Janez Poto?nik said "It is in all our interests to find a model for scientific publication that serves research excellence. We are ready to work with readers, authors, publisher and funding bodies to develop such a model." The study looked at the economic and technical evolution of scientific publication markets in Europe. It was commissioned as a contribution to on-going public debate on the conditions of access to and dissemination of scientific publications. There have been significant changes in the landscape over the last 30 years, in particular the rise of internet use. The study confirms scientific journals as an essential channel for the dissemination of scientific knowledge. With large amounts of public money invested in research, it becomes increasingly important for publications reporting on that research to be accessible to as wide a public as possible. The study therefore makes a number of recommendations for future action, including: * Guaranteed public access to publicly-funded research, at the time of publication and also long-term * A "level-playing field" so that different business models in publishing can compete fairly in the market * Ranking scientific journals by quality, defined more widely than pure scientific excellence, but also taking into account factors such as management of copyright, search facilities and archiving * Developing pricing strategies that promote competition in the journal market * Scrutinising major mergers that may take place in this sector in the future * Promoting the development of electronic publication, for example by eliminating unfavourable tax treatment of electronic publications and encouraging public funding and public-private partnerships to create digital archives in areas with little commercial investment. The European Commission is keen to hear the views of all interested parties. It is therefore calling for reactions to the study, and contributions on other issues linked to scientific publications. Contributions should be sent to rtd-scientific-publication at cec.eu.int by 1st June 2006. 
The study and its public feedback will be at the centre of a conference on scientific publication to be held in autumn 2006. SINAPSE, the web interface between the scientific community and Europe's policy-makers, will also host a debate on the subject. (SINAPSE's website: http://europa.eu.int/sinapse)

The study was carried out by a consortium led by Professor Mathias Dewatripont of the "Université Libre de Bruxelles". The study is available for downloading at: http://europa.eu.int/comm/research/science-society/pdf/scientific-publication-study_en.pdf The press release is at http://europa.eu.int/rapid/pressReleasesAction.do?reference=IP/06/414

From harnad at ECS.SOTON.AC.UK Tue Apr 18 15:40:54 2006 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Tue, 18 Apr 2006 15:40:54 -0400 Subject: Commission study addresses Europe's scientific publication system In-Reply-To: <311174B69873F148881A743FCF1EE53701825191@TSHUSPAPHIMBX02.ERF.THOMSON.COM> Message-ID:

On 18-Apr-06, at 1:53 PM, Eugene Garfield wrote:

> Subject: Commission study addresses Europe's scientific publication system
> The European Commission has published a study
> The study... makes a number of recommendations for future action, including:
> * Guaranteed public access to publicly-funded research, at the time of publication and also long-term...
> http://europa.eu.int/comm/research/science-society/pdf/scientific-publication-study_en.pdf

Given that Gene has posted the above to Sigmetrics, here is some pertinent follow-up:

Suggestion for Optimising the European Commission's Recommendation to Mandate Open Access Archiving of Publicly-Funded Research

The European Commission "Study on the Economic and Technical Evolution of the Scientific Publication Markets in Europe" has made the following policy recommendation:

RECOMMENDATION A1. GUARANTEE PUBLIC ACCESS TO PUBLICLY-FUNDED RESEARCH RESULTS SHORTLY AFTER PUBLICATION. "Research funding agencies have a central role in determining researchers' publishing practices. Following the lead of the NIH and other institutions, they should promote and support the archiving of publications in open repositories, after a (possibly domain-specific) time period to be discussed with publishers. This archiving could become a condition for funding. The following actions could be taken at the European level: (i) Establish a European policy mandating published articles arising from EC-funded research to be available after a given time period in open access archives [emphasis added], and (ii) Explore with Member States and with European research and academic associations whether and how such policies and open repositories could be implemented."

The European Commission's Recommendation A1 is very welcome and potentially very important, but it can be made incomparably more effective with just one very simple but critical revision concerning what needs to be deposited, when (hence what can and cannot be delayed):

For the purposes of Open Access, a research paper has two elements: (i) the whole document itself (called the "full-text") and (ii) its bibliographic metadata (its title, date, details of the authors, their institutions, the abstract and so forth). This bibliographic information can exist as an independent entity in its own right and serves to alert would-be users to the existence of the full-text article itself.
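[To make the two elements concrete, here is a minimal sketch in Python of a deposit whose bibliographic metadata are always exposed while the full-text carries a separate access setting. The field names and methods are hypothetical illustrations for this digest only, not the schema of EPrints or of any actual repository software.]

from dataclasses import dataclass
from datetime import date

# Hypothetical illustration only: the two-step deposit/access-setting logic
# is sketched here with invented field names.

@dataclass
class Deposit:
    title: str
    authors: list
    journal: str
    accepted: date
    fulltext_file: str               # deposited immediately upon acceptance
    fulltext_access: str = "closed"  # "open" or "closed"; metadata is always open

    def metadata(self) -> dict:
        # The bibliographic record is exposed to everyone from the moment of deposit.
        return {"title": self.title, "authors": self.authors,
                "journal": self.journal, "accepted": self.accepted.isoformat()}

    def open_fulltext(self) -> None:
        # The access setting can be flipped later, e.g. when a publisher embargo ends.
        self.fulltext_access = "open"

d = Deposit("Sample article", ["A. Author"], "Sample Journal",
            date(2006, 4, 18), "article.pdf")
print(d.metadata())   # visible immediately
d.open_fulltext()     # later, when permitted

[The point of the separation is that the access flag can be changed later without re-depositing anything, while the metadata record is public from day one.]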
EC Recommendation A1 should distinguish between first (a) depositing the full text of a journal article in the author's Institutional Repository (preferably, or otherwise in any other OAI-compliant Open Access Repository, henceforth referred to collectively as OARs; see Swan et al. 2005) and then deciding whether to (b1) allow Open Access to that full-text deposit, or to (b2) allow Open Access only to its bibliographic metadata and not the full-text.

EC Recommendation A1 should accordingly specify the following: Depositing the full-text of all journal articles in the author's OAR is mandatory immediately upon acceptance for publication for all EC-funded research findings, without exception. In addition, allowing Open Access to the article's bibliographic metadata at the time of deposit (i.e., immediately upon acceptance for publication) is always mandatory. However, allowing Open Access to the full-text of the article itself immediately upon deposit is merely encouraged wherever possible, but not mandatory; the full-text can be made Open Access at a later time if necessary. The OAR software enables the author to allow Open Access either to the whole article or to its bibliographic metadata only.

This separate treatment of the rules for (a) depositing and for (b) access-setting provides authors with the means of abiding by the copyright regulations for the articles published in the 7% of journals that have not yet explicitly given their official green light to authors to provide immediate Open Access through self-archiving (as 93% of journals have already done). Authors can make their full-text Open Access at the time agreed with the publisher simply by changing the access-setting for the deposit at the chosen time. Meanwhile, however, the bibliographic metadata for all articles are and remain openly accessible to everyone from the moment of acceptance for publication, informing users of the existence and whereabouts of the article. During any publisher-imposed embargo period, would-be users who access the metadata and find that they cannot access the full-text can email the author individually to request an eprint -- and the author can then choose to email the eprint to the requester, or not, as he wishes, exactly as authors did in paper reprint days.

The European Commission is urged to make this small but extremely important change in its policy recommendation. It means the difference between immediate 100% Open Access and delayed, embargoed access for years to come.

Pertinent Prior American Scientist Open Access Forum Topic Threads:

2002: "Evolving Publisher Copyright Policies On Self-Archiving" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2351

2003: "Draft Policy for Self-Archiving University Research Output" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2550
"What Provosts Need to Mandate" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#3241
"Recommendations for UK Open-Access Provision Policy" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#3292

2004: "University policy mandating self-archiving of research output" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#3292
"Mandating OA around the corner?"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#3830
"Implementing the US/UK recommendation to mandate OA Self-Archiving" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#3892
"A Simple Way to Optimize the NIH Public Access Policy" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4092

2005: "Comparing the Wellcome OA Policy and the RCUK (draft) Policy" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4549
"New international study demonstrates worldwide readiness for Open Access mandate" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4605
"DASER 2 IR Meeting and NIH Public Access Policy" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4963
"Mandated OA for publicly-funded medical research in the US" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4982

2006: "Mandatory policy report" (2) http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4979
"The U.S. CURES Act would mandate OA" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5046
"Generic Rationale and Model for University Open Access Mandate" http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5216
"U. California: Publishing Reform, University Self-Publishing and Open Access" http://openaccess.eprints.org/index.php?/archives/57-guid.html
"A Simple Way to Optimize the NIH Public Access Policy" http://openaccess.eprints.org/index.php?/archives/64-guid.html
"Optimizing Open Access Guidelines of Deutsche Forschungsgemeinschaft" http://openaccess.eprints.org/index.php?/archives/70-guid.html
"Optimizing MIT's Open Access Policy" http://openaccess.eprints.org/index.php?/archives/74-guid.html
Future UK Research Assessment Exercise (RAE) to be Metrics-Based http://openaccess.eprints.org/index.php?/archives/75-guid.html
Optimizing the European Commission's Recommendation for Open Access Archiving of Publicly-Funded Research http://openaccess.eprints.org/index.php?/archives/78-guid.html

APPENDIX

Why it is so important that research should be deposited immediately, rather than delayed/embargoed. The reasons are six:

(1) Science is done (and funded) in order to be used, not in order to be embargoed.

(2) For fast-moving areas of science especially, the first few months from publication are the most important time for usage and progress through immediate uptake and application to further ongoing research worldwide. Studies show that early usage has a large, permanent effect on research impact (Kurtz et al. 2004; Brody & Harnad 2006). Limiting the possibility of early usage therefore means a large and permanent loss of potential research impact.

(3) If the metadata of all Restricted Access articles are visible worldwide immediately alongside all Open Access articles, individual researchers emailing the author for an eprint of the full text will maximise early uptake and usage almost as rapidly and effectively as setting access privileges to Open Access immediately. The OAR software is designed to simplify and accelerate this to just a few keystrokes.

(4) For this, it is critical that the deposit of both the full-text and bibliographic metadata should be immediate (upon acceptance for publication) and not delayed.

(5) If the EC policy were instead to allow the deposit to be delayed for 6-12 months or more, the result would be to entrench instead of to eliminate usage-denial for research findings that were made and published in order to be used, immediately.
(6) Publisher copyright agreements concern making the full text publicly accessible, whereas authors depositing their full-texts in their own OAR without public access -- and emailing individual eprints on request from fellow-researchers -- constitutes Fair Use.

(a) Self-archiving increases research usage and impact by 25-250% http://opcit.eprints.org/oacitation-biblio.html
(b) But only 15% of researchers as yet self-archive spontaneously http://citebase.eprints.org/isi_study/
(c) 95% of researchers report they will comply if self-archiving is mandated by their institution and/or research funder http://eprints.ecs.soton.ac.uk/11006/
(d) 93% of journals already officially endorse author self-archiving http://romeo.eprints.org/stats.php
(e) For the remaining 7% of articles, immediate deposit can still be mandated, and for the time being access can be provided by emailing the eprint: http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0604&L=jisc-repositories&T=0&O=D&P=1908

Open Access maximises research access, usage, impact and progress, maximising benefits to research itself, to researchers, their institutions, their funders, and those who fund the funders, i.e., the tax-paying public for whose ultimate benefit the research is done. Access to the research corpus also provides secondary benefits to students, teachers, the developing world, industry, and the general public.

ROAR (Registry of Open Access Repositories) tracks the Institutional and Central Open Access Repositories (OARs) worldwide, as well as the individual growth of each: http://archives.eprints.org/ (see also OpenDOAR, the Directory of Open Access Repositories, http://www.opendoar.org/ , which provides a human-confirmed subset of ROAR plus classification and coverage details, in alliance with DOAJ, the Directory of Open Access Journals, http://www.doaj.org/ ).

ROARMAP (Registry of Open Access Repository Material Access Policies) tracks the adoption of Open Access Self-Archiving Policies in institutions worldwide: http://www.eprints.org/signup/fulllist.php

ROMEO (Directory of Journal Open Access Self-Archiving Policies) tracks the growth in the number of journals giving their "green light" to author self-archiving: 93% of the over 9000 journals so far endorse some form of immediate author self-archiving: http://romeo.eprints.org/stats.php

REFERENCES

Brody, T. and Harnad, S. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology. http://eprints.ecs.soton.ac.uk/10713

Harnad, S. (2006) Publish or Perish - Self-Archive to Flourish: The Green Route to Open Access. ERCIM News 64. http://www.ercim.org/publication/Ercim_News/enw64/harnad.html

Kurtz, M. J., Eichhorn, G., Accomazzi, A., Grant, C. S., Demleitner, M., Murray, S. S. (2004) The Effect of Use and Access on Citations. Information Processing and Management 41(6): 1395-1402. http://cfa-www.harvard.edu/~kurtz/IPM-abstract.html

Swan, A., Needham, P., Probets, S., Muir, A., Oppenheim, C., O'Brien, A., Hardy, R., Rowland, F. and Brown, S. (2005) Developing a model for e-prints and open access journal content in UK further and higher education. Learned Publishing 18(1): 25-40. http://eprints.ecs.soton.ac.uk/11000

ABSTRACT: A study carried out for the UK Joint Information Systems Committee examined models for the provision of access to material in institutional and subject-based archives and in open access journals.
Their relative merits were considered, addressing not only technical concerns but also how e-print provision (by authors) can be achieved -- an essential factor for an effective e-print delivery service (for users). A "harvesting" model is recommended, where the metadata of articles deposited in distributed archives are harvested, stored and enhanced by a national service. This model has major advantages over the alternatives of a national centralized service or a completely decentralized one. Options for the implementation of a service based on the harvesting model are presented.

"Central vs. Distributed Archives" (1999-2003) http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#294
"Central versus institutional self-archiving" (2003-2006) http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#3207

From loet at LEYDESDORFF.NET Thu Apr 20 01:49:16 2006 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Thu, 20 Apr 2006 07:49:16 +0200 Subject: Call for Papers ---6th International Triple Helix Conference, 16-18 May 2007, Singapore Message-ID:

CALL FOR PAPERS: The 6th International Triple Helix Conference on University-Government-Industry Links, Singapore, 16-18 May 2007

The 6th Biennial International Triple Helix Conference on University-Industry-Government Links will be held in Singapore from 16-18 May 2007 with the theme "Emerging Models for the Entrepreneurial University: Regional Diversities or Global Convergence". The conference will be organized by National University of Singapore (NUS) Enterprise in Singapore. Past Triple-Helix conferences have been held in Amsterdam, New York, Rio de Janeiro, Copenhagen/Lund, and Turin (http://www.triplehelix5.com/triple_helix.htm).

Organized for the first time in Asia, Triple Helix VI 2007 will provide a global forum for academic scholars from different disciplinary perspectives as well as policy makers, university administrators and private sector leaders from different countries to exchange and share new learning about the diverse emerging models of the entrepreneurial university, the changing dynamics of University-Industry-Government interactions around the world and the complex roles of the university in local, regional and national economic development.

We invite scholarly paper contributions that seek to advance our understanding of the dynamics of University-Industry-Government interactions in general and the emerging entrepreneurial university models in particular. We also welcome practitioner-oriented contributions that provide insights on new policy innovations and share knowledge on practices, as well as proposals for workshops and poster presentations that contribute to promoting exchange and dialogue on how universities in the 21st century can better cope with the challenges of globalization while serving local and regional goals.

We invite submissions of extended abstracts in the following categories:
(A) Papers for presentation in Parallel Sessions
(B) Papers for Workshop Sessions
(C) Poster presentations

Papers and poster presentations will be selected based on abstract submissions, which should be of a maximum length of two pages including figures and references. Authors of accepted abstracts will be required to submit their full papers / poster abstracts according to the submission guidelines, which are available on the conference website.
Authors of the best papers presented at the conference will be invited to submit their contributions to a number of special issues of relevant international journals.

For more details on the conference sub-themes and paper submission procedures and guidelines, please visit http://www.triplehelix6.com. You can also direct any logistics-related query you may have about the conference to the organizing chair (infotriplehelix6 at nus.edu.sg). Queries related to abstract/paper submissions and the conference theme can be directed to the organizing chair (papertriplehelix6 at nus.edu.sg).

KEY DATES
Online submission of abstracts - opening date: 1 September 2006
Last date for online abstract submission: 8 January 2007
Notification of acceptance: 16 February 2007
Full paper submission due date (papers and workshop papers): 16 April 2007
Poster extended abstracts submission due date: 30 April 2007
End of special-rate registration for conference participants: 9 March 2007

Your kind help in disseminating this call for papers to interested colleagues is greatly appreciated. In particular, we would like to encourage doctoral students to participate in this conference. Thank you and best regards,

Poh-Kam Wong Chair, Organizing Committee Assoc. Prof. WONG, Poh Kam, PhD (MIT) Director, NUS Entrepreneurship Centre (www.nus.edu.sg/nec) 10 Kent Ridge Crescent, E3A Level 6, Singapore 119260 Tel. 65-6516-6323 Email: pohkam at nus.edu.sg Location: http://www.nus.edu.sg/nec/location/location.htm Personal CV: http://www.bschool.nus.edu.sg/staff_profile/cv.asp?ID=174

From loet at LEYDESDORFF.NET Sun Apr 23 13:31:12 2006 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sun, 23 Apr 2006 19:31:12 +0200 Subject: Visualization of Sets at the Web-of-Science using Bibliographic Coupling among Authors Message-ID:

BibCoupl.exe for Bibliographic Coupling among Authors

BibCoupl.exe is freely available for academic usage. The program uses a set saved using ISI's Web of Science as input, and generates various forms of output:

1. cosine.dat provides an input file for Pajek as a visual representation of the bibliographic coupling among authors within this set. The matrix is normalized using the cosine.

2. coocc.dat and matrix.dbf are the files which underlie cosine.dat. Coocc.dat is the file before normalization, and matrix.dbf the asymmetrical data matrix. The latter file can be used for statistical analysis in SPSS, the former for graph-analytical analysis using UCINet.

3. Like ISI.EXE, the program BibCoupl.EXE produces four databases containing the information in the original input set in relational format: au.dbf with the authors; cs.dbf with the addresses ("corporate sources"); core.dbf with information which is unique for each record (e.g., the title); and cr.dbf containing the cited references. The files are linked through the numbers in core.dbf. If one needs only these files, one is advised to use ISI.EXE, since the computation of the cosine is computationally intensive, and therefore time-consuming.

The routine creating the matrix and the cosine-normalized output uses the author names in the file au.dbf as variable names, and the records in cr.dbf as the cases (rows). Initials in au.dbf are not considered because these may vary among publications. The number of authors is limited to 1024, but the number of cited references is unlimited. The program is based on DOS-legacy software. It runs in an MS-DOS Command Box under Windows.
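[For readers who would like to see the computation rather than run the DOS program, here is a minimal sketch, with invented toy data, of how an author-by-cited-reference occurrence matrix can be turned into the co-occurrence and cosine-normalized coupling matrices that coocc.dat and cosine.dat contain. It mirrors the logic described here only in outline, not BibCoupl's file formats.]

import numpy as np

# Hypothetical toy data: rows = cited references, columns = authors,
# cell = number of times that author's papers in the set cite that reference.
# (BibCoupl derives such a matrix from a Web of Science export; here it is invented.)
authors = ["SMALL", "GARFIELD", "LEYDESDORFF"]
occ = np.array([
    [1, 1, 0],
    [0, 2, 1],
    [1, 0, 1],
    [0, 1, 1],
])

# Co-occurrence (bibliographic coupling) counts among authors,
# with the main diagonal set to zero, as in coocc.dbf.
coocc = occ.T @ occ
np.fill_diagonal(coocc, 0)

# Cosine normalization: treat each author column as a vector (Salton & McGill, 1983).
norms = np.linalg.norm(occ, axis=0)
cosine = (occ.T @ occ) / np.outer(norms, norms)
np.fill_diagonal(cosine, 0.0)

print(coocc)
print(np.round(cosine, 3))

[The cosine step is simply the co-occurrence counts normalized by the lengths of the author vectors, which is why it is the computationally expensive part for large sets.]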
The programs and the input files have to be contained in the same folder. The output files are written into this directory. Please note that existing files from a previous run are overwritten by the program. The user is advised to save output elsewhere if one wishes to continue with these materials.

input files

The input file has to be saved as a so-called marked list in the tagged format from the Science Citation Index (Social Science Citation Index, Arts & Humanities Citation Index) at the Web-of-Science. The default filename "savedrecs.txt" should not be used, but "data.txt" instead.

output files

The program produces four output files in dBase IV format. These files can be read into Excel and/or SPSS for further processing. They can also be used in MS Access for relational database management. These files can be produced by using the simpler ISI.EXE (which is much less intensive in the computation). Click here to download ISI.EXE

BibCoupl additionally produces two files with the extension ".dat" (cosine.dat and coocc.dat); these are in DL-format (ASCII) and can be read directly into Pajek for the visualization (Pajek is freely available at http://vlado.fmf.uni-lj.si/pub/networks/pajek/ ). A number of additional databases are coproduced:

a. matrix.dbf contains the matrix of the cited references as the cases and the authors in the set as the variables. The names of the authors before the comma are used as the variables. The authors (columns) are sorted alphabetically and each row represents a cited reference in alphabetical order. This file can be imported into SPSS for further analysis.

b. coocc.dbf contains a co-occurrence matrix of the authors from this same data. This matrix is symmetrical and it contains the authors both as variables and as labels in the first field. The main diagonal is set to zero. The number of co-occurrences is equal to the multiplication of occurrences in each of the texts. (The procedure is similar to using the file matrix.dbf as input to the routine "affiliations" in UCINet, but the main diagonal is here set to zero in this matrix.) The file coocc.dat contains this information in the DL-format.

c. cosine.dbf contains a normalized co-occurrence matrix of the authors from the same data. Normalization is based on the cosine between the variables conceptualized as vectors (Salton & McGill, 1983). (The procedure is similar to using the file matrix.dbf as input to the corresponding routine in SPSS.) The file cosine.dat contains this information in the DL-format. Click here to download BibCoupl.EXE

_____
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/
The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From garfield at CODEX.CIS.UPENN.EDU Mon Apr 24 00:16:07 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Mon, 24 Apr 2006 00:16:07 -0400 Subject: Schwartz FW, Fang YC, Parthasarathy S "Patterns of evolution of research strands in the hydrologic sciences " HYDROGEOLOGY JOURNAL 13 (1): 25-36 MAR 2005 Message-ID: E-mail Addresses: frank at geology.ohio.state.edu Title: Patterns of evolution of research strands in the hydrologic sciences Author(s): Schwartz FW, Fang YC, Parthasarathy S Source: HYDROGEOLOGY JOURNAL 13 (1): 25-36 MAR 2005 Document Type: Article Language: English Cited References: 27 Times Cited: 0 Abstract: This paper examines issues of impact and innovation in groundwater research by using bibliometric data and citation analysis. The analysis is based on 3120 papers from the journal Water Resources Research with full contents and their citation data from the ISI Web of Science. The research is designed to develop a better understanding of the way citation numbers can be interpreted by scientists. Not surprisingly, the most highly cited papers appear to be pioneers in the field with papers departing significantly from what has come before and to be effective in creating similar, follow-on papers. Papers that are early contributions to a new research strand that is highly influential will be on average highly cited. However, the importance of a research strand as measured by citations seems to fall with time. The citation patterns of some classic papers show that the activity in the topical area and impact of follow-on papers gradually decline with time, which has similarities with Kuhn's ideas of revolutionary and normal science. The results of this study reinforce the importance of being a pioneer in a research strand, strategically shifting research strands, adopting strategies that can facilitate really major research breakthroughs. Addresses: Schwartz FW (reprint author), Ohio State Univ, Columbus, OH 43210 USA Ohio State Univ, Columbus, OH 43210 USA Publisher: SPRINGER, 233 SPRING STREET, NEW YORK, NY 10013 USA Subject Category: GEOSCIENCES, MULTIDISCIPLINARY; WATER RESOURCES IDS Number: 924KD ISSN: 1431-2174 CITED REFERENCES: NATURE 415 : 101 2002 ABRIOLA LM A MULTIPHASE APPROACH TO THE MODELING OF POROUS-MEDIA CONTAMINATION BY ORGANIC-COMPOUNDS .1. EQUATION DEVELOPMENT WATER RESOURCES RESEARCH 21 : 11 1985 ADAM D The counting house NATURE 415 : 726 2002 BOLEY D Principal direction divisive partitioning DATA MINING AND KNOWLEDGE DISCOVERY 2 : 325 1998 BOLEY D Partitioning-based clustering for Web document categorization DECISION SUPPORT SYSTEMS 27 : 329 1999 BOLEY D TR99029 CSE U MINN : 1999 CHEN C MAPPING SCI FRONTIER : 2003 DAGAN G STOCHASTIC MODELING OF GROUNDWATER-FLOW BY UNCONDITIONAL AND CONDITIONAL PROBABILITIES - THE INVERSE PROBLEM WATER RESOURCES RESEARCH 21 : 65 1985 FREEZE RA THEORETICAL ANALYSIS OF REGIONAL GROUNDWATER FLOW .2. EFFECT OF WATER- TABLE CONFIGURATION AND SUBSURFACE PERMEABILITY VARIATION WATER RESOURCES RESEARCH 3 : 623 1967 FREEZE RA THEORETICAL ANALYSIS OF REGIONAL GROUNDWATER FLOW .1. ANALYTICAL AND NUMERICAL SOLUTIONS TO MATHEMATICAL MODEL WATER RESOURCES RESEARCH 2 : 641 1966 GARFIELD E SCI PUBL POLICY 19 : 321 1992 GARFIELD E USE CITATION DATA WR : 1964 GARVEN G THEORETICAL-ANALYSIS OF THE ROLE OF GROUNDWATER-FLOW IN THE GENESIS OF STRATABOUND ORE-DEPOSITS .1. 
MATHEMATICAL AND NUMERICAL-MODEL AMERICAN JOURNAL OF SCIENCE 284 : 1085 1984 GARVEN G THEORETICAL-ANALYSIS OF THE ROLE OF GROUNDWATER-FLOW IN THE GENESIS OF STRATABOUND ORE-DEPOSITS .2. QUANTITATIVE RESULTS AMERICAN JOURNAL OF SCIENCE 284 : 1125 1984 GARVEN G CONTINENTAL-SCALE GROUNDWATER-FLOW AND GEOLOGIC PROCESSES ANNUAL REVIEW OF EARTH AND PLANETARY SCIENCES 23 : 89 1995 GARVEN G THE ROLE OF REGIONAL FLUID-FLOW IN THE GENESIS OF THE PINE POINT DEPOSIT, WESTERN CANADA SEDIMENTARY BASIN ECONOMIC GEOLOGY 80 : 307 1985 HARRISON WJ AAPG BULL 75 : 656 1991 HITCHON B FLUID FLOW IN WESTERN CANADA SEDIMENTARY BASIN .1. EFFECT OF TOPOGRAPHY WATER RESOURCES RESEARCH 5 : 186 1969 HORGAN J END SCI FACING LIMIT : 1996 KUHN TS STRUCTURE SCI REVOLU : 1962 PERSON M Basin-scale hydrogeologic modeling (vol 34, pg 61, 1996) REVIEWS OF GEOPHYSICS 34 : 307 1996 PRICE DJD NETWORKS OF SCIENTIFIC PAPERS SCIENCE 149 : 510 1965 SCHWARTZ FW GROUND WATER 40 : 317 2002 SCHWARTZ FW Hydrogeological research: Beginning of the end or end of the beginning? GROUND WATER 39 : 492 2001 TOTH J A THEORETICAL ANALYSIS OF GROUNDWATER FLOW IN SMALL DRAINAGE BASINS JOURNAL OF GEOPHYSICAL RESEARCH 68 : 4795 1963 TOTH J A THEORY OF GROUNDWATER MOTION IN SMALL DRAINAGE BASINS IN CENTRAL ALBERTA, CANADA JOURNAL OF GEOPHYSICAL RESEARCH 67 : 4375 1962 TOTH J GRAVITY-INDUCED CROSS-FORMATIONAL FLOW OF FORMATION FLUIDS, RED EARTH REGION, ALBERTA, CANADA - ANALYSIS, PATTERNS, AND EVOLUTION WATER RESOURCES RESEARCH 14 : 805 1978 From garfield at CODEX.CIS.UPENN.EDU Mon Apr 24 01:19:52 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Mon, 24 Apr 2006 01:19:52 -0400 Subject: Banville DL "Mining chemical structural information from the drug literature" Drug Discovery Today, 11(1-2): 35-42, January 2006. Message-ID: E-Mail: debra.banville at astrazeneca.com FOR PDF FILE COPY ENTIRE URL BELOW (FOLLOWING 4 LINES)AND PASTE IN "ADDRESS" IN YOUR BROWSER: http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6T64-4J853YK-6- 1&_cdi=5020&_user=10&_orig=browse&_coverDate=01%2F31% 2F2006&_sk=999889998&view=c&wchp=dGLzVlz- zSkzV&md5=b150085f11d435e7ee8dacc7c77a2b1f&ie=/sdarticle.pdf AUTHOR : Debra L. Banville, debra.banville at astrazeneca.com TITLE : Mining chemical structural information from the drug literature (Review) SOURCE: Drug Discovery Today, Volume 11, Issues 1-2, January 2006, P.35- 42. ADDRESS: AstraZeneca Pharmaceuticals, 1800 Concord Pike, Wilmington, DE 19850, USA Available online 13 February 2006. It is easier to find too many documents on a life science topic than to find the right information inside these documents. With the application of text data mining to biological documents, it is no surprise that researchers are starting to look at applications that mine out chemical information. The mining of chemical entities ? names and structures ? brings with it some unique challenges, which commercial and academic efforts are beginning to address. Ultimately, life science text data mining applications need to focus on the marriage of biological and chemical information. 
Addresses: Banville DL (reprint author), AstraZeneca Pharmaceut, 1800 Concord Pike, Wilmington, DE 19850 USA AstraZeneca Pharmaceut, Wilmington, DE 19850 USA E-mail Addresses: Debra.Banville at AstraZeneca.com Publisher: ELSEVIER SCI LTD, THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, OXON, ENGLAND Subject Category: PHARMACOLOGY & PHARMACY IDS Number: 005LJ ISSN: 1359-6446 CITED REFERENCES : AI CS EXTRACTION OF CHEMICAL-REACTION INFORMATION FROM PRIMARY JOURNAL TEXT JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 30 : 163 1990 BORKENT JH CHEMICAL-REACTION SEARCHING COMPARED IN REACCS, SYNLIB, AND ORAC JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 28 : 148 1988 BRECHER J Name=Struct: A practical approach to the sorry state of real-life chemical nomenclature JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 39 : 943 1999 BRUEGGEMANN R CHEMOSPHERE 31 : 3585 1995 CALDWELL GW CURR TOP MED CHEM 1 : 353 2001 CHOWDHURY GG AUTOMATIC INTERPRETATION OF THE TEXTS OF CHEMICAL PATENT ABSTRACTS .1. LEXICAL ANALYSIS AND CATEGORIZATION JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 32 : 463 1992 CHOWDHURY GG AUTOMATIC INTERPRETATION OF THE TEXTS OF CHEMICAL PATENT ABSTRACTS .2. PROCESSING AND RESULTS JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 32 : 468 1992 CLAUS BL Discovery informatics: its evolving role in drug discovery DRUG DISCOVERY TODAY 7 : 957 2002 COOKEFOX DI J CHEM INF COMP SCI 31 : 153 1991 COOKEFOX DI COMPUTER TRANSLATION OF IUPAC SYSTEMATIC ORGANIC-CHEMICAL NOMENCLATURE .4. CONCISE CONNECTION TABLES TO STRUCTURE DIAGRAMS JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 30 : 122 1990 COOKEFOX DI COMPUTER TRANSLATION OF IUPAC SYSTEMATIC ORGANIC-CHEMICAL NOMENCLATURE .5. STEROID NOMENCLATURE JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 30 : 128 1990 COOKEFOX DI COMPUTER TRANSLATION OF IUPAC SYSTEMATIC ORGANIC CHEMICAL NOMENCLATURE .1. INTRODUCTION AND BACKGROUND TO A GRAMMAR-BASED APPROACH JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 29 : 101 1989 COOKEFOX DI COMPUTER TRANSLATION OF IUPAC SYSTEMATIC ORGANIC CHEMICAL NOMENCLATURE .2. DEVELOPMENT OF A FORMAL GRAMMAR JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 29 : 106 1989 COOKEFOX DI COMPUTER TRANSLATION OF IUPAC SYSTEMATIC ORGANIC CHEMICAL NOMENCLATURE .3. SYNTAX ANALYSIS AND SEMANTIC PROCESSING JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 29 : 112 1989 COOPER JW 229 ACN NAT M 13 17 : 2005 DORLAND L J CHEM EDUC 79 : 778 2002 GARFIELD E AN ALGORITHM FOR TRANSLATING CHEMICAL NAMES TO MOLECULAR FORMULAS JOURNAL OF CHEMICAL DOCUMENTATION 2 : 177 1962 GARFIELD E >From laboratory to information explosions ... 
the evolution of chemical information services at ISI JOURNAL OF INFORMATION SCIENCE 27 : 119 2001 GOLDFARB C SGML HDB : 1990 HAHN U PAC S BIOCOMPUT 7 : 338 2002 HAUSER WC 217 ACS NAT M 21 25 : 1999 HEARLE EM P MONTREUX INT CHEM : 84 1993 HELMA C Data quality in predictive toxicology: Identification of chemical structures and calculation of chemical properties ENVIRONMENTAL HEALTH PERSPECTIVES 108 : 1029 2000 HODGE GM ACS APR 1989 : 197 1989 HODGE GM ACS AUG 1989 : 202 1989 IBISON P CHEMICAL LITERATURE DATA EXTRACTION - THE CLIDE PROJECT JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 33 : 338 1993 JACKSON P NATURAL LANGUAGE PRO : 2002 JONCKHEERE C P INT CHEM INF C 19 : 63 1997 KEMP N J CHEM INF COMP SCI 38 : S44 1998 KONTOSTATHIS A SURVEY TEXT MINING C : CH9 2003 KRALLINGER M Text mining approaches in molecular biology and biomedicine DRUG DISCOVERY TODAY 10 : 439 2005 LAKINGS DB NEW DRUG APPROV 100 : 17 2000 MACK R Text-based knowledge discovery: search and mining of life-sciences documents DRUG DISCOVERY TODAY 7 : S89 2002 MACK R Text analytics for life science using the unstructured information management architecture IBM SYSTEMS JOURNAL 43 : 490 2004 MCDANIEL JR KEKULE - OCR OPTICAL CHEMICAL (STRUCTURE) RECOGNITION JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 32 : 373 1992 POSTEMA PTE DIGESTION S 1 : 36 1996 REDMOND L EUROMAP 1217 : 2002 RICHARD AM 226 ACS NAT M 7 11 S : 2003 ROVNER SL C E NEWS 0516 : 40 2005 RUSSELL J BIOL IT WORLD 0204 : 2005 RZHETSKY A A knowledge model for analysis and simulation of regulatory networks BIOINFORMATICS 16 : 1120 2000 SHABRANG M 226 ACS NAT M 7 11 S : 2003 SIMON A Recent advances in the CLiDE project: Logical layout analysis of chemical documents JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 37 : 109 1997 SINGH SB Text Influenced Molecular Indexing (TIMI): A literature database mining approach that handles text and chemistry JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 43 : 743 2003 SMEATON AF PROGRESS IN THE APPLICATION OF NATURAL-LANGUAGE PROCESSING TO INFORMATION- RETRIEVAL TASKS COMPUTER JOURNAL 35 : 268 1992 STENSOMO M 1110 INF SWANSON DR 2 MEDICAL LITERATURES THAT ARE LOGICALLY BUT NOT BIBLIOGRAPHICALLY CONNECTED JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 38 : 228 1987 SWANSON DR FISH OIL, RAYNAUDS SYNDROME, AND UNDISCOVERED PUBLIC KNOWLEDGE PERSPECTIVES IN BIOLOGY AND MEDICINE 30 : 7 1986 TANABE L MedMiner: An Internet text-mining tool for biomedical information, with application to gene expression profiling BIOTECHNIQUES 27 : 1210 1999 THOMSON MA 215 ACS NAT M MARCH : 1998 VANDERSTOUW GG PROCEDURES FOR CONVERTING SYSTEMATIC NAMES OF ORGANIC COMPOUNDS INTO ATOM- BOND CONNECTION TABLES JOURNAL OF CHEMICAL DOCUMENTATION 7 : 165 1967 WEBER M J AM SOC INF SCI TEC 52 : 548 2001 WEININGER D SPECIAL PUBLICATION 142 : 67 1994 WILBUR WJ P AMIA S : 176 1999 WISNIEWSKI JL AUTONOM CHEM DREAM S 2 : 55 1993 WOLPERT AJ 229 ACS NAT M 13 17 : 2005 ZAMORA EM EXTRACTION OF CHEMICAL-REACTION INFORMATION FROM PRIMARY JOURNAL TEXT USING COMPUTATIONAL-LINGUISTICS TECHNIQUES .1. LEXICAL AND SYNTACTIC PHASES JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 24 : 176 1984 ZAMORA EM EXTRACTION OF CHEMICAL-REACTION INFORMATION FROM PRIMARY JOURNAL TEXT USING COMPUTATIONAL-LINGUISTICS TECHNIQUES .2. 
SEMANTIC PHASE JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES 24 : 181 1984 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:11:23 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:11:23 -0400 Subject: Garfinkel MS, Sarewitz D, Porter AL "A societal outcomes map for health research and policy " American Journal of Public Health 96 (3): 441-446 MAR 2006 Message-ID: Michele S. Garfinkel E-mail Addresses: mgarfinkel at venterinstitute.org Title: A societal outcomes map for health research and policy Author(s): Garfinkel MS, Sarewitz D, Porter AL Source: AMERICAN JOURNAL OF PUBLIC HEALTH 96 (3): 441-446 MAR 2006 Document Type: Article Language: English Cited References: 34 Times Cited: 0 Abstract: The linkages between decisions about health research and policy and actual health outcomes may be extraordinarily difficult to specify. We performed a pilot application of a "road mapping" and technology assessment technique to perinatal health to illustrate how this technique can clarify the relations between available options and improved health outcomes. We used a combination of datamining techniques and qualitative analyses to set up the underlying structure of a societal health outcomes road map. Societal health outcomes road mapping may be a useful tool for enhancing the ability of the public health community, policymakers, and other stakeholders, such as research administrators, to understand health research and policy options. EXCERPT FROM PAPER: MINING THE RESEARCH LITERATURE Scanning the professional literature for concepts and results is key to building a road map. We drew on scientometrics (tallying activity) and text mining (extracting prevalent terms (refs.16,21,30-32) to explore perinatal health research knowledge. Fortunately, one international database, MEDLINE, compiles a tremendous amount of pertinent research. It does so in the form of a searchable database that provides abstracts of articles (PubMed ref33). It also thoroughly indexes those articles through a hierarchical thesaurus called MeSH (Medical Subject Headings ref.34). It is important to note that we did not track MeSH terms per se. Rather, we searched MEDLINE for articles containing the word perinatal. 
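[As a rough illustration of the kind of MEDLINE keyword search described in this excerpt -- not the authors' actual query or tooling -- the following sketch counts PubMed records matching "perinatal" through NCBI's public E-utilities. The date window and the choice to retrieve only the hit count are assumptions made for the example.]

import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

# Hypothetical example query; the paper's actual search strategy is not reproduced here.
params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": "perinatal",                                        # simple keyword search, as in the excerpt
    "mindate": "1995", "maxdate": "2004", "datetype": "pdat",   # assumed publication-date window
    "retmax": 0,                                                # only the hit count is wanted, not the IDs
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

count = tree.findtext("Count")
print(f"PubMed records matching 'perinatal': {count}")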
Addresses: Garfinkel MS (reprint author), 9704 Med Ctr Dr,4th Floor, Rockville, MD 20850 USA Arizona State Univ, Consortium Sci Policy & Outcomes, Tempe, AZ USA Arizona State Univ, Sch Life Sci, Tempe, AZ USA Arizona State Univ, Dept Geol Sci, Tempe, AZ USA Georgia Inst Technol, Technol Policy & Assessment Ctr, Atlanta, GA 30332 USA Search Technol Inc, Norcross, GA USA E-mail Addresses: mgarfinkel at venterinstitute.org Publisher: AMER PUBLIC HEALTH ASSOC INC, 800 I STREET, NW, WASHINGTON, DC 20001-3710 USA Subject Category: PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH; PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH IDS Number: 017KA ISSN: 0090-0036 CITED REFERENCES: INSIGHT STAR TREE IL *COMM ENG TECHN SY REV RES PROGR PARTN : 2000 *DEP DEF CONS INV *MARCH DIM PER DAT MAT INF CHILD HLTH : 2001 *NASA SOL SYST EXPL *NAT ASS STAT U LA SCI ROADM AGR *NAT I CHILD HLTH STRAT PLAN CELLS SEL *NAT LIB MED FACT SHEET MED SUBJ *NIH OFF EXTR RES COMP RETR INF SCI PR *ROYAL NETH AC ART SOC IMP APPL HLTH RE *US CDCP NAT CTR B BIRTH DEF *US DEP HHS HLTH PEOPL 2010 *USDA IMM SCREEN REF WIC *USDA WIC HELPS *USDA WIC PART PROGR CHAR *WA TREE FRUIT RES TECHN ROADM *WHO WORLD HLTH REP : 2000 BLENDON RJ The public versus the World Health Organization on health system performance HEALTH AFFAIRS 20 : 10 2001 DALKEY NC DELPHI METHOD EXPT S : 1969 GALVIN R Science roadmaps SCIENCE 280 : 803 1998 GARCIA ML FUNDAMENTALS TECHNOL LOSIEWICZ P Textual data mining to support science and technology management JOURNAL OF INTELLIGENT INFORMATION SYSTEMS 15 : 99 2000 MURRAY CJL People's experience versus people's expectations HEALTH AFFAIRS 20 : 21 2001 PORTER AL Research profiling: Improving the literature review SCIENTOMETRICS 53 : 351 2002 PORTER AL TECH MINING EXPLOITI : 2005 PORTER AL TECHNOLOGY OPPORTUNITIES ANALYSIS TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE 49 : 237 1995 RAYNER S SCI PUBL POLICY 30 : 163 2003 RICHEY JM Evolution of roadmapping at Motorola RESEARCH-TECHNOLOGY MANAGEMENT 47 : 37 2004 SAREWITZ D PHILOS TODAY S 49 : 67 2004 TIJINK D I EL EL ENG INT S TE : 1996 VANRAAN AFJ HDB QUANTITATIVE STU : 1988 VERPLOEG M ESTIMATING ELIGIBILI : 2003 WILLIAMSON M New Zealand's foresight project SCIENCE 280 : 655 1998 YANG QH Trends and patterns of mortality associated with birth defects and genetic diseases in the United States, 1979-1992: An analysis of multiple-cause mortality data GENETIC EPIDEMIOLOGY 14 : 493 1997 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:16:45 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:16:45 -0400 Subject: Carlson J. "An examination of undergraduate student citation behavior " Journal of Academic Librarianship 32 (1): 14-22 JAN 2006 Message-ID: Jake Carlson : E-MAIL : jcarlson at bucknell.edu Title: An examination of undergraduate student citation behavior Author(s): Carlson J. Source: JOURNAL OF ACADEMIC LIBRARIANSHIP 32 (1): 14-22 JAN 2006 Document Type: Article Language: English Cited References: 14 Times Cited: 0 Abstract: Using data collected from 583 bibliographies from student research papers, this study analyzes the effect that the class year of the student, the academic discipline of the course, and the level of the course have on the type and the mean number of sources cited by undergraduates. All three variables had some significant effects on student citation behavior. Additional analysis was done to pinpoint which variable was likely to be the primary cause of the observed effects. 
Addresses: Carlson L (reprint author), Bucknell Univ, Lewisburg, PA 17837 USA Bucknell Univ, Lewisburg, PA 17837 USA E-mail Addresses: jcarlson at bucknell.edu Publisher: ELSEVIER SCIENCE INC, 360 PARK AVE SOUTH, NEW YORK, NY 10010- 1710 USA Subject Category: INFORMATION SCIENCE & LIBRARY SCIENCE IDS Number: 018UO ISSN: 0099-1333 CITED REFERENCES : *OCLC AC LIBR CAN INFL STU : 2002 DAVIS PM The effect of the web on undergraduate citation behavior: A 2000 update COLLEGE & RESEARCH LIBRARIES 63 : 53 2002 DAVIS PM The effect of the Web on undergraduate citation behavior 1996-1999 JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 52 : 309 2001 DAVIS PM Effect of the web on undergraduate citation behavior: Guiding student scholarship in a networked age PORTAL-LIBRARIES AND THE ACADEMY 3 : 41 2003 DESPOSITO JE University students' perceptions of the Internet: An exploratory study JOURNAL OF ACADEMIC LIBRARIANSHIP 25 : 456 1999 DILEVKO J J ACAD LIBR 29 : 381 2002 FISTER B THE RESEARCH PROCESSES OF UNDERGRADUATE STUDENTS JOURNAL OF ACADEMIC LIBRARIANSHIP 18 : 163 1992 FRIEDLANDER A DIMENSIONS USE SCHOL : 2004 GRIMES DJ Worries with the Web: A look at student use of Web resources COLLEGE & RESEARCH LIBRARIES 62 : 11 2001 JEFFERSON T Systematic review of the effects of pertussis vaccines in children VACCINE 21 : 2003 2003 JENKINS PO COLL RES LIB NEWS 63 : 164 2002 MAGRILL RM COLLECTION MANAGEMEN 12 : 25 1990 ROTHENBERG D CHRON HIGHER EDUC 43 : A44 1997 VALENTINE B The legitimate effort in research papers: Student commitment versus faculty expectations JOURNAL OF ACADEMIC LIBRARIANSHIP 27 : 107 2001 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:19:56 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:19:56 -0400 Subject: Gupta P, Choudhury P. "Impact factor and Indian pediatrics" Indian Pediatrics 43 (2): 107-110 FEB 2006 Message-ID: Dr. Piyush Gupta E-mail Addresses: drpiyush at satyam.net.in Title: Impact factor and Indian pediatrics Author(s): Gupta P, Choudhury P Source: INDIAN PEDIATRICS 43 (2): 107-110 FEB 2006 Document Type: Editorial Material Language: English Cited References: 8 Times Cited: 0 EXCERPT FROM PAPER: Indian Pediatrics was successful in its first application only, the only drawback is that our official impact factor will be available in the year 2008. Recently, it has been suggested that ISI should reconsider its policy on citation tracking and should introduce a policy of immediately tracking any peer-reviewed journal that meets basic quality standards and which can provide reference list data in an appropriate form to allow automated analysis (5). This will end the current 3 year-long wait for the established journals to have their impact factors. Addresses: Gupta P (reprint author), R-6-A,MIG Complex,Dilshad Garden, Delhi, 110095 India E-mail Addresses: drpiyush at satyam.net.in Publisher: INDIAN ACADEMY PEDIATRICS, MAULANA AZAD MEDICAL COLLEGE, DEPT PEDIATRICS, NEW DELHI, 110 002, INDIA Subject Category: PEDIATRICS IDS Number: 018SE ISSN: 0019-6061 CITED REFERENCES: J IMPACT FACTORS 200 NATURE 435 : 1003 2005 BOLLEN J GL0601030V1 : 2006 COCKERILL MJ Delayed impact: ISI's citation tracking choices are keeping scientists in the dark BMC BIOINFORMATICS 5 : Art. No. 93 2004 GARFIELD E AGONY ECSTASY HIST M GARFIELD E ISI IMPACT FACTOR HOEFFEL C Journal impact factors ALLERGY 53 : 1225 1998 SAHA S Impact factor: a valid measure of journal quality? 
JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 91 : 42 2003 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:22:58 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:22:58 -0400 Subject: Nebelong-Bonnevie E, Frandsen TF "Journal citation identity and journal citation image: a portrait of the Journal of Documentation " Journal of Documentation 62 (1): 30-57 2006 Message-ID: Ellen Nebelong-Bonnevie : E-mail Addresses: en at db.dk Title: Journal citation identity and journal citation image: a portrait of the Journal of Documentation Author(s): Nebelong-Bonnevie E, Frandsen TF Source: JOURNAL OF DOCUMENTATION 62 (1): 30-57 2006 Document Type: Article Language: English Cited References: 31 Times Cited: 0 Abstract: Purpose - The purpose of this paper is to propose a multiple set of journal evaluation indicators using methods and theories from author analysis. Among those are the journal citation identity and the journal citation image. Design/methodology/approach - The Journal of Documentation is celebrating its 60th anniversary, and for that reason it is portrayed in a bibliometric study using the two indicators, based, e.g. on analyses of references in journal articles and journal co-citation analyses. Findings - The Journal of Documentation, which is portrayed in this study is characterized by high impact and high visibility. It publishes a relatively low number of documents with scientific content compared to other journals in the same field. It reaches far into the scientific community and belongs to a field that is more and more visible. The journal is relatively closely bounded to Western Europe, which is an increasing tendency. Research limitations/implications - The research is based on analyses of just three LIS journals. Practical implications - journal citation identity and the journal citation image indicators contribute in giving a more detailed multifaceted picture of a single journal. Originality/value - The multiple set of indicators give rise to a journal evaluation of a more qualitative nature. 
Author Keywords: serials; publications KeyWords Plus: AUTHOR SELF-CITATIONS; DIFFUSION; IMPACT Addresses: Nebelong-Bonnevie E (reprint author), Royal Sch Lib & Informat Sci, Inst Informat Studies, Copenhagen, Denmark Royal Sch Lib & Informat Sci, Inst Informat Studies, Copenhagen, Denmark E-mail Addresses: en at db.dk Publisher: EMERALD GROUP PUBLISHING LIMITED, 60/62 TOLLER LANE, BRADFORD BD8 9BY, W YORKSHIRE, ENGLAND Subject Category: COMPUTER SCIENCE, INFORMATION SYSTEMS; INFORMATION SCIENCE & LIBRARY SCIENCE IDS Number: 022LE ISSN: 0022-0418 CITED REFERENCES: AKSNES DW A macro study of self-citation SCIENTOMETRICS 56 : 235 2003 AMIN M PERSPECTIVES PUBLISH 1 : 1 2000 BONNEVIE E IN PRESS CONCEPT MAR BONNEVIE E J INF SCI 29 : 1 2003 CHRISTENSEN FH P 6 C INT SOC SCI IN : 45 1997 CHRISTENSEN FH SCIENTOMETRICS 39 : 39 1996 CRONIN B Citation, funding acknowledgement and author nationality relationships in four information science journals JOURNAL OF DOCUMENTATION 55 : 402 1999 CRONIN B DOCUMENTATION NOTE THE TRAJECTORY OF REJECTION JOURNAL OF DOCUMENTATION 48 : 310 1992 CRONK L RES BIOPOLIT 8 : 1 2001 DING Y Journal as markers of intellectual space: Journal co-citation analysis of information Retrieval area, 1987-1997 SCIENTOMETRICS 47 : 55 2000 EGGHE L INTRO INFORMETRICS : 1990 FASSOULAKI A Self-citations in six anaesthesia journals and their significance in determining the impact factor BRITISH JOURNAL OF ANAESTHESIA 84 : 266 2000 FRANCOMUGICA N Ancient pine forest on inland dunes in the Spanish northern meseta QUATERNARY RESEARCH 63 : 1 2005 FRANZ G Physical and chemical properties of the epidote minerals - An introduction EPIDOTES 56 : 1 2004 GLANZEL W A bibliometric approach to the role of author self-citations in scientific communication SCIENTOMETRICS 59 : 63 2004 JACSO P ONLINE INFORM REV 29 : 107 2005 KORTELAINEN T P COLIS 2 : 1999 KORTELAINEN TAM Studying the international diffusion of a national scientific journal SCIENTOMETRICS 51 : 133 2001 LAWANI SM ON THE HETEROGENEITY AND CLASSIFICATION OF AUTHOR SELF-CITATIONS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 33 : 281 1982 LEYDESDORFF L Can networks of journal-journal citations be used as indicators of change in the social sciences? 
JOURNAL OF DOCUMENTATION 59 : 84 2003 NIJSSEN D COENOSES 13 : 33 1998 PERITZ BC The sources used by bibliometrics-scientometrics as reflected in references SCIENTOMETRICS 54 : 269 2002 ROUSSEAU R RES U EVALUATION : 2001 ROUSSEAU R Temporal differences in self-citation rates of scientific journals SCIENTOMETRICS 44 : 521 1999 ROWLANDS I Journal diffusion factors: a new approach to measuring research influence ASLIB PROCEEDINGS 54 : 77 2002 SIMPSON EH MEASUREMENT OF DIVERSITY NATURE 163 : 688 1949 SNYDER H Patterns of self-citation across disciplines (1980-1989) JOURNAL OF INFORMATION SCIENCE 24 : 431 1998 TABAH AN ANN REV INFORM SCI T 34 : 1999 VANDENBESSELAAR P Mapping change in scientific specialties: A scientometric reconstruction of the development of artificial intelligence JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 47 : 415 1996 VANRAAN AFJ The influence of international collaboration on the impact of research results - Some simple mathematical considerations concerning the role of self-citations SCIENTOMETRICS 42 : 423 1998 WHITE HD Authors as citers over time JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 52 : 87 2001 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:24:52 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:24:52 -0400 Subject: Bonnevie-Nebelong E "Methods for journal evaluation: Journal Citation Identity, Journal Citation Image and Internationalisation " Scientometrics 66 (2): 411-424 Jan 2006 Message-ID: Ellen Bonnevie-Nebelong : E-mail Addresses: en at db.dk Title: Methods for journal evaluation: Journal Citation Identity, Journal Citation Image and Internationalisation Author(s): Bonnevie-Nebelong E Source: SCIENTOMETRICS 66 (2): 411-424 JAN 2006 Document Type: Article Language: English Cited References: 12 Times Cited: 0 Abstract: Journal Citation Identity, Journal Citation Image, and Internationalisation are methods for journal evaluation used for an analysis of the Journal of Documentation(JDOC) which is compared to JASIS (T) and the Journal of Information Science(JIS). The set of analyses contributes to portrait a journal and gives a multifaceted picture. For instance, the Journal Citation Image by the New Journal Diffusion Factor tells that JDOC reaches farther out into the scientific community than the JASIS(T) and JIS. Comparing New Journal Diffusion Factor and Journal Impact Factor illustrates how new information has been added by the new indicator. Furthermore, JDOC is characterised by a higher rate of journal diversity in the references and has a lower number of scientific publications. JDOC authors and citers are affiliated Western European institutions at an increasing rate. 
KeyWords Plus: SELF-CITATION Addresses: Bonnevie-Nebelong E (reprint author), Royal Sch Lib & Informat Sci, Inst Informat Studies, Birketinget 6, Copenhagen, DK-2300 S Denmark Royal Sch Lib & Informat Sci, Inst Informat Studies, Copenhagen, DK-2300 S Denmark E-mail Addresses: en at db.dk Publisher: SPRINGER, VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS Subject Category: COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS; INFORMATION SCIENCE & LIBRARY SCIENCE IDS Number: 002IA ISSN: 0138-9130 CITED REFERENCES: AKSNES DW A macro study of self-citation SCIENTOMETRICS 56 : 235 2003 BONNEVIE E J INF SCI : 11 2000 BONNEVIENEBELON.E J DOCUMENTATION 62 : 2006 CHRISTENSEN FH Online citation analysis - A methodological approach SCIENTOMETRICS 37 : 39 1996 CRONIN B P 8 INT C SCIENT BIB : 127 2001 DING Y Journal as markers of intellectual space: Journal co-citation analysis of information Retrieval area, 1987-1997 SCIENTOMETRICS 47 : 55 2000 FRANDSEN TF Journal diffusion factors - a measure of diffusion? ASLIB PROCEEDINGS 56 : 5 2004 LAWANI SM ON THE HETEROGENEITY AND CLASSIFICATION OF AUTHOR SELF-CITATIONS JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 33 : 281 1982 PERITZ BC The sources used by bibliometrics-scientometrics as reflected in references SCIENTOMETRICS 54 : 269 2002 ROUSSEAU R Temporal differences in self-citation rates of scientific journals SCIENTOMETRICS 44 : 521 1999 ROWLANDS I ASLIB P 54 : 74 2002 WHITE HD Authors as citers over time JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 52 : 87 2001 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:28:53 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:28:53 -0400 Subject: Showalter E. "The economy of prestige =?ISO-8859-1?Q?=96_Prizes,_awards,_and_the_circulation_of_cultural_value,_by_J.F._English_(Book_Review)"_TLS_=96?= The Times Literary Supplement (5370) March 3, 2006 p.12. Times Supplements Limited, Karket Harborough Message-ID: AUTHOR: Elaine Showalter Title : The economy of prestige ? Prizes, awards, and the circulation of cultural value, by J.F. English (Book Review) Source : TLS ? The Times Literary Supplement (5370) March 3, 2006 p.12. Times Supplements Limited, Karket Harborough FULL TEXT AVAILABLE AT : http://tls.timesonline.co.uk/article/0,,25829-2068249.html From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:31:52 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:31:52 -0400 Subject: Cohen AM, Hersh WR, Peterson K, Yen PY "Reducing workload in systematic review preparation using automated citation classification " Journal of the American Medical Informatics Association 13 (2): 206-219 MAR-APR 2006 Message-ID: A.M. Cohen E-mail : cohenaa at ohsu.edu Title: Reducing workload in systematic review preparation using automated citation classification Author(s): Cohen AM, Hersh WR, Peterson K, Yen PY Source: JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION 13 (2): 206-219 MAR-APR 2006 Document Type: Review Language: English Cited References: 25 Times Cited: 0 Abstract: Objective: To determine whether automated classification of document citations can be useful in reducing the time spent by experts reviewing journal articles for inclusion in updating systematic reviews of drug class efficacy for treatment of disease. Design: A test collection was built using the annotated reference files from 15 systematic drug class reviews. 
A voting perceptron-based automated citation classification system was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Cross-validation experiments were performed to evaluate performance. Measurements: Precision, recall, and F-measure were evaluated at a range of sample weightings. Work saved over sampling at 95% recall was used as the measure of value to the review process. Results: A reduction in the number of articles needing manual review was found for 11 of the 15 drug review topics studied. For three of the topics, the reduction was 50% or greater. Conclusion: Automated document citation classification could be a useful tool in maintaining systematic reviews of the efficacy of drug therapy. Further work is needed to refine the classification system and determine the best manner to integrate the system into the production of systematic reviews. KeyWords Plus: EVIDENCE-BASED MEDICINE; COMPARATIVE EFFICACY; CATEGORIZATION; PERCEPTRON; RETRIEVAL; SAFETY Addresses: Cohen AM (reprint author), Oregon Hlth & Sci Univ, Sch Med, Dept Med Informat & Clin Epidemiol, 3181 SW Sam Jackson Pk Rd, Mail Code BICC, Portland, OR 97239 USA Oregon Hlth & Sci Univ, Sch Med, Dept Med Informat & Clin Epidemiol, Portland, OR 97239 USA E-mail Addresses: cohenaa at ohsu.edu Publisher: ELSEVIER SCIENCE INC, 360 PARK AVE SOUTH, NEW YORK, NY 10010-1710 USA Subject Category: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS; INFORMATION SCIENCE & LIBRARY SCIENCE; MEDICAL INFORMATICS IDS Number: 023HQ ISSN: 1067-5027 CITED REFERENCES : *OR HLTH SCI U DRUG EFF REV PROJ ME : 2003 APHINYANAPHONGS Y Text categorization models for high-quality article retrieval in internal medicine JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION 12 : 207 2005 BLASCHKE C Evaluation of BioCreAtIvE assessment of task 2 BMC BIOINFORMATICS 6 : Art. No. S16 2005 CARROLL JB AM HERITAGE WORD FRE : 1971 CHOU R Comparative efficacy and safety of skeletal muscle relaxants for spasticity and musculoskeletal conditions: a systematic review JOURNAL OF PAIN AND SYMPTOM MANAGEMENT 28 : 140 2004 CHOU R Comparative efficacy and safety of long-acting oral Opioids for chronic non-cancer pain: A systematic review JOURNAL OF PAIN AND SYMPTOM MANAGEMENT 26 : 1026 2003 COHEN AM A categorization and analysis of the criticisms of Evidence-Based Medicine INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS 73 : 35 2004 COHEN AM P 13 TEXT RETR C TRE : 2004 COHEN W P ANN C AM ASS ART I : 335 1999 DIETTERICH TG Approximate statistical tests for comparing supervised classification learning algorithms NEURAL COMPUTATION 10 : 1895 1998 DOBROKHOTOV PB Assisting medical annotation in Swiss-Prot using statistical classifiers INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS 74 : 317 2005 FREUND Y Large margin classification using the perceptron algorithm MACHINE LEARNING 37 : 277 1999 HAYNES RB What kind of evidence is it that Evidence-Based Medicine advocates want health care providers and consumers to pay attention to? BMC HEALTH SERVICES RESEARCH 2 : Art. No.
3 2002 HAYNES RB DEVELOPING OPTIMAL SEARCH STRATEGIES FOR DETECTING CLINICALLY SOUND STUDIES IN MEDLINE JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION 1 : 447 1994 HERSH W "A world of knowledge at your fingertips": The promise, reality, and future directions of on-line information retrieval ACADEMIC MEDICINE 74 : 240 1999 HERSH WR P 13 TEXT RETR C TRE : 2004 JOACHIMS T SVM LIGHT SUPPORT VE : 2004 MULROW C SYSTEMATIC REV SYNTH : 1998 NELSON HD Postmenopausal hormone replacement therapy - Scientific review JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 288 : 872 2002 PORTER MF PROGRAM 14 : 127 1980 ROSENBLATT F THE PERCEPTRON - A PROBABILISTIC MODEL FOR INFORMATION-STORAGE AND ORGANIZATION IN THE BRAIN PSYCHOLOGICAL REVIEW 65 : 386 1958 SACKETT DL Evidence based medicine: What it is and what it isn't - It's about integrating individual clinical expertise and the best external evidence BRITISH MEDICAL JOURNAL 312 : 71 1996 SACKETT DL CLIN EPIDEMIOLOGY BA : 1985 WILCZYNSKI NL AMIA ANN S P : 719 2003 WONG SS AMIA ANN S P : 728 2003 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:35:47 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:35:47 -0400 Subject: Borry P, Schotsmans P, Dierickx K "Empirical research in bioethical journals. A quantitative analysis " JOURNAL OF MEDICAL ETHICS 32 (4): 240-245 APR 1 2006 Message-ID: E-mail Addresses: Pascal.Borry at med.kuleuven.be Title: Empirical research in bioethical journals. A quantitative analysis Author(s): Borry P, Schotsmans P, Dierickx K Source: JOURNAL OF MEDICAL ETHICS 32 (4): 240-245 APR 1 2006 Document Type: Article Language: English Cited References: 24 Times Cited: 0 Abstract: Objectives: The objective of this research is to analyse the evolution and nature of published empirical research in the fields of medical ethics and bioethics. Design: Retrospective quantitative study of nine peer reviewed journals in the field of bioethics and medical ethics (Bioethics, Cambridge Quarterly of Healthcare Ethics, Hastings Center Report, Journal of Clinical Ethics, Journal of Medical Ethics, Kennedy Institute of Ethics Journal, Nursing Ethics, Christian Bioethics, and Theoretical Medicine and Bioethics). Results: In total, 4029 articles published between 1990 and 2003 were retrieved from the journals studied. Over this period, 435 (10.8%) studies used an empirical design. The highest percentage of empirical research articles appeared in Nursing Ethics (n = 145, 39.5%), followed by the Journal of Medical Ethics (n = 128, 16.8%) and the Journal of Clinical Ethics (n = 93, 15.4%). These three journals account for 84.1% of all empirical research in bioethics published in this period. The results of the chi-square (χ²) test for two independent samples for the entire dataset indicate that the period 1997-2003 presented a higher number of empirical studies (n = 309) than did the period 1990-1996 (n = 126). This increase is statistically significant (χ² = 49.0264, p < .0001). Most empirical studies employed a quantitative paradigm (64.6%, n = 281). The main topic of research was prolongation of life and euthanasia (n = 68). Conclusions: We conclude that the proportion of empirical research in the nine journals increased steadily from 5.4% in 1990 to 15.4% in 2003. It is likely that the importance of empirical methods in medical ethics and bioethics will continue to increase.
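The chi-square comparison reported in the Borry abstract is straightforward to reproduce in outline. The sketch below runs scipy's chi2_contingency on a 2x2 table of empirical versus non-empirical articles per period; only the empirical counts (126 and 309) come from the abstract, while the period totals are invented placeholders, so the printed statistic will not match the reported 49.0264.

    from scipy.stats import chi2_contingency

    # Rows: period 1990-1996, period 1997-2003
    # Columns: empirical articles, non-empirical articles
    # The empirical counts come from the abstract; the per-period totals
    # (1800 and 2229) are hypothetical, chosen only to sum to 4029.
    table = [
        [126, 1800 - 126],   # 1990-1996 (total of 1800 is an assumption)
        [309, 2229 - 309],   # 1997-2003 (total of 2229 is an assumption)
    ]

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.4f}, dof = {dof}, p = {p:.2e}")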
Addresses: Borry P (reprint author), Katholieke Univ Leuven, Ctr Biomed Eth & Law, Kapucijnenvoer 35-3, Louvain, B-3000 Belgium Katholieke Univ Leuven, Ctr Biomed Eth & Law, Louvain, B-3000 Belgium E-mail Addresses: Pascal.Borry at med.kuleuven.be Publisher: B M J PUBLISHING GROUP, BRITISH MED ASSOC HOUSE, TAVISTOCK SQUARE, LONDON WC1H 9JR, ENGLAND Subject Category: SOCIAL SCIENCES, BIOMEDICAL; ETHICS; MEDICAL ETHICS; SOCIAL ISSUES IDS Number: 027HU ISSN: 0306-6800 CITED REFERENCES: *WORLD MED ASS WORLD MED ASS DECL R : 1995 ARNOLD RM EMPIRICAL-RESEARCH IN MEDICAL-ETHICS - AN INTRODUCTION THEORETICAL MEDICINE 14 : 195 1993 ATTARAN A HLTH HUM RIGHTS 4 : 26 1999 BARRY P MED HLTH CARE PHILOS 7 : 41 2004 BORRY P The birth of the empirical turn in bioethics BIOETHICS 19 : 49 2005 BRODY BA QUALITY OF SCHOLARSHIP IN BIOETHICS JOURNAL OF MEDICINE AND PHILOSOPHY 15 : 161 1990 FANGERAU H J MED ETHICS 30 : 299 2004 FETTERS MD The epidemiology of bioethics JOURNAL OF CLINICAL ETHICS 10 : 107 1999 FOX R BIOETHICS SOC CONSTR : 270 1998 GALLAGHER EB BIOETHICS SOC CONSTR : 166 1998 GARDNER G NURSING INQUIRY 3 : 153 1996 HOFF TJ Exploring the use of qualitative methods in published health services and management research MEDICAL CARE RESEARCH AND REVIEW 57 : 139 2000 HOM S ENGAGING WORLD USE E : 2004 HOPE T Empirical medical ethics JOURNAL OF MEDICAL ETHICS 25 : 219 1999 HULL SCH METHODS MED ETHICS : 147 2001 JENNINGS B SOCIAL SCI PERSPECTI : 261 1990 MCKIBBON KA BMC MED INFORM DECIS 4 : 11 2004 MOLEWIJK AC MED HLTH CARE PHILOS 7 : 85 2004 ROLFE G Faking a difference: evidence-based nursing and the illusion of diversity NURSE EDUCATION TODAY 22 : 3 2002 ROLFE G Insufficient evidence: the problems of evidence-based nursing NURSE EDUCATION TODAY 19 : 433 1999 SUGARMAN J The future of empirical research in bioethics JOURNAL OF LAW MEDICINE & ETHICS 32 : 226 2004 SUGARMAN J METHODS MED ETHICS : 19 2001 TURNER L Bioethics needs to rethink its agenda BRITISH MEDICAL JOURNAL 328 : 175 2004 WYATT JC Reading journals and monitoring the published work JOURNAL OF THE ROYAL SOCIETY OF MEDICINE 93 : 423 2000 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:37:26 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:37:26 -0400 Subject: Gil-Montoya JA, Navarrete-Cortes J, Pulgar R, Santa S, Moya-Anegon F "World dental research production: an ISI database approach (1999-2003) " EUROPEAN JOURNAL OF ORAL SCIENCES 114 (2): 102-108 APR 2006 Message-ID: J.A. Gil-Montoya : E-mail Addresses: jagil at ugr.es Title: World dental research production: an ISI database approach (1999-2003) Author(s): Gil-Montoya JA, Navarrete-Cortes J, Pulgar R, Santa S, Moya-Anegon F Source: EUROPEAN JOURNAL OF ORAL SCIENCES 114 (2): 102-108 APR 2006 Document Type: Article Language: English Cited References: 19 Times Cited: 0 Abstract: The objective of this study was to obtain a geographic world map of scientific production in dentistry by analysing published papers. Articles and reviews in the Dentistry, Oral Surgery & Medicine category published from 1999 to 2003 were accessed through the ISI database. The data were analyzed quantitatively (number of documents, number of researchers, productivity, interannual variation rate and relative specialization index), qualitatively (weighted impact factor, relative impact factor, citation rate per document and top 5 publications) and socioeconomically (number of documents per inhabitant and per dentist and in relation to the country's GDP).
The USA, UK, Japan and Scandinavian countries were found to be the most productive countries (number of publications). Publications from Scandinavian countries were also of high quality as measured by Impact Factor and Citation Rate, while the UK had one of the highest productivity rates (number of documents per researcher). Addresses: Gil-Montoya JA (reprint author), Fac Odontol, Paseo Cartuja S-N, Granada, 18071 Spain Univ Granada, Fac Dent, Granada, Spain Univ Granada, Fac Bibliotecon & Documentat, Granada, Spain E-mail Addresses: jagil at ugr.es Publisher: BLACKWELL PUBLISHING, 9600 GARSINGTON RD, OXFORD OX4 2DQ, OXON, ENGLAND Subject Category: DENTISTRY, ORAL SURGERY & MEDICINE IDS Number: 026GJ ISSN: 0909-8836 CITED REFERENCES: NATURE 435 : 1003 2005 BENZER A LANCET 341 : 8839 1993 CLEATONJONES P J DENT EDUC 66 : 690 2002 ELIADES T J OROFAC ORTHOP 62 : 74 2001 GARFIELD E How can impact factors be improved? BRITISH MEDICAL JOURNAL 313 : 411 1996 GARFIELD E CURR CONTENTS 25 : 3 1994 HEFLER L LANCET 353 : 9167 1999 KAWAMURA M Lotka's law and the pattern of scientific productivity in the dental science literature MEDICAL INFORMATICS AND THE INTERNET IN MEDICINE 24 : 309 1999 LINDE A On the pitfalls of journal ranking by impact factor (R) EUROPEAN JOURNAL OF ORAL SCIENCES 106 : 525 1998 MAVROPOULOS A Orthodontic literature: An overview of the last 2 decades AMERICAN JOURNAL OF ORTHODONTICS AND DENTOFACIAL ORTHOPEDICS 124 : 30 2003 MOED HF NEW BIBLIOMETRIC TOOLS FOR THE ASSESSMENT OF NATIONAL RESEARCH PERFORMANCE - DATABASE DESCRIPTION, OVERVIEW OF INDICATORS AND FIRST APPLICATIONS SCIENTOMETRICS 33 : 381 1995 RAHMAN M Biomedical research productivity - Factors across the countries INTERNATIONAL JOURNAL OF TECHNOLOGY ASSESSMENT IN HEALTH CARE 19 : 249 2003 RAHMAN M J EPIDEMIOL 10 : 290 2000 RAHMAN M Biomedical publication - global profile and trend PUBLIC HEALTH 117 : 274 2003 THOMPSON DF Geography of US biomedical publications, 1990 to 1997 NEW ENGLAND JOURNAL OF MEDICINE 340 : 817 1999 UGOLINI D SCIENTOMETRICS 38 : 263 1997 VANRAAN A Advanced bibliometric methods for the evaluation of universities SCIENTOMETRICS 45 : 417 1999 WHITEHOUSE G Citation rates and impact factors: should they matter? BRITISH JOURNAL OF RADIOLOGY 74 : 1 2001 YANG S PEDIAT DENT 23 : 415 2001 From garfield at CODEX.CIS.UPENN.EDU Tue Apr 25 23:47:17 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Tue, 25 Apr 2006 23:47:17 -0400 Subject: Frandsen TF, Rousseau R, Rowlands I "Diffusion factors " JOURNAL OF DOCUMENTATION 62 (1): 58-72 2006 Message-ID: E-mail: kk02fa at db.dk Title: Diffusion factors Author : Frandsen TF, Rousseau R, Rowlands I Source: JOURNAL OF DOCUMENTATION 62 (1): 58-72 2006 Document Type: Article Language: English Cited References: 16 Times Cited: 0 Abstract: Purpose - The purpose of this paper is to clarify earlier work on journal diffusion metrics. Classical journal indicators such as the Garfield impact factor do not measure the breadth of influence across the literature of a particular journal title. As a new approach to measuring research influence, the study complements these existing metrics with a series of formally described diffusion factors. Design/methodology/approach - Using a publication-citation matrix as an organising construct, the paper develops formal descriptions of two forms of diffusion metric: "relative diffusion factors" and "journal diffusion factors" in both their synchronous and diachronous forms.
It also provides worked examples for selected library and information science and economics journals, plus a sample of health information papers to illustrate their construction and use. Findings - Diffusion factors capture different aspects of the citation reception process than existing bibliometric measures. The paper shows that diffusion factors can be applied at the whole journal level or for sets of articles and that they provide a richer evidence base for citation analyses than traditional measures alone. Research limitations/implications - The focus of this paper is on clarifying the concepts underlying diffusion factors and there is unlimited scope for further work to apply these metrics to much larger and more comprehensive data sets than has been attempted here. Practical implications - These new tools extend the range of tools available for bibliometric, and possibly webometric, analysis. Diffusion factors might find particular application in studies where the research questions focus on the dynamic aspects of innovation and knowledge transfer. Originality/value - This paper will be of interest to those with theoretical interests in informetric distributions as well as those interested in science policy and innovation studies. Addresses: Frandsen TF (reprint author), Royal Sch Lib & Informat Sci, Copenhagen, Denmark Royal Sch Lib & Informat Sci, Copenhagen, Denmark KHBO, Ind Sci & Technol, Oostende, Belgium City Univ London, Dept Informat Sci, London, EC1V 0HB England E-mail Addresses: ir at soi.city.ac.uk Publisher: EMERALD GROUP PUBLISHING LIMITED, 60/62 TOLLER LANE, BRADFORD BD8 9BY, W YORKSHIRE, ENGLAND Subject Category: COMPUTER SCIENCE, INFORMATION SYSTEMS; INFORMATION SCIENCE & LIBRARY SCIENCE IDS Number: 022LE ISSN: 0022-0418 CITED REFERENCES: EGGHE L MATHEMATICAL RELATIONS BETWEEN IMPACT FACTORS AND AVERAGE NUMBER OF CITATIONS INFORMATION PROCESSING & MANAGEMENT 24 : 567 1988 EGGHE L Average and global impact of a set of journals SCIENTOMETRICS 36 : 97 1996 FRANDSEN TF Article impact calculated over arbitrary periods JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 56 : 58 2005 FRANZ G Physical and chemical properties of the epidote minerals - An introduction EPIDOTES 56 : 1 2004 GARFIELD E NEW FACTORS IN EVALUATION OF SCIENTIFIC LITERATURE THROUGH CITATION INDEXING AMERICAN DOCUMENTATION 14 : 195 1963 INGWERSEN P The publication-citation matrix and its derived quantities CHINESE SCIENCE BULLETIN 46 : 524 2001 JIN BH Chinese science citation database: Its construction and application SCIENTOMETRICS 45 : 325 1999 KING DN THE CONTRIBUTION OF HOSPITAL LIBRARY INFORMATION-SERVICES TO CLINICAL CARE - A STUDY IN 8 HOSPITALS BULLETIN OF THE MEDICAL LIBRARY ASSOCIATION 75 : 291 1987 KLEIN MS EFFECT OF ONLINE LITERATURE SEARCHING ON LENGTH OF STAY AND PATIENT-CARE COSTS ACADEMIC MEDICINE 69 : 489 1994 LINDBERG DAB USE OF MEDLINE BY PHYSICIANS FOR CLINICAL PROBLEM-SOLVING JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 269 : 3124 1993 MARSHALL JG THE IMPACT OF THE HOSPITAL LIBRARY ON CLINICAL DECISION-MAKING - THE ROCHESTER STUDY BULLETIN OF THE MEDICAL LIBRARY ASSOCIATION 80 : 169 1992 ROUSSEAU R INFORMETRICS 87 88 : 249 1988 ROUSSEAU R Robert Fairthorne and the empirical power laws JOURNAL OF DOCUMENTATION 61 : 194 2005 ROWLANDS I Journal diffusion factors: a new approach to measuring research influence ASLIB PROCEEDINGS 54 : 77 2002 SHERWILLNAVARRO PJ Research on the value of medical library services: does it make an impact in the health care 
literature? JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 92 : 34 2004 WU YS China Scientific and Technical Papers and Citations (CSTPC): History, impact and outlook SCIENTOMETRICS 60 : 385 2004 From anouruzi at YAHOO.COM Thu Apr 27 04:09:30 2006 From: anouruzi at YAHOO.COM (Alireza Noruzi) Date: Thu, 27 Apr 2006 01:09:30 -0700 Subject: Webology: Volume 3, Number 1, 2006 Message-ID: Dear All, apologies for cross-posting. We are pleased to inform you that Vol. 3, No. 1 of Webology, an OPEN ACCESS journal, is published and is available ONLINE now. This issue contains: ------------------ Editorial: Link Spam and Search Engines -- Alireza Noruzi -- http://www.webology.ir/2006/v3n1/editorial7.html ----------------------------------------- Stemming and root-based approaches to the retrieval of Arabic documents on the Web -- Haidar Moukdad -- http://www.webology.ir/2006/v3n1/a22.html ----------------------------------------- E-marketing, Unsolicited Commercial E-mail, and Legal Solutions -- Li Xingan -- http://www.webology.ir/2006/v3n1/a23.html ----------------------------------------- Environmental Knowledge and Marginalized Communities: The Last Mile Connectivity -- A. Neelameghan and Greg Chester -- http://www.webology.ir/2006/v3n1/a24.html ----------------------------------------- Book Review of 'Digital Libraries: Principles and Practice in a Global Environment'/ Lucy A. Tedd & Andrew Large -- Hamid R. Jamali -- http://www.webology.ir/2006/v3n1/bookreview3.html ========================================= Call for Papers: http://www.webology.ir/callforpapers.html ========================================= Regards, A. Noruzi __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From notsjb at LSU.EDU Thu Apr 27 12:02:52 2006 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Thu, 27 Apr 2006 11:02:52 -0500 Subject: HistCite and Program Evaluation Message-ID: I once recommended the software HistCite developed by Sasha Pudovkin and Gene Garfield for the evaluation of scientific programs. Sasha Pudovkin has kindly supplied me with a description of the way this software works, and I am pasting it below, so that you can better understand its operation. As you can see, the software fulfils the first basic requirement for such evaluations by being very flexible in defining a subject set of papers upon which to base the evaluation. Once this subject set of papers has been defined, it allows the ranking of the institutions strongest in the given field. I have seen it work and was mightily impressed with the results. I would only issue a number of caveats. First, the ISI data, on which the software works, has a lot of biases in it, particularly with respect to non-US institutions. Therefore, any rankings derived from it should be carefully checked against the opinions of the pertinent experts in the given subject field. Second, there are subject areas which are not amenable to evaluation by publication and citation counts. For a good overall view of this, one should look at the criteria developed by the US National Research Council to evaluate programs in different academic fields. The HistCite program is much better to use than something like the impact factor. Here you are working with a defined subject set of actual papers, whereas the impact factor is nothing but a crude estimate of the arithmetic mean of the citations to the papers of a given journal.
While this has the advantage of identifying journals that have the propensity to publish highly cited articles, the arithmetic mean is in no way a viable estimate of the central tendency of the papers published by these journals, given the type of distributions with which one is dealing. Sasha's description is below. SB The program HistCite is capable of identifying the papers that are most important for a delimited research area. To find such topically important papers (hence, authors, journals, institutions) one should do the following: 1) perform a topical search within the Science Citation Index (using the Web of Science portal) and export the generated bibliography (containing cited references for each retrieved paper); 2) enter the bibliography into the HistCite software; 3) sort the papers by Local Citation Score. The papers at the top of the sorted list will be the most cited papers within the bibliography, that is, most cited by the authors working within the research area; thus these papers will have the most impact for this research area. Besides these high-impact, topically relevant papers, the HistCite software will identify the most important (for this research area) prior literature which lies at the foundation (or background) of the topical research. These will be listed as "Outer References", that is, papers which are not included in the generated topical bibliography but are instead inferred from the lists of cited references within each paper in the bibliography. The efficiency of the procedure (of finding the most topically important and relevant papers) strongly depends on the quality of the topical search within the SCI: the better the search query (the better it fits the topic of research), the better the result. The searches necessary for the generation of topical bibliographies may be key-word searches, or citation searches, or author searches, or a combination of them. From the identified set of topically relevant papers of high impact the HistCite software is capable of generating lists of the authors, institutions or journals most productive in the research area.
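For readers who want to see what the Local Citation Score step amounts to, here is a minimal sketch in Python. It is not HistCite itself, only an illustration of the counting it describes: within a downloaded topical bibliography, a record's local score is the number of other records in the set that cite it, and cited items falling outside the set are the outer references. The record structure and identifiers are invented.

    from collections import Counter

    # Each record: an identifier plus the identifiers it cites.
    # These four records are invented, purely to illustrate the counting.
    records = {
        "paper_A": ["paper_B", "older_work_X"],
        "paper_B": ["older_work_X", "older_work_Y"],
        "paper_C": ["paper_A", "paper_B", "older_work_X"],
        "paper_D": ["paper_B"],
    }

    in_set = set(records)
    local_citation_score = Counter()
    outer_references = Counter()

    for paper, refs in records.items():
        for ref in refs:
            if ref in in_set:
                local_citation_score[ref] += 1   # cited from within the topical set
            else:
                outer_references[ref] += 1       # foundational literature outside the set

    # Step 3 of the description: sort by Local Citation Score.
    print(local_citation_score.most_common())   # e.g. [('paper_B', 3), ('paper_A', 1)]
    print(outer_references.most_common())       # e.g. [('older_work_X', 3), ('older_work_Y', 1)]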
Without making decisions, however, a hyper-incursive system would be overloaded with uncertainty. Under this pressure, informed decisions tend to replace the "natural preferences" of agents and a knowledge-based order can increasingly be shaped. ** draft available at http://www.leydesdorff.net/durban06/index.htm; apologies for cross-postings _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20-525 6598; fax: +31-20-525 3681 loet at leydesdorff.net; http://www.leydesdorff.net From garfield at CODEX.CIS.UPENN.EDU Fri Apr 28 16:52:23 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 28 Apr 2006 16:52:23 -0400 Subject: Rangachari PK "Cite and oversight" Drug Discovery Today 9(22):954-956, November 15, 2004 Message-ID: e-mail: patangi.k.rangachari at learnlink.mcmaster.ca Title: Cite and oversight Author(s): Rangachari PK Source: DRUG DISCOVERY TODAY 9 (22): 954-956 NOV 15 2004 Document Type: Editorial Material Language: English Cited References: 12 Times Cited: 0 Addresses: Rangachari PK (reprint author), Univ Calgary, Fac Med, B HSc Program, Calgary, AB Canada Univ Calgary, Fac Med, B HSc Program, Calgary, AB Canada Publisher: ELSEVIER SCI LTD, THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, OXON, ENGLAND Subject Category: PHARMACOLOGY & PHARMACY IDS Number: 870EA ISSN: 1359-6446 THE AUTHOR HAS KINDLY PERMITTED US TO POST THE FULL TEXT OF THIS ARTICLE Cite and oversight P.K. Rangachari Professor of Pharmacology and Therapeutics Director of Inquiry, B.HSc Program Faculty of Medicine University of Calgary Calgary, Canada The research publication is the currency of our scientific lives, and the value of our careers is measured by what we publish. It is not enough to document what we have done; it is also important to know whether we have made a meaningful difference, hence the desire to measure our worth and value. Enter citation analysis and impact factors. I have been interested in this issue for a long time, but the recent stimulation came from reading the article in Drug Discovery Today by Raymond C. Rowe entitled Publish or perish [1]. This response to Rowe's article represents one individual's opinion on a controversial issue. It is not a meta-analysis or a review; therefore, there will not be a surfeit of references; the citations will be idiosyncratic to suit my purpose. I am following in the footsteps of many, only honest enough to admit it. If at the end of it all I have provoked a response, I will rest content. The research publication that plays such an inordinate part in our professional lives was largely the creation of a single individual, Henry Oldenburg. His tale has often been told [2-4]; therefore, a simple summary will suffice here. Born in Bremen between 1617 and 1620, he settled in England after 1653, becoming friendly with those who were trying to propagate a new way of looking at knowledge and learning. The proponents of this "new learning" created the Royal Society and Oldenburg became its secretary. One of the tenets of the new way was that knowledge was a public good to be shared by those who contributed to its production. Medieval secrecy gave way to open discussion and dissemination.
Individuals were not working in private and in secrecy but were contributing as a community to the creation of an edifice of knowledge. Thus, information that was gathered by a given individual needed to be disseminated and shared. This new approach demanded that information be widely disseminated, and the form in which this occurred was a letter. Oldenburg, as the Secretary of the Royal Society, took the new form of communication, the letter, codified it and transformed it into the research article as we know it today. Clearly, this laid the beginnings of the assessment of contributions to knowledge through publication of letters and their acceptance and recognition by a community of scholars. As time went on, much of the paraphernalia of the modern research journal came into being. Oldenburg played a crucial role in all of this. What he set in place has continued to this day, with minor changes. The new learning was a creation of "dead white European males" of the 17th Century [5], and few perhaps at the time recognized that within a few centuries that approach would become truly international and legitimized as the way of contributing to knowledge. But the instruments of expansion of one generation become the vested interest of the next. Whereas failures can be written off or ignored, success demands accountability; so began the publication game. The early versions merely counted; one who published more was seen as being better than one who did not. That was not enough. After all, did it matter that one published? What if, like the sound of falling trees in a forest, no one heard? Enter citation analysis and impact factors. The intentions were quite honourable [7]. In all the arguments and controversies about this issue, it is important to note that Garfield and others have been scrupulous in pointing out the caveats and pleading for proper usage. If the scientific community has erred, it is entirely their responsibility. The impact factor is the ratio of citations to papers published in a given journal to the total number of publications in that journal over a given time. At a simplistic level, the notion makes sense. If a given journal scrupulously publishes only those articles that are of superb quality and those articles are avidly read and properly referred to by the community of scholars, then that journal would have a tremendous impact. Conversely, a journal that accepts everything that is submitted to it might find that most articles remain unread and uncited and would have no impact at all. Scientists would like their observations to count; therefore, they would assiduously seek out journals with better editorial policies and higher impact factors. This seems to be intuitive and obvious; however, a lot of this founders on the bedrock of reality. The arguments that have raged on this issue are well documented and I am not going to cite them all. An easy way to enter the controversy is for readers to refer to a series of articles that appeared in Trends in Biochemical Sciences in 1989 [8,9]. One of the problems with citation analysis and impact factors relates to the numerator: the total number of citations to a given paper or papers in a given journal. This assumes that scientists would be scrupulous in giving credit where credit is due and would cite only those papers that have meaning for their own paper. How true is this? The evidence is mixed [10-12]. If the critics are right and the problem is real, are there any solutions?
I am going to propose several solutions, in descending order of outrageousness. (1) Return to the ideology of the New Learning, which, in a sense, began it all. If scholars are building the edifice of science brick by brick, why should they not remain anonymous? It is the product alone that matters. There will be no takers for that option. (2) Argue that modern science can rarely be done without institutional support. Therefore, the names of the institutions alone should appear on the paper, with the individual contributors in the acknowledgements. I seriously doubt if this option would find much favour either. (3) Take citations seriously and annotate them. Certain journals do, but the practice is not widespread. That way referees can cross-check whether the citations are real, spurious or unwarranted. However, before one contemplates any such solutions, I would suggest that more experiments be done. Rather than rely on the testimony of experts and metricians of one kind or another, I propose that we decide on our own, based on simple experiments. In a Baconian spirit, we can call these Experimenta Reflecta or Illuminata: Experiment 1: With each of your publications, turn to the reference list, go through each of the references that you have cited and ask the following questions: Why did I cite this paper? Is the person a potential referee? Or even on the editorial board? Did I just happen to have it in my files? Did I find it from the reference list of some other paper? Have I actually read this? Am I citing it to show how current I am in my literature search? Did I just pull it off the web? Is it because it is in English and I cannot read other languages? Is it from a reputed journal and therefore adds credibility to my claims? You can repeat this experiment several times to get more quantitative data. Experiment 2: Select one to five papers that have cited you. Now take a deep breath and read each paper carefully. Ask yourself: why was your paper cited? Was it for the sorts of reasons that you found for yourself? Would it have made any material difference to the publication at hand if the author had failed to cite your paper? Be brutally honest with yourself. Remember that you do not have to publish the results or even tell anybody about them. These experiments should help you decide for yourself the meaning of it all. References 1 Rowe, R.C. (2004) Publish or perish. Drug Discov. Today 9, 590-591 2 Porter, J.R. (1964) The Scientific Journal - 300th Anniversary. Bacteriol. Rev. 28, 211-230 3 Rangachari, P.K. (1994) The Word is the Deed: The ideology of the research paper in experimental science. Am. J. Physiol. (Adv. Physiol. Educ.) 12, S120-S136 4 Hall, A.R. and Hall, M. (1965) The Correspondence of Henry Oldenburg. The University of Wisconsin Press, Madison, WI, USA 5 Shapin, S. (1994) A Social History of Truth: Civility and Science in Seventeenth Century England. p xxii, Chicago Press, Chicago, IL, USA 6 Godlee, F. and Jefferson, T. (2003) Peer Review in the Health Sciences. (2nd edn), BMJ Books, London, UK 7 Garfield, E. (1970) Citation indexing for studying science. Nature 227, 669-671 8 MacRoberts, M.H. and MacRoberts, B.R. (1989) Citation Analysis and the science policy arena. Trends Biochem. Sci. 14, 8 9 Cole, S. (1989) Citations and the evaluation of individual scientists. Trends Biochem. Sci. 14, 9-13 10 Siekevitz, P. (1991) Citations and the tenor of the times. FASEB J. 5, 139 11 Moravcsik, M.J. and Murugesan, P. (1975) Some results on the function and quality of citations. Soc.
Studies Sci. 5, 86-92 12 Dumont, J.E. (1989) From bad to worse: evaluation by journal impact. Trends Biochem. Sci. 14, 327-328 From garfield at CODEX.CIS.UPENN.EDU Fri Apr 28 17:10:55 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 28 Apr 2006 17:10:55 -0400 Subject: Soteriades ES, Rosmarakis ES, Paraschakis K, Falagas ME "Research contribution of different world regions in the top 50 biomedical journals (1995-2002) " FASEB JOURNAL 20 (1): 29-34 JAN 2006 Message-ID: E-mail Addresses: matthew.falagas at tufts.edu Title: Research contribution of different world regions in the top 50 biomedical journals (1995-2002) Author(s): Soteriades ES, Rosmarakis ES, Paraschakis K, Falagas ME Source: FASEB JOURNAL 20 (1): 29-34 JAN 2006 Document Type: Article Language: English Cited References: 22 Times Cited: 0 Abstract: We evaluated all articles published by different world regions in the top 50 biomedical journals in the database of the Journal Citation Reports-Institute for Scientific Information for the period between 1995 and 2002. The world was divided into 9 regions [United States of America (the U.S.), Western Europe, Japan, Canada, Asia, Oceania, Latin America and the Caribbean, Eastern Europe, and Africa] based on a combination of geographic, economic and scientific criteria. The number of articles published by each region, the mean impact factor, and the product of the above two parameters were our main indicators. The above numbers were also adjusted for population size, gross national income per capita of each region, and other factors. Articles published from the U.S. made up about two-thirds of all scientific papers published in the top 50 biomedical journals between 1995 and 2002. Western Europe contributed approximately a quarter of the published papers, while the remaining one-tenth of articles came from the rest of the world. Canada, however, ranked second when the number of articles was adjusted for population size. The U.S. is by far the highest-ranking country/region in publications in the top 50 biomedical journals even after adjusting for population size, gross national product, and other factors. Canada and Western Europe share the second place while the rest of the world is far behind. Soteriades, E. S., Rosmarakis, E. S., Paraschakis, K., Falagas, M. E. Research contribution of different world regions in the top 50 biomedical journals (1995-2002).
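The indicator construction described in the Soteriades abstract, articles per region, mean impact factor, their product, and a per-population adjustment, reduces to simple arithmetic. The Python sketch below shows that arithmetic on invented inputs; the region names and numbers are placeholders, not the paper's data.

    # Invented per-region inputs: article count, mean impact factor of the
    # journals those articles appeared in, and population in millions.
    regions = {
        "Region A": {"articles": 60000, "mean_if": 12.0, "pop_millions": 300},
        "Region B": {"articles": 25000, "mean_if": 11.0, "pop_millions": 400},
        "Region C": {"articles": 4000,  "mean_if": 10.5, "pop_millions": 130},
    }

    for name, r in regions.items():
        product = r["articles"] * r["mean_if"]           # articles x mean impact factor
        per_million = r["articles"] / r["pop_millions"]  # population-adjusted output
        print(f"{name}: product = {product:,.0f}, "
              f"articles per million inhabitants = {per_million:,.1f}")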
Addresses: Falagas ME (reprint author), AIBS, 9 Neapoleos St, Maroussi, 15123 Greece AIBS, Maroussi, 15123 Greece Cyprus Int Inst Environm & Publ Hlth, Nicosia, Cyprus Tufts Univ, Sch Med, Dept Med, Boston, MA 02111 USA E-mail Addresses: matthew.falagas at tufts.edu Publisher: FEDERATION AMER SOC EXP BIOL, 9650 ROCKVILLE PIKE, BETHESDA, MD 20814-3998 USA Subject Category: BIOCHEMISTRY & MOLECULAR BIOLOGY; BIOLOGY; CELL BIOLOGY IDS Number: 021PU ISSN: 0892-6638 CITED REFERENCES: UN STAT YB : 2004 *I SCI INF SCI CIT IND J CIT RE : 2004 *NAT LIB MED IND MED DAT PUBM : 2004 *WORLD BANK WORLD DEV IND 2002 : 2004 BENNER M AUST HLTH REV 28 : 161 2004 BENZER A GEOGRAPHICAL ANALYSIS OF MEDICAL PUBLICATIONS IN 1990 LANCET 341 : 247 1993 COATES R Language and publication in Cardiovascular Research articles CARDIOVASCULAR RESEARCH 53 : 279 2002 GALLAGHER EJ Evidence of methodologic bias in the derivation of the Science Citation Index impact factor ANNALS OF EMERGENCY MEDICINE 31 : 83 1998 GARFIELD E CITATION INDEXES FOR SCIENCE - NEW DIMENSION IN DOCUMENTATION THROUGH ASSOCIATION OF IDEAS SCIENCE 122 : 108 1955 HEFLER L Geography of biomedical publications in the European Union, 1990-98 LANCET 353 : 1856 1999 KEISER J Representation of authors and editors from countries with different human development indexes in the leading literature on tropical medicine: Survey of current evidence BRITISH MEDICAL JOURNAL 328 : 1229 2004 LUUKKONEN T BIBLIOMETRICS AND EVALUATION OF RESEARCH PERFORMANCE ANNALS OF MEDICINE 22 : 145 1990 MELA GS Radiological research in Europe: a bibliometric study EUROPEAN RADIOLOGY 13 : 657 2003 NEUBERGER J Impact factors: uses and abuses EUROPEAN JOURNAL OF GASTROENTEROLOGY & HEPATOLOGY 14 : 209 2002 RAHMAN M Biomedical publication - global profile and trend PUBLIC HEALTH 117 : 274 2003 ROSMARAKIS ES Estimates of global production in cardiovascular diseases research INTERNATIONAL JOURNAL OF CARDIOLOGY 100 : 443 2005 SARAVIA NG Plumbing the brain drain BULLETIN OF THE WORLD HEALTH ORGANIZATION 82 : 608 2004 SEGLEN PO Why the impact factor of journals should not be used for evaluating research BRITISH MEDICAL JOURNAL 314 : 498 1997 THOMPSON DF Geography of US biomedical publications, 1990 to 1997 NEW ENGLAND JOURNAL OF MEDICINE 340 : 817 1999 UGOLINI D Oncological research overview in the European Union. A 5-year survey EUROPEAN JOURNAL OF CANCER 39 : 1888 2003 VERGIDIS PI IN PRESS EUR J CLIN : 2005 WHITEHOUSE GH Impact factors: facts and myths EUROPEAN RADIOLOGY 12 : 715 2002 From garfield at CODEX.CIS.UPENN.EDU Fri Apr 28 17:15:53 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 28 Apr 2006 17:15:53 -0400 Subject: Sydow M. "Random surfer with back step" FUNDAMENTA INFORMATICAE 68 (4): 379-398 DEC 2005 Message-ID: Marcin Sydow : msyd at pjwstk.edu.pl Title: Random surfer with back step Author(s): Sydow M Source: FUNDAMENTA INFORMATICAE 68 (4): 379-398 DEC 2005 Document Type: Article Language: English Cited References: 32 Times Cited: 0 Abstract: The World Wide Web with its billions of hyperlinked documents is a huge and important resource of information. This information needs to be filtered. Link analysis of the Web graph turned out to be a powerful tool for automatically identifying authoritative documents. One of the best examples is the PageRank algorithm used in Google [1] to rank search results.
In this paper we extend the model underlying the PageRank algorithm by incorporating "back button" usage modeling in order to make the model less simplistic. We explain the existence and uniqueness of the ranking induced by the extended model. We also develop and implement an efficient approximation method for computing the novel ranking and present successful experimental results obtained on 80- and 50-million page samples of the real Web. Author Keywords: Web information retrieval; link analysis; PageRank; back button modeling Addresses: Sydow M (reprint author), Polish Japanese Inst Informat Technol, Koszykowa 86, Warsaw, PL-02008 Poland Polish Japanese Inst Informat Technol, Warsaw, PL-02008 Poland E-mail Addresses: msyd at pjwstk.edu.pl Publisher: IOS PRESS, NIEUWE HEMWEG 6B, 1013 BG AMSTERDAM, NETHERLANDS Subject Category: COMPUTER SCIENCE, SOFTWARE ENGINEERING; MATHEMATICS, APPLIED IDS Number: 020YQ ISSN: 0169-2968 Cited references : ABITEBOUL S P 12 INT WWW C : 2003 BRODER A P 9 WWW C : 2000 CHAKRABARTI S Surfing the Web backwards COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING 31 : 1679 1999 DING C 49372 LAWR BERK NAT : 2001 FAGIN R ANN APPL PROBABILITY 11 : 2001 GARFIELD E SCIENCE 178 : 1972 GOLUB G MATRIX COMPUTATIONS : 1996 GREENBERG S P 5 ANN HUM FACT WEB : 1999 HAVELIWALA T 2 EIGENVALUE GOOGLE : 2003 HAVELIWALA T COMPUTING PAGERANK U : 2003 HAVELIWALA T EFFICIENT COMPUTATIO : 1999 HENZINGER M LINK ANAL WEB INFORM 23 : 3 2000 KAMVAR S EXPLOITING BLOCK STR : 2003 KAMVAR S P NSMC 03 : 2003 KATZ L PSYCHOMETRIKA 18 : 1953 KESSLER M AM DOCUMENTATION 14 : 1963 KLEINBERG J P 5 ANN INT COMP COM : 1999 KLEINBERG J P 9 ACM SIAM S DISCR : 1998 KLOPOTEK M LECT NOTES COMPUTER 2869 : 2003 LARSON R ANN M AM SOC INF SCI : 1996 LEMPEL R P 9 INT WWW C : 2000 MATHIEU F P 13 WWW C ALT TRACK : 2004 MOTWANI R RANDOMIZED ALGORITHM : 1995 NARIN F INFORMATION PROCESSI 12 : 297 1976 NEWMAN M P NATL ACAD SCI 98 : 2001 PAGE L PAGERANK CITATION RA : 1998 PANDURANGAN G P 8 ANN INT COMP COM : 2002 SMALL H J AM SOC INFO SCI 24 : 1973 SYDOW M ADV SOFT COMPUTING : 2004 SYDOW M IN PRESS P 14 INT WO : 2005 SYDOW M P 13 INT WWW C : 2004 SYDOW M THESIS POLISH ACAD S : 2004 From garfield at CODEX.CIS.UPENN.EDU Fri Apr 28 17:20:50 2006 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 28 Apr 2006 17:20:50 -0400 Subject: Koppenaal, DW "JAAS - 20 years of manuscripts, citations and scientific impact " Journal of Analytical Atomic Spectrometry 21(3):259-262, 2006. Message-ID: D.W. Koppenaal : dw_koppenaal at pnl.gov TITLE : JAAS - 20 years of manuscripts, citations and scientific impact AUTHOR: Koppenaal, DW SOURCE: Journal of Analytical Atomic Spectrometry 21(3):259-262, 2006. Royal Soc. of Chemistry, Cambridge FULL TEXT AVAILABLE IN PDF FORMAT AT : (PLEASE COPY THE ENTIRE FOLLOWING LINE AND PASTE IN "ADDRESS" IN YOUR BROWSER:) http://www.rsc.org/delivery/_ArticleLinking/DisplayHTMLArticleforfree.cfm?JournalCode=JA&Year=2006&ManuscriptID=b601801g&Iss=3
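Since the Sydow abstract above builds on the standard random-surfer model, a baseline sketch of PageRank power iteration may help readers place the extension. The Python code below implements only the classical damped random surfer, not the paper's back-button variant; the four-page graph and the damping value of 0.85 are illustrative assumptions.

    def pagerank(links, damping=0.85, iterations=50):
        """Plain power-iteration PageRank on a dict {page: [outlinks]}."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                      # dangling page: spread its mass evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Invented four-page web graph.
    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 4))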