
A Comment on Bibliometric Indicators as Research Performance Evaluation Tools

Dr. van Raan in his paper mentions the value of bibliometrics for assessing the performance of research organizations and individuals, notwithstanding all the pitfalls and problems we just heard about. Let me describe a report that someone on my staff is working on that we expect to publish within a couple of months. At the moment we are trying to sort out methodological problems and also to put it in a much more readable form for our non-technical audience. We have attempted to use bibliometric methods as a measure of our own performance as a granting organization, specifically by looking at how well the peer review system has made its decisions in sorting out those who should continue to get NSF grants and those who should not. So we were asking: do our divisions renew their better grants? And, in looking at that, do NSF grantees publish in better journals? Do they have highly cited papers in the same journals, and so forth?

We picked one division from each of our five research support directorates, for political reasons. We have astronomy in the physical sciences; in our computer science directorate we picked computer research; in engineering, electrical and communications systems; and in biology, molecular biology. And in our geoscience area, our polar program, which I think, for reasons I'll mention later, was a mistake.

We looked at seventy proposals that had been declined or awarded from each of those divisions, and we took the bibliography which is submitted by the researcher when the proposal comes around for review. In that bibliography they are supposed to refer specifically to publications that had been produced on a previous NSF grant. From the seventy proposals in each division (about three hundred and fifty grants altogether) we got fifteen hundred and four papers that acknowledged NSF support.

Computer Horizons in Philadelphia counted the citations for each of those fifteen hundred and four papers, and when they found one in a particular journal, they then took the next similar article in that journal that was not supported by NSF as a comparison point. In this way they created a sort of self-comparing database: a reference list of articles supported by NSF, and articles in the same journals not supported by NSF.
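To make the pairing idea concrete, here is a minimal sketch of how such a matched comparison set might be assembled; the field names (journal, position, nsf_supported) are assumptions for illustration, not details from the actual Computer Horizons procedure.

```python
# Sketch of the matched-comparison construction described above.
# All field names are hypothetical; the real procedure is only summarized in the talk.

def build_comparison_set(journal_articles, nsf_papers):
    """For each NSF-supported paper, take the next article in the same journal
    that does not acknowledge NSF support as its comparison point."""
    matched_pairs = []
    for paper in nsf_papers:
        # Candidate matches: later articles in the same journal without NSF support.
        candidates = [a for a in journal_articles
                      if a["journal"] == paper["journal"]
                      and a["position"] > paper["position"]
                      and not a["nsf_supported"]]
        if candidates:
            # "Next similar article" is approximated here simply as the next one by position.
            comparison = min(candidates, key=lambda a: a["position"])
            matched_pairs.append((paper, comparison))
    return matched_pairs
```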

The results were that for three of our divisions, namely astronomy, electrical engineering and molecular biology, we saw that in retrospect those divisions had indeed been sorting out the more productive from the less productive researchers when the time came around to renew their grants. For computer science it was very mixed, because one-third of the computer science grantees had not published at all.

And we learned later by talking with several of the computer scientists, coming back to van Raan's point about not drawing conclusions without talking to the peers, that in their own view publication practices in that field are extremely sloppy. For instance, they give talks at workshops, and in conversations or papers people refer to those talks for years and never bother to publish. Also, it is a developing field and there is a lot of turbulence in it. Our polar program was a very mixed case, and I think the reason is that it is not a coherent field of science. It is an aggregation of various areas working in the Arctic and the Antarctic: earth sciences, biology, social sciences, and so forth, a lot of unrelated fields, so we couldn't tell too much from that.

We did find, in comparing the average citation ratios of NSF-supported papers with other papers in the same journals, that for our astronomy, computer science and electrical engineering programs those papers are cited twice as often as papers supported by agencies other than NSF. So we were supporting the most prolific researchers in these fields. We have a direct comparison there, since so much molecular biology is supported by the National Institutes of Health. So that gives us a little bit of an argument again to go to the political system and say that:

"Yes, you puta lot into the Institutes of Health and these other areas, but when you put it into NSF you get a lot more return for your money."

Now we also looked at the question of non-citedness. You may be familiar with the little controversy that was started in Science magazine a few months ago by David Pendlebury, who was saying that quite a proportion of scientific papers are not cited at all, and that therefore a lot of what is done in science is worthless. That shocked an awful lot of knowledgeable scientists. Well, it turned out that he was referring not only to journal articles, but to all sorts of letters and notes and summaries and all sorts of things which you wouldn't expect to be cited. Anyway, after taking those out we did our own comparison and found that in most cases NSF articles were non-cited to a lesser degree.
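A minimal sketch of that non-citedness comparison, assuming each record carries a document type and a citation count (hypothetical field names, for illustration only): restrict to full journal articles, then compare the share of never-cited items in each group.

```python
# Sketch of the non-citedness comparison: drop letters, notes and similar items,
# then measure the fraction of full articles that were never cited.

def uncited_share(papers):
    """Fraction of full journal articles with zero citations (None if no articles)."""
    articles = [p for p in papers if p["doc_type"] == "article"]
    if not articles:
        return None
    return sum(1 for p in articles if p["citations"] == 0) / len(articles)

# Example usage (hypothetical lists):
# nsf_rate = uncited_share(nsf_papers)
# other_rate = uncited_share(comparison_papers)
# A lower nsf_rate would match the finding that NSF articles were non-cited to a lesser degree.
```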

This is a complicated report, and as I said we are trying to simplify it for our audience, which is not only research program managers in our own Foundation but Congressional staff, White House staff, media and so forth. And we did one earlier on behavioral and neural sciences at NSF which had similar outcomes with regard to comparisons with support from other agencies.

We are doing a couple of things now with bibliometrics, but we are not too far along on them. One is a project which was started several months ago: we are looking at the big amounts of money that were put into computer science departments in the 1980s to see what effect that had, and we are trying to use bibliometric data to tell the impact of that funding over the years. Another that we are now starting on is also going to have a bibliometric component, that is, in our neuroscience area, where we're comparing our role with the National Institutes of Health.

We are not doing too much bibliometric work, but we are starting to incorporate it more and more into our various projects, particularly as we develop sources, contractors and so forth who know how to do this kind of work, and as we develop our own staff expertise in how to use it and what the problems are. So I think we are pushing into this area rather crudely, and we need to develop our own sophistication.
