
A Comment on Peer Review Evaluation

Gidefeldt has described the procedures applied by the Swedish Natural Science Research Council, NFR, in evaluations of subdisciplines. NFR has gained considerable experience in this field and has shown how to balance the various interests involved in an evaluation process. When evaluations have to take place, I believe their way of doing it is suitable. The Norwegian Natural Science Research Council (RNF) also applies a similar procedure in its evaluations.

"The ultimate aim of the evaluations is to encourage good research" (Gidefeldt's conclusion). Scientific quality is essential, and may be the only criterion when basic science is concemed. High quality science is relevant by nature.

In order to identify high-quality science, evaluation in one form or another is necessary. There is no debate about the method: the only competent, and hence acceptable, way of measuring scientific quality is by using experts in the field, so-called peer review.

Still, I must admit some resistance to broad subdiscipline evaluations: not to evaluations as such, but to their use by the research councils as a general procedure for gathering information. I will give a few arguments for this resistance.

Subdiscipline evaluations are resource-demanding. I believe Gidefeldt underestimates the costs by considering only the direct costs to the research council.

My experience is that these evaluations demand a great deal of time and effort from the scientific groups involved. We are not used to measuring time consumption in the scientific community, but there is no doubt that evaluation processes divert considerable attention and energy away from "production". It is therefore fair to ask: do the benefits justify these costs?

One argument often used, though I do not think Gidefeldt mentions it, is that the evaluation process is stimulating for the scientists. I will not comment, except to say that active research groups find other and easier ways to get stimulation.

A subdiscipline evaluation ends in a report which is useful both in internal processes within research councils and in communication between a research council and its surroundings, both at the political level and with research institutions. Still, I think evaluations have had limited influence on decisions.

The challenge for research councils is to make choices. Considering the small size and the transparency of the scientific community in a small country like Norway, I doubt the information value of evaluation reports for the programme committees in research councils. According to Gidefeldt this is also the case in Sweden: "There are few elements of surprise in the evaluation reports; they more or less confirm the picture the research council already has drawn on the basis of background knowledge and advice from referees on applications for research grants".

Research councils are battle grounds. Although each member is supposed to act independently, he or she has limited insight into branches of science outside his or her own field. Somebody else has to judge scientific quality. The more prestige assembled in an evaluation group, the more weight its statements receive. Therefore, evaluations may be used as weapons in these internal battles. This reveals a deficiency in the research council system which cannot easily be solved. It is certainly not solved by using more and larger evaluations; that corresponds only to a change from "conventional weapons" to "nuclear weapons". What worries me is that an evaluation report makes it easier for a research council to implement decisions.

The neutral judgement of an international expert group strengthens the political platform and authority of a research council. Properly used, I think evaluation reports are of great value in implementing decisions.

Although there might be exceptions, the general procedure for evaluating subdisciplines could be the following: subdiscipline evaluations should be limited to cases where a research council, according to its own strategy, wants to make changes, e.g. expanding or reducing an activity, funding expensive instruments, or initiating new research programmes. This will certainly reduce the number of evaluations and at the same time make them much more action-oriented.

Evaluations attempt to place national activity within a discipline relative to the international mainstream and research front. This should not be done without taking national conditions and frameworks into account; that is simply a question of fairness.

Therefore, I am not convinced of the relevance of the Swedish experiences, as reported by Gidefeldt, to the natural sciences in Norway. One should note the substantial difference in resources and conditions for science in our two countries. This difference is evident from official research statistics, and certainly from the experience gained in collaborative work.

Several evaluations of natural science subdisciplines in Norway have revealed consistent criticism of certain aspects of the Norwegian support system. The suggestions and advice from the expert groups have focused first of all on support: the groups have found a great deal of scientific talent in Norway, but very little support for that talent.

These recommendations and advice have, with only a few exceptions, not been followed up. Seen from the basic-science level at a university, the situation for the natural sciences has not improved during the six-year period since the first evaluation report was published in Norway. So, returning to Gidefeldt's conclusion that the aim of evaluation is to encourage good research: it is not evident that this aim has been attained in Norway so far.