A Comment on Process Evaluation

I have been asked to comment on the use of process evaluation in the work of research councils - specifically in the Norwegian Research Council for Applied Social Science (NORAS). The Council has existed for four and a half years and thus draws upon limited experience.

In total, NORAS has initiated external evaluations of four major research programs during these years - all conducted after the programs were concluded - and one "on-the-way" evaluation of the beginning and organization of a major ten-year investment in research on management, organization, and administrative systems. Finally, NORAS has recently concluded an evaluation of three institutes for applied, regionally-oriented research. In addition, we have instituted a system of internal process evaluation - anchored in the program steering committees. I'll return to this.

The ambition of the evaluation activities, of course, is to develop knowledge and insights to make all parties in applied social research do a better job - whether in research, as organizers of research, or as "users" of research.

Evaluation studies within the field of applied research have to cover a complex process, focusing not only on questions of scientific quality, but also on questions relating to whether the problems raised are relevant to identified needs for knowledge, whether the results are effectively disseminated to identified target groups outside the research community, and whether the research process has been efficiently conducted.

Thus, it is important to be clear about the more specific purpose of evaluations, and to be emotionally, intellectually, and organizationally prepared to follow up results and recommendations if they are found sound and sensible.

Generally, it is NORAS' strategy to be selective. Evaluations take time, effort, and resources. We have to be fairly certain that an evaluation is worth the effort. We also want models and methods that can be fairly easily applied. And we have developed a solid respect for the unintended consequences of evaluations - particularly, of course, when it comes to institute evaluations. Such unintended consequences may prove fruitful, but they may also be destructive and out of tune with the quality and results of the evaluation study.

So much for general comments. I'll restrict the more specific comments to process evaluations directed at research programs and program organization - simply because NORAS relies heavily on this model of organization. As of 1991, NORAS has organized sixteen research programs with program steering committees in charge. These committees hold key positions in the steering system of the council.

On the program level NORAS has established a system of internal evaluation of research programs on an annual basis, but we also initiate external, independent evaluations of selected programs - focusing on initiation and the first phase.

I will stick to "process evaluations", and not comment on institute evaluations, nor on evaluations of finished programs and products.

Hanne Foss Hansen stresses the importance of being clear about "if and in which respects an evaluation should be a process evaluation", and whether the evaluation is primarily focused on learning or control.

This is a timely reminder.

However, our experience is also in line with her reminder that in practice there are blurred borders between categories.

That also goes for the distinction drawn between learning and control as a main objective. The purpose both of an internal process evaluation and an external, selective "on-the-way" evaluation is, of course, to learn - to establish a basis for adjustments and initiatives. But insights about how a program runs in relation to identified aims obviously also produce important knowledge to be used for control purposes, e.g., when it is found that institutes engaged in program research do not follow up on the obligation to offer professional guidance to the younger researchers engaged - and there prove to be deficiencies in the quality control system. It is the responsibility of the program steering committee to take action in a case like this. The point is that even though there is an analytical distinction between learning and control purposes, the distinction is not very helpful in practice.

I have mentioned earlier that evaluation studies in the field of applied social research have to cover a broad and complex subject matter, where evaluations of scientific quality alone only make up part of the picture.

One serious shortcoming we have faced in our evaluations so far is the lack of established, generally accepted criteria for applied social research - criteria that can be related to the work of researchers as well as to the work of research institutes and to the running and results of programs.

NORAS is now developing a set of criteria adapted to the tasks of applied research. It is our ambition to establish standards that may serve as a common frame of reference for these types of evaluations. And we hope to develop criteria that may be useful both as a learning instrument for institutions and research councils and for identifying areas where correction and control are necessary.

The critical question, of course, is whether, in our experience, these kinds of internal and external process evaluations function. The ambition of the system for internal program evaluation is to make the program steering committees

- formulate and set goals for the program

- evaluate status annually in relation to the goals - and so establish a basis for changing course and taking initiatives of various kinds - and

- generate insights for the annual "evaluation" NORAS' Board carries out to establish whether the Council as a whole tackles its tasks in a satisfactory manner.

The internal process evaluation on a program level thus also serves as one of the mechanisms established to make NORAS function as one institution.

Generally, we feel this system is beginning to function as intended.

It also provides background material for the Board's choice of programs for external process evaluation. But we are concerned that this annual planning and evaluating process should not be allowed to disintegrate into an empty ritual.

Our very limited experience with an "on-the-way" evaluation also gave useful results. The program on "Management, Organization, and Administrative Systems" benefited from insights that resulted in a certain reorientation of the role of the steering committee, an increased weighting of strategic functions, and a thematic concentration of the research at the program's Centre. The evaluation also established a basis for new research initiatives.

But process evaluation also involves a question of timing in relation to processes in the program. If the evaluation draws its line at a dynamic point in the program's development, the results may have lost their importance by the time the evaluation report is published. This underlines the importance of dialogue between the object of the evaluation and the person or team doing the evaluation study.

We will continue to do process evaluations of this kind, e.g., in areas where research raises particularly demanding problems (research on "modern crime"/"economic crime") or where we want to try out a "new" model of organization, and where the program is supposed to run for a fairly long time.

One limitation of process evaluations is often that research results - the quality and importance of the products - cannot be satisfactorily covered. This task has to be taken care of through other types of evaluations. But it is our experience that the learning involved for the organization is important for giving research better conditions in the future. The outcome, of course, first and foremost depends on the ability and willingness to draw actively on the insights generated in all parts of the system.

Terttu Luukkonen