
Academics and Consultants in the Evaluation of R&D Programmes

Introduction

This presentation examines the roles of consultants and academics in the evaluation of government-sponsored R&D programmes. In order to clarify what we mean by the terms 'academic' and 'consultant', some common conceptions are first described. A distinction is then made between different evaluation tasks and the institutions or organisational actors normally expected to conduct them, before pointing out a number of factors which are currently making it harder to arrive at any simple correspondence. Critical variables in the choice of academics and/or consultants in programme evaluations are then described in more detail, with actual roles discussed via reference to five specimen evaluations. Finally, lessons from each of these evaluations for the involvement of academics and/or consultants are drawn, together with some very general lessons for the commissioners of evaluations.

Academics versus Consultants

Government-sponsored R&D programmes are now a common feature of the science and technology policy landscape. Policy makers are also coming to understand that systematic evaluations of these programmes can aid future policy formulation and implementation. Much attention has therefore focused on how best to conduct such evaluations, and one item under this heading concerns the choice of who should conduct them. Often this has been expressed as a choice between academics or consultants, and it is this topic which constitutes the focus of this presentation. In particular, it will be argued that much of this debate is far too simplistic in nature, depending in large part on crude caricatures of the nature and roles of academics and consultants, and on a weak appreciation of the factors influencing the choice of evaluators for different types of R&D programmes.

Exhibit 1 depicts some commonly accepted characteristics of academics and consultants respectively. Some of them deserve little elaboration. For example, academics are usually employed in the public sector, consultants in the private.

Similarly, academics are normally expected to charge less for their services than consultants for an equivalent period of time, and this in part affects their relative availability. It is often presumed that consultants can only be afforded in short bursts, whereas academics can be utilised on a longer-term basis, though a corollary of this is that the commitment of consultants to an evaluation can often be more focused and complete than that of academics with rival demands on their time during extended evaluation periods.

On a pejorative level, academics are often thought of as more profound and less analytically shallow than consultants, and as more objective and less self-serving, i.e. less likely to produce reports designed only to curry favour with their paymasters. To balance things out, however, academics are interest-driven to the extent that they can sometimes be diverted from the evaluation task in hand by an interesting line of intellectual enquiry - interesting, that is, to themselves, but not always to an expectant policy community hoping to learn something useful from an evaluation. Consultants are more problem-driven and less likely to make tangential excursions.

EXHIBIT 1

COMMONLY ACCEPTED CHARACTERISTICS OF ACADEMICS AND CONSULTANTS

ACADEMICS                   CONSULTANTS

PUBLIC                      PRIVATE

CHEAP                       EXPENSIVE

LONG-TERM AVAILABILITY      SHORT-TERM AVAILABILITY

PARTIAL COMMITMENT          TOTAL COMMITMENT

ANALYTICALLY 'DEEP'         ANALYTICALLY 'SHALLOW'

OBJECTIVE                   SELF-SERVING

INTEREST-DRIVEN             PROBLEM-DRIVEN

By now it should be apparent that the above generalisations offer only crude caricatures of both academics and consultants. Numerous counter-examples and qualifiers spring to mind. Before examining why this is so, however, it is useful to describe one other commonly held perception vis-à-vis the tasks expected of academic and consultancy organisations. Exhibit 2 makes a distinction between policy analysis and technical assessment tasks related to the evaluation of R&D programmes, and between the types of institutions expected to carry them out.

There are three broad categories:

- academics in science and technology departments undertaking technical assessment as part of the normal peer review process;

- consultants resident in Management Consultancies performing policy analyses;

- technical experts in a variety of private and public sector settings called upon to assess technological developments.

This categorisation scheme is useful not because it provides an adequate means of describing the current situation; as a descriptive scheme it is in fact very inadequate. It is useful because it allows us to see that the roots of this inadequacy lie in current developments which are blurring both task and institutional boundaries. For example, within evaluations based on technical peer review processes it is not unheard of for the experts involved to comment not only on scientific and technical excellence, but also on the efficiency with which a programme has been conducted, and on the appropriateness of the initiative as a whole. Thus there is a blurring of task boundaries. Equally, it is also possible for policy analysts in, for example, Science Policy Units, to be involved in considerations of technical merit. Familiarisation with technical developments in certain areas, acquired during the course of extended evaluations, can sometimes allow policy analysts to make positive contributions to technical assessments.

EXHIBIT 2

EVALUATION TASKS AND INSTITUTIONS

COMMONLY ACCEPTED PERCEPTION

                             INSTITUTIONS

TASKS                        ACADEMICS               CONSULTANTS

POLICY ANALYSIS              Science Policy Units    Management Consultancy

TECHNICAL ASSESSMENT         Peer Review Process     Technical Experts

FACTORS AFFECTING THIS SITUATION

- THE BLURRING OF TASK BOUNDARIES

- THE BLURRING OF INSTITUTIONAL BOUNDARIES

- DIFFERENCES BETWEEN PROGRAMMES

- DIFFERENCES BETWEEN EVALUATIONS

There is also a blurring of institutional boundaries. It is fairly common these days for academics to spend some of their time on their usual pursuits and the remainder acting in a consultancy capacity, often under the banner of small consultancy firms. In turn, consultants occasionally act as Visiting Fellows or Professors at academic establishments. Furthermore, as we shall see, a number of evaluations utilise teams of both academics and consultants.

The blurring of task and institutional boundaries complicates a simple choice between academics and consultants in the evaluation of R&D programmes. There are also crucial differences between programmes and between types of evaluations which further complicate the issue. Exhibit 3 lists some of the key variables. With regard to the nature of the programmes being conducted, the type of R&D, the scale of the programmes and their technical scope are all important determinants in the choice. For example, in large, expensive programmes of industrial R&D the budget may be large enough to involve consultants, whereas in small programmes of academic research the percentage of the budget available for evaluation purposes may not be enough to attract consultants. Similarly, when the technical scope of a programme is very broad, the normal peer review mechanisms become complex and unwieldy, and analysts are often called in to collect and synthesise views on technical performance via interview- and questionnaire-based techniques.

EXHIBIT 3

CRITICAL VARIABLES IN THE CHOICE OF EVALUATORS