

3.7 Summary of perspectives on evaluation design, implementation and utilization

When drawing together research from the evaluation field, including questions of purpose, context, quality of process and type of utilization, we are left with a dichotomy. It is often assumed that quality evaluations, i.e. those strong in methods and relevant to their context or setting, will more likely be utilized in further decision-making. However, research has shown that this is not necessarily so, and in fact the opposite may be the case. Figure 4 below symbolises this dichotomy, recognising that future utilization of results might depend upon variation in the initial decision-making processes focused on the adoption of evaluation models. In the section on evaluation models it was noted that Stufflebeam (1983) recognised the importance of assessing attitudes amongst evaluators to the models they adopt and implement. Stufflebeam (2001) also elaborated these ideas in his "meta-metaevaluation" (Henry, 2001) of evaluation approaches and models, identifying around 22 different approaches of which 9 were thought to be pervasive82. Despite any disagreement over the efficacy of such models, Henry recognises what appear to be two key points for this study in Stufflebeam's analysis. Firstly, and following on from the point made above, Stufflebeam "presumes" that models should be systematically evaluated with regard to their ability to assess a programme's merit and worth. Henry is here recognising a decision-based process underlying Stufflebeam's work. The second point follows somewhat tautologically from this argument. Stufflebeam is arguing that the models are based on an assessment of value and worth for the task at hand. His choice of 22 approaches, as Henry (2001: 3) recognises, already omits certain models that do not fit easily under the categories presented. These important points appear to strengthen the argument that investigating decisions about models is an important task. These ideas, as well as further analysis of Stufflebeam's approach to decision making within evaluation, are taken up in Chapter 5.

81 The latter view, according to House (2006), has been reflected strongly in Scandinavian approaches by the work, amongst others, of Karlsson (2003b, 2003c), Vedung (1994, 1997, 1998, 2000) and Monsen (Haug & Monsen, 2002; Monsen & Haug, 2004), and these different emphases also appear to reflect whether the focus is at the macro (policy) or micro (programme) level of evaluation.

Current concepts of evaluation use and purpose have also been challenged. One of the most interesting reflections ties evaluation to institutional theory, an open-systems perspective on organisation theory that focuses on the wider environmental context that "constrains, shapes and penetrates the organization", particularly from a social and cultural perspective (Scott, 1995: xiv). I return to this in Chapter 5, along with a more in-depth study of the different models thought appropriate for investigating decision making about evaluation. Therefore, a focus on the decision-making processes of programme providers, symbolised by the broken arrow to the left of figure 4, is considered to be of importance. While this study does not attend to evaluation use as such, use is still considered important as an outcome that is a purpose of the evaluative process.

82 Client-centred, Utilization-focused, Decision/accountability, Consumer-oriented, Constructivist, Case Study, Outcome/Value-Added Assessment, Accreditation and Deliberative Democratic (Stufflebeam, 2001: 7).


Figure 4: The problematic area of utilisation

A question arises, however: what happens when organisational choices about evaluation are limited, or when external demands to produce certain types of evidence cannot be met? The process of deciding how to evaluate the programmes under investigation thus comes into question. This focus is outlined in figure 5 below.

Figure 5: The process under study

It is, of course, recognised that there are many more variables of influence than those outlined above, and that the process is framed within an historical context shaped, amongst other things, by previous evaluations, organisational traditions and individual preferences. Rather than being a causal model, the figure is intended to illuminate how the process of decision choice is important. This process is thought to vary from organisation to organisation, and it is that variation that might ultimately help further develop the understanding of what else influences utilisation.

Although the process of utilisation is not under question here, illuminating its influence on the decision-making process is thought to be important; the box around utilisation in the figure is therefore marked by dots. In this study it is the perception of future utilisation that is of interest, and how that perception might affect decisions about evaluation. Linked to a greater focus on accountability for outcomes is the increased expectation that evaluation results will be utilized (Patton, 1997; Weiss, 1998a) or will at least 'influence' new activity (Kirkhart, 2000). Increasingly, the level to which mandated programmes are evaluated and the extent to which results are valid comes under question, even though this is not in itself a new idea (Easterby-Smith, 1994; Guskey, 2000; Hamblin, 1974; Kirkpatrick, 1971, 1998). Reflecting increased demands in society for accountability, evaluation as a discipline has also adopted a 'scientific authority' (House, 1993: viii). House also recognises that the evaluation process has developed into a formal 'cultural authority' with strongly recognised political effects. It might also be observed that specific utilisation is different from a general intention to utilise. For example, organisations may well intend to utilise but be hampered by the organisational logic that guides them. Therefore, while I do not negate the importance of evaluation utilisation, I consider that decision processes will help us gain a stronger understanding of how and why certain models are employed.

As was noted above in section 3.6, understanding the national context is important, even when researching at the micro level. Schwandt's research (in Dahler-Larsen, 2005b: 365) observed a Scandinavian approach to evaluation steeped in a positive attitude towards the welfare state, combined with a focus upon equality and solidarity within a collectivist approach to problem-solving and policy, in contrast to the more "logical-empiricist" Anglo-American tradition. This contributes to an understanding that evaluation is less likely to be perceived merely as a "technical-methodological activity" and more likely to be situated within a particular ideological and philosophical tradition (Dahler-Larsen, 2005b: 366). Dahler-Larsen (2005b) also notes that across Scandinavia, and Europe more widely, there has been a varied understanding of the term evaluation. At the same time, the author notes that with the introduction of NPM from the late 1980s, evaluation has been more markedly conceptualized as ascertaining success in relation to goals, results and effectiveness.

This study is therefore concerned with attempting to ascertain the demands, designs and decision makers from which an evaluation is implemented.

But understanding these better will require an understanding of the decisions made concerning these areas. Therefore, the reflections from, and questions raised by, this chapter are thought best illuminated by an analysis of organisational decision-making. This will lead to a framework for understanding organisational decision-making, which is outlined in Chapter 5.

Before that, I turn in the next chapter to consider more closely the context and systems surrounding the sub-units under investigation, in terms of understanding the quality assurance frameworks to which HEIs are expected to respond.
