

5. Unravelling evaluation processes: focusing on decisions about rather than decisions from

5.2 The evaluation process: "elements, actors and rationales"

In the previous section it was recognised that interest in the decision-making process has mainly focused on improving the post-evaluative decision process, that is, a utilisation focus. It is proposed here that improved understanding of decisions concerning the design of evaluations and the interpretation of demands would greatly enlighten this process further. Debates over the intrinsic nature of the evaluation exercise are bound to take place, if one accepts House's recognition that the modern profession of evaluators "rests on collegial, cognitive and moral authority" (1993: ix). It is therefore important to recognise where the focus of this study is placed. This section briefly deals with a framework for analysing components within evaluation processes, particularly with regard to "elements, actors and rationales", moving beyond a linear-rational view and acknowledging an inter-related process (Dahler-Larsen, 2004a).

Elements of the evaluation process

Dahler-Larsen's framework is an "analytical construction" which recognises that parts of evaluation processes will overlap, in both substance and chronology.

The author therefore chooses to refer to these parts as "elements"106 rather than phases, considering eight elements within the evaluation process (Dahler-Larsen, 2004a: 41). These are outlined in summary form in the table below. Within these elements multiple decisions are made, which are at the same time influenced by decisions made across the other elements, reinforcing the idea of overlap. Elements of the evaluation process are therefore considered to be intertwined and recursive (Dahler-Larsen, 2004a; Dornbusch & Scott, 1975).

106 My translation of the Danish word "momenter", which can also be translated literally as "moments"; I prefer the term "elements", which seems to span spatial and content distinctions.

Table 3: Elements of the evaluation process (after Dahler-Larsen, 2004a: 41-45)

Initiation: The point at which concrete initiative is taken to implement an evaluation, which can also begin to determine ownership of the process.

Agenda: Where the purpose and themes for the evaluation to answer are decided and the evaluand is determined. This includes the implicit and explicit standards and values against which the evaluand is assessed.

Knowledge management and organisation: Decisions concerning "who does what", as well as access rights to information during the process.

Design: Choice of evaluation model and methodology, a decision concerning "effects". Influenced by the credibility of designs as perceived by involved parties.

Data collection: The point at which data is collected. Seen as distinguishable from the design phase, as decisions made and influence over the process at this point can afford distinct change.

Analysis and summary: Selection, interpretation and presentation of results and findings.

Validation: Social definition of the validity of data, building on analytical choices but also on context, relevance and the attention attracted from decision makers.

Consequence: Preface to legitimacy or response to the information gathered, often based on recommendations from evaluators.

In this study I have mainly focused upon the first four of the elements suggested by Dahler-Larsen: "initiation", "agenda", "knowledge management and organisation", and "design". However, while I do not specifically investigate all of these processes, because they are considered recursive it is proposed that they will iteratively influence current and future decisions.

The "initiation" element involves trying to understand where the demands for an evaluation come from. Dahler-Larsen recognises that this can involve the directions of a concrete mandator or be the result of a more general legal requirement (2004a: 41). Of importance would seem to be how the movement from a general desire to a specific demand for an evaluation is interpreted by those responsible for implementation, as well as their perception of ownership, i.e., who the mandator is.

The "agenda" element refers to the juncture at which the purpose and themes of the evaluation are decided, as well as how the evaluand is to be distinguished (Dahler-Larsen, 2004a: 42). It would appear important to attempt to illuminate how evaluators perceive the interpretative framework that is to be employed, and in what way this framework might, as Dahler-Larsen suggests, ultimately be reinterpreted at later stages. This supports the need to investigate the implementation of evaluations, in this case with respect to academic groups evaluating their own programmes, through a process-focused exploration of their perception of events, how they have been involved in them and any decisions that have been taken.

"Knowledge management and organisation" involves the course of action of apportioning responsibility and ascertaining the degree of insight and influence afforded to different parties (ibid.). It will be important to explore how evaluators perceive their degree of control over this process, as well as the sense in which different mandators play a part. The "design" element of the evaluation process deals with how the model and methodology are distinguished. As noted in the summary table above, this involves understanding what degree of effects from a particular programme or activity is expected to be discovered, which will portray a particular image of the evaluand (Dahler-Larsen, 2004a: 43). The author also proposes that the credibility of certain methods and the view of evaluation within a particular field will necessarily influence the choice of model and methodology. As was noted earlier in relation to the propositions of Stufflebeam et al., an understanding of the informants' values and preferences will be important in informing this study.

The importance of context to evaluation decisions about programme impact

Alongside understanding the values and preferences of the actors involved in choosing and designing models and implementing evaluations, it is also considered important to understand the influence of context on these decisions.

As we will see from the following sections, the context of the programmes under study is considered to be more complex than is assumed under simpler models of decision making that focus mostly upon goals, implementation and results.

Understanding how the context might influence the "success" of a programme requires a wider interpretation of the variables and factors thought to be of influence. However, perceptions of how context might influence the impact of a programme are also thought to vary under different epistemological modes of thinking and value systems. When investigating the design and implementation of evaluations of programmes it is therefore felt important to gain an understanding of how evaluators perceive programmes can impact those taking them and how such potential changes might be measured, in addition to any wider expectations for the programme from stakeholders. Questions about context might therefore be as much about individual attitudes and values as they are about mapping out the particularities of the environment. The wider factors, or context of the programme, are then thought to influence evaluation designs.

Respondents will be asked to reflect on the framework within which their programmes are set and their perceptions of and responses to it. The major legislative and policy demands and structures that set the agenda for evaluation of postgraduate programmes for school leadership in Norway and England were outlined in chapter 2, in order to understand the wider context in which decisions are made within the framework of this study. This overview is considered against the responses of programme providers at the micro level, who are asked to relate their perception of the demands placed upon them and the way that they follow these up. There are of course many degrees of freedom when attempting to collate such information. But I reiterate that the exercise at hand is an attempt to explore the decisions made concerning the type of evaluation to be implemented and to further develop a conceptual understanding of this process.

The next section outlines the linkage between perceptions of the context and how respondents view the possibility of ascertaining programme impact in relation to it. Such a discussion further informs respondents' attitudes to evaluation, their responses to the demands placed upon them and their deliberation over which models to implement.

Activist and determinist perception of context impact

Stake (1990) considers the importance of research into the "situational context" of evaluations and how that might influence both evaluation design and the subsequent utilisation of findings, an area of focus thought to have been previously little studied, or even ignored. The contexts of educational programmes are considered to be multifaceted, encompassing the "temporal, physical, spatial, social, political, economic etc… [where] an educational practice has its habitat, its milieu, its frame of reference, its zeitgeist—not one but many contexts" (1990: 231-2). Stake goes on to recognise that part of evaluation is to illuminate what influence this context has.

Part of Stake's propositions is thought to be of particular relevance to this study. Stake forwards an idea of evaluators as activists and determinists, although he recognises that these descriptors are normative categories and that some kind of continuum would most likely explain attitude and behaviour (1990: 241). In simple summary, activists will see the potential for programmes to effect strong change on their context, whereas determinists will reject such a possibility and see context as dominating. As Stake intimates, one is unlikely to find many evaluators of a strongly deterministic persuasion, as such a position would be oxymoronic to the task ahead of them. However, evaluators might be disposed to one or the other of these viewpoints, which will in turn affect their approach to evaluation design.

While Stake (1990: 241) appears to focus primarily on external evaluators' perceptions of providers' "control potential" over their "program destiny", this idea would seem to bear much wider relevance in terms of current thinking in evaluation, and particularly to the context of this study.

Writing in 1990, Stake mapped out a key issue as the evaluator's attitude to the concept of ascertaining programme impact, extending his idea of activistic and deterministic perception. Impact is an expression which appears to have been reinvented, or to have gained new vigour, under NPM in recent years. Those of an activist persuasion will "over-dignify the concept of impact and set unrealistic standards for success", which Stake claims appears to characterise "Western thinking" with regard to programme evaluation and its contribution to "desired change" (1990: 241). Those of a more deterministic persuasion will "devote too little of the design to the discovery of effects", focusing rather on process description, "elegance of purpose… quality of arrangement… covariation of endeavors, and to the intrigue of the story… [where] [a]ccomplishment may be treated as ephemeral and without substance" (1990: 241-2).

The significance of these descriptions lies not so much in how evident extremes on the continuum will be. Stake recognises that these "might degenerate into mere tendencies of strictness and leniency" (ibid.) in evaluator personality, but it is also important to attempt to illuminate the underlying epistemological worldview of the evaluator towards the idea of ascertaining programme effects and impact. As I have already pointed out, Stake appears to focus particularly on the discussion regarding the supremacy of situational context over programme delivery with regard to external evaluation. In this study the attitudes of those evaluating their own programme delivery are under investigation. While it is not fully clear to what degree the categories that Stake proposes, i.e. activist and determinist, are suitably efficacious descriptors for respondents, it is still considered relevant to follow up Stake's charge to the evaluation field and investigate programme providers' attitudes to accounting for, in this case, some semblance of programme impact. This focus appears even more relevant with regard to the recent discussion concerning the increasing interest of mandators and commissioning bodies in being presented with evidence of programme impact, also evident within the context of this study of school leadership development. It is recognised, though, that this might be more indirect with regard to HEI programmes compared with nationally mandated courses and training programmes. HEIs in Norway have developed programmes for local mandators, to which they must report back, but as will be seen in later sections, these mandators vary both in their demands and in their competence in analysing data.
