
6.1 Methodological considerations

6.1.3 Complex interventions in a real life clinical setting

In a recent Norwegian study, only 38% of newly diagnosed lung cancer patients were referred to a specialised service to assess whether their lung cancer could have been caused by occupational exposure, even though frequent reminders were sent[160]. In a study by Jiwa et al., colorectal referrals were sent using a newly developed, interactive, electronic pro forma, but the uptake of the intervention was only 18%[53]. Both of these studies included educational outreach and follow-up, yet intervention uptake was very limited. This highlights the difficulties of research in a ‘real life’ clinical setting. Not only can interventions be complex, but complexity already exists at the level of the health care system[161]. In a health care setting, a complex intervention is defined as being “built up from a number of components, which may act independently and inter-dependently”[162]. A complex system, on the other hand, is one that “is adaptive to changes in its local environment, is composed of other complex systems, and behaves in a non-linear fashion”[161], with examples including primary care, hospitals, and schools. Hence, the effect of any given intervention must be interpreted not only in the context of its own complexity, but also in light of its effects on the entire system. It is therefore not surprising that uptake may be low and implementation difficult when applying interventions at the GP/hospital interface, where many factors other than the ongoing study influence clinical decisions.


A classic example of a complex intervention in the medical field is the stroke unit. In any trial assessing the impact of stroke units on morbidity and mortality after a stroke, it is hard to define the specific active component (e.g., various health professionals, drugs, guidelines, discharge routines), and hence replicating the results is more challenging[162].

The current project introduced a referral template at the GP/hospital interface coupled with educational follow-up, a seemingly straightforward and simple intervention. However, given the complexities of the health care system and the interaction between the system and the intervention, it would be unwise to assume that the entire observed effect was attributable to the referral template itself, or that the study had anticipated all the potential effects of the intervention.

Although the complexities of the intervention and the system make the interpretation of intervention effects difficult, they more than likely improve generalisability. The implementation of an acceptable and feasible intervention using a clustered study design in normal clinical practice likely mirrors the effect in other settings, and follows guidelines set out for the evaluation of complex interventions[162]. However, further evaluation, especially quantitative, would probably cast further light on the factors that affected the referral process.

6.1.4 Expected change process

The current project was not designed or prepared as a complex intervention. The basic research concept was primarily to design a simple intervention, implementable in everyday clinical practice with limited unintended consequences. Much effort was expended to identify measurable, relevant outcome measures to evaluate the intervention. In the design of the project, the PhD candidate somewhat underestimated the complexity of what was intended as a simple intervention. A good theoretical understanding of how the intervention causes change has been said to be paramount in designing and evaluating complex interventions[163]. This was not formally outlined prior to the implementation of the referral templates, but in hindsight, many aspects of a more formal process were discussed. In this Chapter, a short description of the thought processes and expected effects of the intervention during the planning phase is provided, following the framework of the Medical Research Council[164].


6.1.4.1 Development of the intervention and evaluation process

As presented in Chapter 2.3, two systematic reviews were found on the topic of referral improvement[7,54]. These reviews suggested that structured referral guidance and local educational outreach can achieve the intended effects on referral rates. No further major studies were found, and no additional formal review paper was produced. The research group therefore concluded that referral improvement was possible. Underpinning the aim of referral improvement in the literature is the belief that improved referrals would lead to improvements in both service delivery and care. The cost of change in the current project would mainly be incurred at the level of the GP, with a potential increase in the time spent on each referral. This PhD project was therefore not designed to evaluate whether referrals could be improved, but whether improved referrals could lead to a measurable change in the care delivered to each patient, and hence justify the increased workload for GPs.

GP uptake and use of the intervention was recognised early as an important potential limitation (see Chapter 6.2). The use of an obligatory electronic pop-up solution was considered, but rejected based on the time necessary to develop such an application, its cost, and the lack of flexibility it would afford the referring GP.

Considering the intervention as a whole, the research team expected a measurable change in the outcome measures described in Chapters 4.7 and 4.8, but no large effect on referral numbers or other organisational factors. No appropriate prior assessment tools could be identified, and considerable time was spent researching and discussing different evaluation options. Given the expected change highlighted above, outcome evaluation was envisaged at several levels (Figure 4), as described in Chapters 4.7 and 4.8. With the aim of taking patient assessment into account, self-administered questionnaires were used.

Although the intervention itself was not aimed directly at patients, the questionnaire was intended to measure the expected positive change in patients’ experience of a more appropriate, higher-quality care pathway.

6.1.4.2 Piloting of the intervention

The literature on both cluster randomised trials[147] and complex interventions[164] recommends piloting an intervention for feasibility, usage, and recruitment. No formal feasibility or pilot study took place in the current PhD project. Instead, the intervention was piloted at local GP surgeries, and the patient questionnaire was piloted with health care personnel and patients. A formal feasibility study may have provided clues on how to improve the uptake of the intervention and improve sample size estimation. A pilot study could have highlighted the potential effect of the intervention on the outcome measures. This is especially interesting in a trial where the outcome measures have not been previously documented, as was the case in this trial.

6.1.4.3 Evaluation

A good theoretical understanding of the intervention has been described as the key to suitable outcome measures[164]. As shown in the current project, the assessment of health care interventions can be less straightforward than expected, and the effects difficult to assess accurately. In addition to the potential benefits of a pilot study, the discussion regarding a continuous, qualitative process of evaluation in Chapter 6.5.3 is pertinent in helping plan the evaluation of an intervention.

6.1.5 Blinding

Blinding is an important concept in modern medical research; ideally, treatment allocation should not be known to the patient, investigators, or assessors[165]. It has been shown that intervention effects can be overestimated if allocation concealment is not carried out in a satisfactory manner[166]. Non-blinding of participants, organisers, or evaluators in any given study may give rise to bias in the form of differential treatment during the study process, differential drop-out, or differential outcome assessment (information bias). However, as in the current study, full blinding may be unattainable with complex interventions[146,162,167]. In the design of the present study, efforts were made to ensure that patients and outcome assessors remained blinded to the intervention status of all patients. Keeping patients blinded was especially important, as patient experience was included as an outcome. However, because the referral template was included as an electronic form in the GPs’ EHR, it was sometimes evident when it had been used for a referral. This was noted beforehand as a possible breach of both carer and assessor blinding, but very few of the GPs used the electronic referral template, instead using the laminated paper template.


As noted above, lack of blinding in a study generally tends to inflate the estimated effect of the intervention. In the current project, the intervention showed no clear effect on the main outcome, and there was no clear indication that bias affected patient treatment or outcome assessment.