Research has identified more than 400 evaluation training, support and guidance
documents. For example, a guide to outcome evaluations has been developed by the
US Centers for Disease Control and Prevention (2012), a useful guide to process
evaluation has been produced by the UK Medical Research Council (Moore et al.,
2014), and the United States Department of Health and Human Services has provided
guidance on economic evaluations (Honeycutt et al., 2006). Rather than review these
here, we will briefly highlight key aspects of outcome and process evaluations,
focusing especially on the latter.
A rigorous outcome evaluation is likely to involve a randomized controlled trial.
This may compare the intervention to a no-intervention control group or another
intervention group (as is the case when an intervention is compared to routine or usual
care), or both. Typically, post-intervention levels of outcome measures are compared,
controlling for any pre-intervention differences. Ideally, we would observe no
differences between intervention and control groups before the intervention, but in
practice such differences may occur. Randomization to intervention and control groups
minimizes such differences and may be undertaken at an individual or an organizational
level, as in a cluster randomized trial, in which groups of individuals (such as schools
or clinics) are randomized together. When, for policy or ethical reasons, all participants
need to receive the intervention (so that there can be no no-intervention control group),
clusters can be randomly allocated to begin receiving the intervention at different points
over a period of time, so that every cluster is observed both with and without the
intervention. In this design, called a stepped wedge design, each cluster acts as its own
control. When
randomization is impossible, matched groups need to be carefully scrutinized to ensure
that differences other than exposure to the intervention are not responsible for
observed differences in outcome.
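To make the allocation logic concrete, the sketch below (in Python) randomly assigns clusters to the time points at which they begin the intervention in a stepped wedge design. The cluster names and number of steps are illustrative assumptions, not details taken from any particular trial.

```python
import random

# Hypothetical cluster names and number of crossover steps; these are
# illustrative assumptions, not details from the chapter.
clusters = ["clinic_A", "clinic_B", "clinic_C", "clinic_D", "clinic_E", "clinic_F"]
n_steps = 3  # time points at which further clusters begin the intervention

random.shuffle(clusters)  # random allocation of clusters to crossover steps

# Spread the shuffled clusters evenly across the steps; each cluster is
# observed both before (control) and after (intervention) its crossover,
# so it acts as its own control.
schedule = {step: clusters[step::n_steps] for step in range(n_steps)}

for step in range(n_steps):
    print(f"Step {step + 1}: {', '.join(schedule[step])} begin the intervention")
```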
Outcome evaluations usually calculate an effect size (e.g. a standardized mean
difference such as Cohen’s d) to indicate how effective the intervention was (Cohen, 1992).
Anticipating the likely effect size in advance is important to ensure that enough
participants are included to detect change. Attrition rates, that is, the proportion of people
who drop out of the study, are also important. For example, if an intervention requires
persistence and 50 per cent of those in the intervention group drop out, then, even if
the intervention is very effective among the remaining 50 per cent (compared to no-
intervention controls), the overall impact of the intervention may be limited. An
intention-to-treat analysis is recommended in such instances. This involves retaining
all randomized participants in the analyses and counting those who do not complete
the intervention as showing no change. Note what a difference this makes when
attrition in the intervention group is high.
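As a rough illustration of both points, the following Python sketch computes Cohen’s d from invented change scores, first using completers only and then under an intention-to-treat rule that counts dropouts as showing no change. The data are hypothetical and serve only to show how high attrition can shrink the estimated effect size.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a)
                  + (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# Invented change scores: half of the intervention group completes and
# improves; dropouts are counted as showing no change (scored 0).
completers = [5.0, 6.0, 4.5, 5.5, 6.5]
dropouts = [0.0, 0.0, 0.0, 0.0, 0.0]
controls = [0.5, 1.0, 0.0, 1.5, 0.5, 1.0, 0.0, 0.5, 1.0, 0.5]

print("Completers only:", round(cohens_d(completers, controls), 2))
print("Intention-to-treat:", round(cohens_d(completers + dropouts, controls), 2))
```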
Active control groups, in which control participants are already receiving a
service or intervention, raise an important issue for trials that compare
interventions to usual care. Comparison with high-quality usual care can make an
intervention appear less effective than comparison with a poor usual care group (de
Bruin et al., 2010). This creates problems for commissioners who may wish to
implement effective interventions because it means that, unless we know what standard
of usual care was used as a comparator, we cannot easily compare effect sizes across
trials. An apparently very effective intervention (when compared to poor usual care)
may add nothing to established best practice in another context. Yet the content of
usual care, and of control groups more generally, is rarely well described. Thus evaluation

