
and implementation require accurate, detailed descriptions of the intervention and of the
control groups to which it has been compared.
Better reporting of behaviour change interventions and their evaluations would
greatly enhance our capacity to translate evaluations of interventions into improved
health care practice (Abraham et al., 2014; Davidson et al., 2003). Acknowledgement
of inadequate scientific reporting has resulted in the development of many reporting
guidelines (Simera et al., 2013) including CONSORT (Consolidated Standards of
Reporting Trials) for randomized controlled trials (Schulz et al., 2010; Montgomery
et al., 2013), STROBE (Strengthening the Reporting of Observational studies in
Epidemiology) for observational studies (Von Elm et al., 2007) and guidance on the
development and evaluation of complex interventions (Craig et al., 2008). The
TIDieR checklist (Template for Intervention Description and Replication; Hoffmann
et al., 2014), the work of the WIDER group (Workgroup for Intervention Development
and Evaluation Research; Abraham, 2012; Albrecht et al., 2013) and more specific
guidance such as that for the description of group-based behaviour change interventions
(Borek et al., 2015) extend this guidance. Currently there are more than 200 published
reporting guidelines listed on the EQUATOR (Enhancing the QUAlity and
Transparency Of health Research) website (www.equator-network.org), which was
established to improve the reporting of health research.
Process evaluations are particularly important when evaluating complex interventions,
which have a number of interacting components operating at different levels and that
may address complex problems or seek to produce multiple outcomes (Campbell et al.,
2000; Craig et al., 2008). Moore et al. (2014) provide a useful framework, which
describes key elements that a process evaluation might investigate: mechanisms of change,
intervention delivery and contextual factors. This guidance, like that of the Kellogg
Foundation (2004) and intervention mapping, recommends beginning with
mechanisms, or processes, of change and suggests that intervention designers develop
a ‘logic model’ that maps out the regulatory process(es) that the intervention is expected
to change (see stage 3 of intervention mapping). This allows identification of measures
of processes or mechanisms that need to be included. Process measures may include
cognition measures such as measures of attitude, self-efficacy or planning that can be
assessed in both the control and intervention groups before and after intervention
delivery. Such measurement allows mediation analyses to be conducted, testing
whether the assumed change mechanisms account for the success of the intervention.
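For readers who analyse trial data themselves, the following sketch illustrates the logic of such a mediation analysis in Python. It is an illustrative example only, not an analysis reported in this chapter: the variable names (a 0/1 group allocation indicator, a post-intervention self-efficacy score as the hypothesised mediator and a behavioural outcome), the use of the statsmodels library and the bootstrap settings are all assumptions made for the sketch.

import numpy as np
import statsmodels.formula.api as smf

def indirect_effect(df):
    # df is a pandas DataFrame with 'group' (0 = control, 1 = intervention),
    # 'self_efficacy' (hypothesised mediator) and 'outcome' columns.
    # Path a: effect of group allocation on the mediator.
    a = smf.ols("self_efficacy ~ group", data=df).fit().params["group"]
    # Path b: effect of the mediator on the outcome, adjusting for group allocation.
    b = smf.ols("outcome ~ group + self_efficacy", data=df).fit().params["self_efficacy"]
    # Product-of-coefficients estimate of the indirect (mediated) effect.
    return a * b

def bootstrap_ci(df, n_boot=2000, seed=0):
    # Percentile bootstrap confidence interval for the indirect effect.
    rng = np.random.default_rng(seed)
    estimates = [
        indirect_effect(df.sample(frac=1.0, replace=True,
                                  random_state=int(rng.integers(0, 2**31 - 1))))
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])

# Hypothetical usage, assuming trial_data holds the three columns named above:
# estimate = indirect_effect(trial_data)
# lower, upper = bootstrap_ci(trial_data)

A bootstrap interval that excludes zero would be consistent with the hypothesised mediator accounting for part of the intervention effect; a fuller analysis would also adjust for baseline scores and relevant covariates.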
Process evaluations also investigate intervention delivery, trial management (such
as the management of participant recruitment to a trial) and implementation processes
(such as adaptations that were made when delivering the intervention in different
contexts). A key part of process evaluation is the assessment of fidelity of delivery, that
is, was the intervention delivered as planned? This includes ‘dose’, that is, did recipients
receive enough of the intervention? Glasgow et al. (2002) and Green and Glasgow
(2006) provide a useful framework for evaluating intervention delivery and
implementation: the RE-AIM (Reach, Effectiveness, Adoption, Implementation and
Maintenance) framework. Reach refers to how many of the target population were
involved in an evaluation and how representative they were. For example, if an
intervention was evaluated using economically advantaged participants then questions
would arise as to whether it would also be effective for economically less advantaged
people – or, for example, with those with more severe health problems than the

