Academic Leadership

Chapter 8 – Monitor

Most Australian universities also collect, analyse and report on measures equivalent
to those used in the CEQ for internal purposes. For example, they will administer local
surveys (at both course and program level) that ask questions similar or related to those
asked in the CEQ. Although the wording of these internal surveys may differ slightly from
the external measures, the two are usually aligned. Thus, there is a wealth of
information available to both Academic Coordinators and Course Coordinators to
consider in their Monitor role.
Walsh (2005, as cited in Quinn, Faerman, Thompson, & McGrath, 2003, p. 135)
distinguishes between measures that focus on outcomes, processes and initiatives.
Exact measures of outcomes are ‘complete measures that cover all the key attributes for
the outcome under consideration’ (Quinn et al., 2003, p. 135). Exact measures of
outcomes may be ‘subjective’ (dependent on personal judgement) or ‘objective’ (able to
be verified independently) and subject to different levels of responsiveness, where
responsiveness is the extent to which the manager can act to influence the measure.
Proxy measures of outcomes are used to make inferences about exact measures. They
are incomplete but are often used because they are easier to obtain. Indeed, it is very
difficult to obtain exact measures of outcomes in relation to educational programs,
because so much of the data is subjective and at times contradictory. For example, this
is often the case with course and teacher evaluations where different groups of students,
or even students within the same group, will make opposing statements about quality
measures. Factors such as sample size and response rates as well as trends over time
are also important factors to consider when interpreting any such data.
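To make the influence of sample size and response rate concrete, the following sketch (with entirely hypothetical enrolment and score figures) shows how a coordinator might quantify the uncertainty around a course-evaluation mean before drawing conclusions from it:

```python
# Illustrative sketch with hypothetical numbers: a low response rate and a
# small sample widen the uncertainty around a course-evaluation mean score.
import math

def response_rate(responses: int, enrolled: int) -> float:
    """Fraction of enrolled students who returned the survey."""
    return responses / enrolled

def margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a mean score,
    assuming a simple random sample."""
    return z * std_dev / math.sqrt(n)

# Hypothetical course: 120 enrolled, 36 responses, mean 3.8 (sd 0.9) on a 5-point scale
rate = response_rate(36, 120)   # 0.30 — low; interpret cautiously
moe = margin_of_error(0.9, 36)  # roughly ±0.29 around the mean
print(f"response rate: {rate:.0%}, mean 3.8 ± {moe:.2f}")
```

A mean of 3.8 with a margin of roughly ±0.3 may be indistinguishable from last year's 3.6, which is why the text stresses trends over time rather than single readings.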
In organisations, both exact and proxy measures of outcomes are frequently used to
measure achievement of strategic institutional or organisational objectives. However,
when measuring performance it is not sufficient to measure and monitor only the
outcomes. It is also necessary to understand and measure the processes that lead to
those outcomes (Quinn et al., 2003, p. 135). In an educational setting where the
outcomes are associated with student learning, the processes needed are linked to
student and teacher effort, and program and course design. These processes are also
measured to some degree, both directly and indirectly, in the CEQ and equivalent
surveys related to courses and teachers.
The GDS, CEQ and related course evaluation instruments are proxy measures of the
quality of the processes and student learning outcomes of programs and courses rather
than exact measures of outcomes of teaching quality. The CEQ measures how students
experience critical aspects of the learning environment within their program of study
(Patrick, 2003, p. 3); most course evaluation instruments do the same at course level.
Other surveys do this at teacher level – for example, Student Evaluation of Teaching
questionnaires.
The GDS, CEQ and almost any other evaluation instrument will not provide you with
the information you need to make actual improvements in students' learning
experiences. For instance, they will not help you to discover what feedback different
students find useful, what difficulties they are experiencing, or which assessment tasks
they perceive as requiring only memorisation and which therefore encourage shallow
approaches to learning. Nevertheless, the data gathered via such surveys will help you
to gauge how your program or course is going overall and to identify areas requiring
further investigation.
Other sources of information that you can use to measure and monitor program
performance include, but are not limited to, success and retention rates, peer evaluation
