Clinical Psychology


Training Observers. There is no substitute for
the careful and systematic training of observers.
For example, observers who are sent into psychiat-
ric hospitals to study patient behaviors and then
make diagnostic ratings must be carefully prepared
in advance. It is necessary to brief them extensively
on just what the definition of, say, depression is,
what specific behaviors represent depression, and
so on. Their goal should not be to "please" their
supervisor by coming up (consciously or unconsciously)
with data "helpful" to the project. Nor
should they protect one another by talking over
their ratings and then "agreeing to agree."
Occasionally, there are instances of observer drift,
in which observers who work closely together sub-
tly, without awareness, begin to drift away from
other observers in their ratings. Although reliability
among the drifting observers may be acceptable, it
is only so because, over time, they have begun to
shift their definitions of target behaviors. Occasion-
ally, too, observers are not as careful in their obser-
vations when they feel they are on their own as
when they expect to be monitored or checked.
To guard against observer drift, regularly scheduled
reliability checks (by an independent rater) should
be conducted and feedback provided to raters.
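One common way to carry out such a reliability check is to compute a chance-corrected agreement statistic, such as Cohen's kappa, between each observer and the independent rater. The Python sketch below (the rating categories and data are invented for illustration) shows how drift can hide behind high pairwise agreement: the two drifting observers agree perfectly with each other yet only moderately with the independent check rater.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed proportion of agreement
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal proportions
    p_chance = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Two observers who have drifted together, plus an independent check rater
observer_1 = ["depressed", "depressed", "depressed", "other", "depressed", "other"]
observer_2 = ["depressed", "depressed", "depressed", "other", "depressed", "other"]
independent = ["other", "depressed", "other", "other", "depressed", "other"]

print(cohens_kappa(observer_1, observer_2))   # perfect pairwise agreement
print(cohens_kappa(observer_1, independent))  # much lower against the check rater
```

Feedback sessions would then focus on the categories where the drifting pair diverges from the independent rater, restoring a shared definition of the target behaviors.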


Variables Affecting Validity of Observations

At this point, it seems unnecessary to reiterate the
importance of validity. We have encountered the
concept before in our discussions of both inter-
viewing and testing; it is no less critical in the case
of observation. But here, issues of validity can be
deceptive. It seems obvious in interviewing that
what patients tell the interviewer may not corre-
spond to their actual behavior in non-interview set-
tings. When a child is observed to bully his peers
unmercifully and these observations are corrobo-
rated by reports from teachers, there would seem
to be little question of the validity of the observers’
data. Aggression is aggression! However, things are
not always so simple, as the following discussion
will illustrate.


Content Validity. A behavioral observation
schema should include the behaviors that are deemed
important for the research or clinical purposes at
hand. Usually, the investigator or clinician who
develops the system also determines whether or not
the system shows content validity. But this process is
almost circular in the sense that a system is valid if the
clinician decides it is valid. In developing the Behavioral
Coding System (BCS), Jones et al. (1975) circumvented
this problem by organizing several categories
of noxious behaviors in children and then submitting
them for ratings. By using mothers' ratings, they
were able to confirm their own a priori clinical judg-
ments as to whether or not certain deviant behaviors
were in fact noxious or aversive.

Concurrent Validity. Another way to approach
the validity of observations is to ask whether one’s
obtained observational ratings correspond to what
others (e.g., teachers, spouses, or friends) are
observing in the same time frame. For example,
do observational ratings of children’s aggression on
the playground made by trained observers agree
with the ratings made by the children’s peers? In
short, do the children perceive each other’s aggres-
sion in the same way that observers do?
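Agreement of this kind is often summarized as a correlation between the two sets of ratings. A minimal Python sketch (with invented playground-aggression scores for eight children) computes the Pearson correlation between trained observers' ratings and peer ratings collected in the same time frame:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical aggression ratings for eight children (higher = more aggressive)
observer_ratings = [2, 5, 1, 4, 3, 6, 2, 5]
peer_ratings = [3, 6, 1, 4, 2, 7, 2, 4]

print(round(pearson_r(observer_ratings, peer_ratings), 2))
```

A strong positive correlation would support concurrent validity; a weak one would suggest that trained observers and peers are not perceiving aggression the same way.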

Construct Validity. Observational systems are
usually derived from some implicit or explicit the-
oretical framework. For example, the BCS of Jones
et al. (1975) was derived from a social learning
framework that sees aggression as the result of
learning in the family. When the rewards for
aggression are substantial, aggression will occur.
When such rewards are no longer contingent on
the behavior, aggression should subside. Therefore,
the construct validity of the BCS could be demon-
strated by showing children’s aggressive behavior
declines from a baseline point after clinical treat-
ment, with clinical treatment defined as rearranging
the social contingencies in the family in a way that
should reduce the incidence of observed aggression.

Mechanics of Rating. It is important that a unit of
analysis be specified (Tryon, 1998). A unit of analy-
sis is the length of time observations will be made,

270 CHAPTER 9
