TABLE 10-3 Reliability: Does the Instrument Yield the Same Results on Repeated Measurements?

| Type | What Is Determined? | Description |
|---|---|---|
| Test-retest | Stability | New instrument is given at two different times under the same conditions. Scores are correlated. Strong positive correlations indicate good reliability. |
| Parallel or alternate | Stability, Equivalence | New instrument is given in two different versions. Scores are correlated. Strong positive correlations indicate good reliability. |
| Interrater reliability | Equivalence | Two observers measure the same event. Scores are correlated. Strong positive correlations indicate good reliability. |
| Split-half | Internal consistency | The items are divided to form two instruments. Both instruments are given, and the halves are compared using the Spearman-Brown formula. |
| Item to total | Internal consistency | Each item is correlated to the total score. Reliable items have strong correlations with the total score. |
| Kuder-Richardson coefficient | Internal consistency | Used with dichotomous items. A computer is used to simultaneously compare all items. |
| Cronbach's alpha | Internal consistency | Used with interval or ratio items. A computer is used to simultaneously compare all items. |

KEY TERMS
internal consistency: An attribute of reliability in which all items on an instrument measure the same concept.

test-retest reliability: A test for instrument reliability in which a new instrument is given at two different times under the same conditions; scores are correlated, and strong positive correlations indicate good reliability.

parallel form: A test for instrument reliability in which two different versions of a new instrument are given; scores are correlated, and strong positive correlations indicate good reliability. Also known as alternate form.

alternate form: A test for instrument reliability in which two different versions of a new instrument are given; scores are correlated, and strong positive correlations indicate good reliability. Also known as parallel form.

10.3 Validity and Reliability 269
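The correlation-based reliability checks in Table 10-3 can be sketched in code. The sketch below is illustrative, not from the text: it computes a split-half correlation, steps it up with the Spearman-Brown formula (2r / (1 + r)), and computes Cronbach's alpha for a small invented set of item scores. All data values and function names are hypothetical, chosen only to show the arithmetic.

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(half_r):
    """Estimate full-test reliability from the half-test correlation."""
    return 2 * half_r / (1 + half_r)

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of item-score lists,
    one list per item, with respondents in the same order."""
    k = len(items)
    def var(v):  # sample variance
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical scores: 4 items rated 1-5 by 6 respondents.
items = [
    [3, 4, 5, 2, 4, 5],
    [2, 4, 4, 2, 3, 5],
    [3, 5, 5, 3, 4, 4],
    [2, 3, 4, 2, 4, 5],
]

# Split-half: odd-numbered items vs. even-numbered items,
# then the Spearman-Brown correction for full-test length.
odd = [sum(s) for s in zip(items[0], items[2])]
even = [sum(s) for s in zip(items[1], items[3])]
half_r = pearson_r(odd, even)
print("split-half r:", round(half_r, 3))
print("Spearman-Brown estimate:", round(spearman_brown(half_r), 3))
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```

For dichotomous (0/1) items, Cronbach's alpha reduces to the Kuder-Richardson (KR-20) coefficient, so the same `cronbach_alpha` computation applies to that row of the table as well.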