TABLE 10-3 Reliability: Does the Instrument Yield the Same Results on Repeated Measurements?

| Type | What Is Determined? | Description |
| --- | --- | --- |
| Test-retest | Stability | New instrument is given at two different times under the same conditions. Scores are correlated; strong positive correlations indicate good reliability. |
| Parallel or alternate form | Stability, equivalence | New instrument is given in two different versions. Scores are correlated; strong positive correlations indicate good reliability. |
| Interrater reliability | Equivalence | Two observers measure the same event. Scores are correlated; strong positive correlations indicate good reliability. |
| Split-half | Internal consistency | The items are divided to form two instruments. Both instruments are given, and the halves are compared using the Spearman-Brown formula. |
| Item to total | Internal consistency | Each item is correlated with the total score. Reliable items have strong correlations with the total score. |
| Kuder-Richardson coefficient | Internal consistency | Used with dichotomous items. A computer is used to compare all items simultaneously. |
| Cronbach's alpha | Internal consistency | Used with interval or ratio items. A computer is used to compare all items simultaneously. |
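For reference, the Spearman-Brown formula named in the split-half row and the coefficient behind Cronbach's alpha take the following standard forms (not written out in the table itself):

```latex
% Spearman-Brown correction: r is the correlation between the two halves;
% r_SB estimates the reliability of the full-length instrument.
r_{SB} = \frac{2r}{1 + r}

% Cronbach's alpha for k items with item variances s_i^2 and total-score
% variance s_T^2; with dichotomous (0/1) items this reduces to KR-20.
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_T^2}\right)
```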
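Where the table says "a computer is used," any statistics package will do. As a minimal sketch of what that computation involves, the internal-consistency statistics above can be reproduced with NumPy; the `scores` array and all variable names here are invented for illustration, not taken from the text:

```python
import numpy as np

# Hypothetical data: one row per respondent, one column per item.
scores = np.array([
    [4, 5, 4, 3, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 4, 4, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 4],
])

# Split-half: divide the items into two halves (odd/even items here),
# correlate the half scores, then estimate full-length reliability with
# the Spearman-Brown formula r_SB = 2r / (1 + r).
half1 = scores[:, 0::2].sum(axis=1)
half2 = scores[:, 1::2].sum(axis=1)
r = np.corrcoef(half1, half2)[0, 1]
spearman_brown = 2 * r / (1 + r)

# Cronbach's alpha for interval or ratio items:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# Run on dichotomous (0/1) items, the same computation yields KR-20.
k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Item to total: correlate each item with the total score; reliable items
# show strong correlations with the total.
total = scores.sum(axis=1)
item_total = [np.corrcoef(scores[:, i], total)[0, 1] for i in range(k)]

print(f"Split-half (Spearman-Brown): {spearman_brown:.2f}")
print(f"Cronbach's alpha:            {alpha:.2f}")
print("Item-total correlations:", np.round(item_total, 2))
```

Splitting on odd and even items, rather than first half versus second half, keeps the split from being confounded with item order or respondent fatigue.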
KEY TERMS

internal consistency: An attribute of reliability in which all items on an instrument measure the same concept.

test-retest reliability: A test for instrument reliability in which a new instrument is given at two different times under the same conditions; scores are correlated, and strong positive correlations indicate good reliability.

parallel form: A test for instrument reliability in which two different versions of a new instrument are given; scores are correlated, and strong positive correlations indicate good reliability. Also known as alternate form.

alternate form: A test for instrument reliability in which two different versions of a new instrument are given; scores are correlated, and strong positive correlations indicate good reliability. Also known as parallel form.