characteristic. If O = T, the instrument would be perfect, but this is never the case because error (E) is always present during measurement: the observed score equals the true score plus error (O = T + E). Error can be either random or systematic. Random error is error that occurs by chance. It is difficult for researchers to control because it results from transient factors, which may include subject factors, instrumentation variations, or environmental factors. For example, have you ever made the mistake of answering a test question by filling in the wrong bubble? Although you knew the correct answer, something occurred that caused a mistake. In this example, the observed score (your test score) indicates that you did not know the content, whereas your true score would indicate otherwise.
Systematic error, also known as consistent error, occurs when the same error is introduced into every measurement. Like random error, it can result from subject, instrumentation, or environmental factors. For example, suppose a researcher measures body temperatures after an intervention. The researcher assumes that the electronic thermometer is measuring accurately; however, the device has not recently been calibrated, and every reading it reports is a half degree lower than the actual temperature. Because every temperature measured is affected in the same way, this is systematic error.
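To make the distinction concrete, here is a minimal simulation sketch in Python (purely illustrative; the 37.0-degree true temperature, the half-degree bias, and the size of the random fluctuation are assumptions chosen for this example, not values from any study):

```python
import random

TRUE_TEMP = 37.0   # the true score (T): actual body temperature in degrees C (assumed)
BIAS = -0.5        # systematic error: the uncalibrated thermometer's constant offset (assumed)

def observe(true_value):
    """Observed score (O) = true score (T) + random error + systematic error."""
    random_error = random.gauss(0, 0.1)  # transient, chance fluctuation
    return true_value + random_error + BIAS

readings = [observe(TRUE_TEMP) for _ in range(5)]
print([round(r, 2) for r in readings])
# Typical output: values scattered around 36.5 rather than 37.0.
# Averaging more readings cancels the random scatter, but the
# -0.5 degree systematic error remains in every measurement.
```

Averaging many readings washes out the random scatter but leaves the half-degree systematic error untouched, which is why calibration, not repetition, is the remedy for systematic error.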
If there is error in all measurement, how do researchers know when instruments are useful? Instruments can be tested using a variety of strategies to identify error. Researchers conduct methodological studies, whose purpose is to create and test instruments, and such reports appear in the nursing literature as well as in the literature of other disciplines. Another term associated with instrument testing is psychometrics, which refers to the development of measures for psychological attributes.
Validity
When selecting an instrument, researchers must first ask themselves whether it is valid. Validity refers to the degree to which an instrument measures what it is supposed to measure. A valid instrument reflects the characteristic or concept being measured. For example, a researcher might want to measure the amount of fear patients have prior to surgery. The researcher would need to carefully select an instrument that measures fear, and not anxiety, which is a similar but distinct concept. There are three kinds of validity: content validity, criterion-related validity, and construct validity (Table 10-2).
KEY TERMS
random error: Error that occurs by chance during measurement
systematic error: Error that occurs in the same way with each measurement
methodological: Studies conducted for the purpose of creating and testing new instruments
psychometrics: The development of instruments to measure psychological attributes
validity: The degree to which an instrument measures what it is supposed to measure
content validity: A kind of validity ensuring that an instrument measures the concept of interest
criterion-related validity: Degree to which scores on an instrument correspond to an external criterion of the same concept
construct validity: Degree to which an instrument accurately measures the theoretical concept it is intended to measure
CRITICAL THINKING EXERCISE 10-2
What if you were to take an exam two times? The first time, you sit at a desk in a nice chair in a comfortable room. The second time, you are expected to stand, writing on a clipboard, in a hot, stuffy room. Which observed score is likely to be closer to your true score? Why?