Evidence-Based Practice for Nurses

Content validity is established when researchers know that the instrument measures the concept it is intended to measure. This requires that researchers clearly define the concept being studied to ensure that the selected instrument fits.

Researchers test for content validity in two ways: face validity and content validity testing. Both methods involve having others examine the instrument. To obtain face validity, researchers ask colleagues or subjects to examine an instrument and indicate whether it appears to measure the concept. Face validity is less desirable than content validity because it relies on an intuitive approach. To implement content validity testing, researchers give an instrument to a panel of experts on the concept. The experts judge the instrument by rating each item for the degree to which it reflects the concept being measured. After researchers receive feedback from the panel, they make adjustments to the instrument: items that receive high ratings are kept, and items with low ratings are altered or eliminated. For example, when Beyer, Denyes, and Villarruel (1992) developed a pediatric photographic pain scale, they asked children, rather than adults, to rank pictures of a child experiencing pain. Children placed the pictures in order from no pain to most pain. Researchers found consensus among children regarding some photographs and eliminated others when children disagreed about the amount of pain being expressed.
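The expert-rating step described above is often summarized quantitatively. One common summary, not named in this chapter, is the item-level content validity index (I-CVI): the proportion of experts who rate an item as relevant. The following sketch assumes a hypothetical five-expert panel using a 1-4 relevance scale, and the 0.78 cutoff is one commonly cited rule of thumb, not a requirement stated in the text.

```python
# Sketch: summarizing expert panel ratings with an item-level content
# validity index (I-CVI). The ratings, items, and 0.78 cutoff are
# illustrative assumptions, not data from the chapter.

def item_cvi(ratings):
    """Proportion of experts rating an item relevant (3 or 4 on a 1-4 scale)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical ratings from a five-expert panel for three instrument items.
panel_ratings = {
    "item_1": [4, 4, 3, 4, 3],   # high agreement -> keep
    "item_2": [4, 3, 4, 4, 4],   # high agreement -> keep
    "item_3": [2, 1, 3, 2, 2],   # low agreement -> revise or eliminate
}

for item, ratings in panel_ratings.items():
    cvi = item_cvi(ratings)
    decision = "keep" if cvi >= 0.78 else "revise or eliminate"
    print(f"{item}: I-CVI = {cvi:.2f} -> {decision}")
```

This mirrors the decision rule in the text: highly rated items are kept, while low-rated items are altered or eliminated.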

Criterion-related validity is concerned with the degree to which the observed score and the true score are related. Researchers test for criterion-related validity in two ways: concurrent validity and predictive validity. Concurrent validity is tested when researchers simultaneously administer two different instruments measuring the same concept. Usually, the new instrument being developed is compared to an instrument already shown to be valid. Researchers use correlations to compare scores from the two instruments. High correlations indicate agreement between the instruments, whereas low correlations indicate that the instruments are measuring different concepts. Predictive validity refers to whether a current score is correlated with a score obtained in the future. For example, suppose that a class of sophomore nursing students completed an instrument measuring critical thinking today, and they will complete the instrument again 1 month from now. If the instrument has good predictive validity, their scores will be correlated.
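Both concurrent and predictive validity rest on the same calculation: a correlation between two sets of scores. The sketch below computes Pearson's r from scratch; the five students and their scores are hypothetical, standing in for either a new instrument versus an established one (concurrent) or today's scores versus scores 1 month later (predictive).

```python
# Sketch: the correlation behind criterion-related validity testing.
# Scores are hypothetical; the chapter specifies correlations but not
# this particular coefficient or data.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sets of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students on a new critical-thinking
# instrument and on an established (criterion) instrument.
new_scores = [72, 85, 90, 64, 78]
criterion_scores = [70, 88, 86, 60, 80]

print(f"r = {pearson_r(new_scores, criterion_scores):.2f}")
```

A strong positive r (near 1) would suggest the two score sets agree, as the text describes; an r near 0 would suggest the instruments measure different concepts.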

Construct validity focuses on theory. Constructs are theoretical concepts that are tested empirically. When researchers test for construct validity, they ask how well the instrument measures a theoretical concept. Because establishing construct validity requires empirical testing, more sophisticated strategies implemented over a longer period of time are needed. There are a number of ways to determine construct validity: hypothesis testing, convergent testing,

KEY TERMS

face validity: A test for content validity in which colleagues or subjects examine an instrument and are asked whether it appears to measure the concept

content validity testing: A test in which experts on the topic are asked to judge each item on an instrument by assigning a rating to determine its fit with the concept being measured

concurrent validity: A test for criterion-related validity in which a new instrument is administered at the same time as an instrument known to be valid; scores of the two instruments are compared, and strong positive correlations indicate good validity

predictive validity: A test for criterion-related validity in which a new instrument is given at two different times and scores are correlated; strong positive correlations indicate good validity

hypothesis testing: Collection of objectively measurable data that are gathered through the five senses to confirm or refute a hypothesis; empirical testing; a test for construct validity

CHAPTER 10 Collecting Evidence
