Evidence-Based Practice for Nurses


divergent testing, multitrait-multimethod testing, known group testing, and
factor analysis.


For hypothesis testing, researchers use theories to make predictions about
the concept being measured. Data are gathered, and a determination is made
as to whether the findings support the hypothesis. In the study by Beyer et al.
(1992), the authors hypothesized that pain scores would be highest on the day
of the surgical procedure and then gradually decrease. They found that scores
were highest on the operative day and gradually decreased over the course of
the hospital stay. The construct validity of the Oucher, a children’s pain scale,
was supported because data were consistent with predicted pain patterns.
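This predicted-pattern logic can be sketched numerically. In the minimal Python example below, all scores are invented (they are not Beyer et al.'s data); the point is only that a strong negative correlation between postoperative day and mean pain score matches the predicted downward pattern.

```python
# Hypothetical sketch of hypothesis testing for construct validity.
# Prediction: pain is highest on the operative day (day 0) and then
# declines. All numbers below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

days = [0, 1, 2, 3, 4]               # postoperative day
mean_oucher = [82, 64, 47, 31, 18]   # invented mean Oucher scores (0-100)

r = pearson_r(days, mean_oucher)
print(f"day-by-score correlation: r = {r:.2f}")
# A strong negative r is consistent with the predicted pattern,
# supporting construct validity.
```

If scores instead rose or fluctuated randomly over the stay, the correlation would be weak or positive, and the hypothesis would not be supported.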


When researchers use two or more instruments to measure the same theoretical
component, they are testing for convergent validity. Convergent validity focuses
on how the observed scores compare to one another. For example, Beyer et al.
(1992) compared the Oucher to a VAS pain scale. Pain ratings were highly correlated, thus establishing convergent validity. Divergent validity testing involves
comparing scores from two or more instruments that measure different theoretical
constructs. In this strategy, it is not unusual for researchers to compare opposites;
for example, depression and happiness. In this situation, negative correlations
support construct validity. When convergent and divergent testing are combined, researchers are using a strategy known as multitrait-multimethod testing. This approach is especially helpful for reducing systematic error.
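The expected correlation patterns can be illustrated with a short sketch. All scores below are invented, and the "comfort" scale is a hypothetical opposite-construct instrument introduced only for this illustration.

```python
# Hypothetical sketch of convergent and divergent validity testing.
# All scores are invented; "comfort" is a made-up opposite construct.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

oucher  = [20, 45, 60, 75, 90, 30, 55]  # new pain instrument
vas     = [18, 50, 58, 80, 88, 28, 60]  # established pain scale (same construct)
comfort = [85, 62, 45, 28, 15, 78, 52]  # invented opposite-construct scale

convergent = pearson_r(oucher, vas)      # expect strong positive
divergent  = pearson_r(oucher, comfort)  # expect strong negative
print(f"convergent (Oucher vs. VAS):     r = {convergent:+.2f}")
print(f"divergent  (Oucher vs. comfort): r = {divergent:+.2f}")
```

Administering all three instruments at once and examining both correlations together is the multitrait-multimethod pattern described above.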


Another way to test for construct validity is to use the known group approach.
Instruments are administered to individuals known to be high or low on the
characteristic being measured. Researchers expect that there will be significantly
different scores between the high group and the low group. Beyer et al. (1992)
tested known groups by comparing pain scores for children who had experienced
extensive surgical procedures with pain scores for children who had minor surgical
procedures. Children who had spinal fusions reported significantly higher pain
scores using the Oucher than did children who had a cardiac catheterization.
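The known-group comparison amounts to asking whether the two groups' mean scores differ significantly. The sketch below uses invented scores and a hand-computed Welch t statistic, one common way to compare two group means; it is an illustration, not the study's actual analysis.

```python
# Hypothetical known-groups sketch: children after extensive surgery
# are expected to score higher than children after a minor procedure.
# All scores are invented.
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for comparing two independent group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / sqrt(va / na + vb / nb)

spinal_fusion = [78, 85, 90, 72, 88, 81]  # invented Oucher scores
cardiac_cath  = [25, 32, 18, 28, 22, 30]

t = welch_t(spinal_fusion, cardiac_cath)
print(f"t = {t:.1f}")
# A large |t| (judged against the t distribution with the appropriate
# degrees of freedom) indicates significantly different group means,
# supporting construct validity.
```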


Most concepts have more than one dimension. These dimensions are known
as factors. Researchers use factor analysis, a statistical approach, to identify
questions that group around different factors. Items that group together on one factor correlate highly with one another; they may or may not correlate highly with items on a different factor. Questions that do not fit are altered or
eliminated. Because factor analyses require complex, simultaneous computations of correlations, computers are needed.
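A full factor analysis extracts factors from the item correlation matrix using matrix computations, typically with statistical software. The simplified sketch below, with invented responses, shows only the correlation pattern such an analysis exploits: items measuring the same dimension correlate highly, while items from different dimensions do not.

```python
# Invented questionnaire data illustrating the pattern factor analysis
# detects: two items tap pain intensity, two tap mood (hypothetical
# factors). This is not a factor analysis, only its raw material.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each list holds one item's answers from six invented respondents.
items = {
    "pain_1": [2, 5, 7, 9, 3, 6],
    "pain_2": [3, 6, 6, 8, 2, 7],
    "mood_1": [4, 9, 2, 6, 8, 3],
    "mood_2": [5, 8, 1, 7, 9, 2],
}

names = list(items)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: r = {pearson_r(items[a], items[b]):+.2f}")
# The pain items correlate strongly with each other, as do the mood
# items, while cross-factor correlations are weak -- the two clusters
# would emerge as separate factors.
```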


Reliability


Instruments are considered to be reliable when researchers obtain consis-
tent measurements over time. Reliability must be considered in relation to


KEY TERMS

convergent testing: A test for construct validity in which new instruments are administered at the same time as an instrument known to be valid; scores of the two instruments are compared, and strong positive correlations indicate good validity

divergent testing: A test for construct validity in which new instruments are administered at the same time as an instrument measuring the opposite of the concept; scores of the two instruments are compared, and strong negative correlations indicate good validity

multitrait-multimethod testing: A test for construct validity in which a new instrument, an established instrument of the same concept, and an established instrument of the opposite concept are given at the same time; strong positive and negative correlations indicate good validity

10.3 Validity and Reliability 267