Social Research Methods: Qualitative and Quantitative Approaches

QUALITATIVE AND QUANTITATIVE MEASUREMENT

whereas people in their fifties understated their true age, the indicator has a low degree of representative reliability. To have representative reliability, the measure needs to give accurate information for every age group.

A subpopulation analysis verifies whether an indicator has this type of reliability. The analysis compares the indicator across different subpopulations or subgroups and uses independent knowledge about them. For example, I want to test the representative reliability of a questionnaire item that asks about a person's education. I conduct a subpopulation analysis to see whether the question works equally well for men and women. I ask men and women the question and then obtain independent information (e.g., check school records) and check to see whether the errors in answering the question are equal for men and women. The item has representative reliability if men and women have the same error rate.
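The error-rate comparison just described can be sketched in a few lines of code. The sketch below assumes we have each respondent's self-reported years of schooling alongside the value found in school records; all names and data are invented for illustration.

```python
# Sketch of a subpopulation analysis for representative reliability:
# compare the error rate of a questionnaire item (self-reported years
# of schooling) against independent school records, by subgroup.

def error_rate(reported, recorded):
    """Fraction of respondents whose answer disagrees with the record."""
    errors = sum(1 for said, actual in zip(reported, recorded) if said != actual)
    return errors / len(reported)

# Invented data: self-reports and matching school records.
men_reported   = [12, 16, 14, 12, 18]
men_recorded   = [12, 16, 12, 12, 18]   # one discrepancy
women_reported = [16, 12, 14, 13, 11]
women_recorded = [16, 12, 14, 12, 11]   # one discrepancy

men_err = error_rate(men_reported, men_recorded)
women_err = error_rate(women_reported, women_recorded)

# The item shows representative reliability for this comparison if the
# two error rates are (roughly) equal.
print(f"men: {men_err:.2f}, women: {women_err:.2f}")
```

Here both subgroups err at the same rate (0.20), so the item passes this particular check; a real analysis would also ask whether any difference in rates is statistically meaningful.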



3. Equivalence reliability applies when researchers use multiple indicators, that is, when a construct is measured with multiple specific measures (e.g., several items in a questionnaire all measure the same construct). Equivalence reliability addresses the question: Does the measure yield consistent results across different indicators? If several different indicators measure the same construct, then a reliable measure gives the same result with all indicators.

We verify equivalence reliability with the split-half method. This involves dividing the indicators of the same construct into two groups, usually by a random process, and determining whether both halves give the same results. For example, I have fourteen items on a questionnaire. All measure political conservatism among college students. If my indicators (i.e., questionnaire items) have equivalence reliability, then I can randomly divide them into two groups of seven and get the same results. For example, I use the first seven questions and find that a class of fifty business majors is twice as conservative as a class of fifty education majors. I get the same results using the second seven questions. Special statistical measures (e.g., Cronbach's alpha) also can determine this type of reliability. A special type of equivalence reliability, intercoder reliability,
can be used when there are several observers, raters, or coders of information. In a sense, each observer is an indicator. A measure is reliable if the observers, raters, or coders agree with each other. This measure is a common type of reliability reported in content analysis studies. For example, I hire six students to observe student seating patterns in a cafeteria. If all six are equally skilled at observing and recording, I can combine the information from all six into a single reliable measure. But if one or two students are lazy, inattentive, or sloppy, my measure will have lower reliability. Intercoder reliability is tested by having several coders measure the exact same thing and then comparing the measures. For instance, I have three coders independently code the seating patterns during the same hour on three different days. I compare the recorded observations. If they agree, I can be confident of my measure's intercoder reliability. Special statistical techniques measure the degree of intercoder reliability.
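The statistical checks named in this subsection (the split-half comparison, Cronbach's alpha, and an intercoder agreement coefficient such as Cohen's kappa) can be illustrated with a small self-contained computation. The scores below are invented, use four items rather than the chapter's fourteen for brevity, and the helper functions are simplified sketches, not substitutes for a statistics package.

```python
# Illustrative reliability statistics on invented data (sketch only).

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])
    items = [[r[i] for r in rows] for i in range(k)]   # regroup scores by item
    totals = [sum(r) for r in rows]                    # total score per respondent
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / len(xs)
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

def cohen_kappa(a, b):
    """Chance-corrected agreement between two coders' category lists."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Six respondents answer four conservatism items on a 1-5 scale (invented).
answers = [
    [5, 4, 5, 4],
    [2, 2, 1, 2],
    [4, 4, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [5, 5, 4, 5],
]

# Split-half: correlate the summed first half with the summed second half.
half_a = [r[0] + r[1] for r in answers]
half_b = [r[2] + r[3] for r in answers]
split_half_r = pearson(half_a, half_b)   # close to 1.0 means consistent halves

alpha = cronbach_alpha(answers)          # by convention, >= 0.7 is acceptable

# Two coders classify the same eight cafeteria seating observations (invented).
coder1 = ["alone", "group", "group", "alone", "group", "alone", "group", "group"]
coder2 = ["alone", "group", "alone", "alone", "group", "alone", "group", "group"]
kappa = cohen_kappa(coder1, coder2)      # agreement beyond chance

print(f"split-half r = {split_half_r:.2f}, alpha = {alpha:.2f}, kappa = {kappa:.2f}")
```

With these invented scores the split-half correlation and alpha both come out above 0.9, and the two coders agree on seven of eight cases, giving a kappa of 0.75; real studies report these same quantities, usually computed with standard statistical software.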

How to Improve Reliability. It is rare to have perfect reliability. We can do four things to improve reliability: (1) clearly conceptualize constructs, (2) use a precise level of measurement, (3) use multiple indicators, and (4) use pilot tests.


1. Clearly conceptualize all constructs. Reliability increases when each measure indicates one and only one concept. This means we must develop unambiguous, clear theoretical definitions. Constructs should be specified to eliminate "noise" (i.e., distracting or interfering information) from other constructs. For example, the indicator of a pure chemical compound is more reliable than the indicator in which the chemical is mixed with other material or dirt. In the latter case, separating the


Equivalence reliability: Measurement reliability across indicators; a measure that yields consistent results using different specific indicators, assuming that all measure the same construct.

Multiple indicators: The use of multiple procedures or several specific measures to provide empirical evidence of the levels of a variable.