Social Research Methods: Qualitative and Quantitative Approaches

QUALITATIVE AND QUANTITATIVE MEASUREMENT

about pay and household tasks. They ignore the
other areas (intellectual pursuits, politics, authority
relations, and other aspects of work and family). For
a content-valid measure, I must either expand the
measure or narrow the definition.^11


  3. Criterion validity uses some standard or
    criterion to indicate a construct accurately. The va-
    lidity of an indicator is verified by comparing it with
    another measure of the same construct in which a
    researcher has confidence. The two subtypes of this
    type of validity are concurrent and predictive.^12
    To have concurrent validity, we need to asso-
    ciate an indicator with a preexisting indicator that
    we already judge to be valid (i.e., it has face valid-
    ity). For example, we create a new test to measure
    intelligence. For it to be concurrently valid, it should
    be highly associated with existing IQ tests (assum-
    ing the same definition of intelligence is used). This
    means that most people who score high on the old
    measure should also score high on the new one, and
    vice versa. The two measures may not be perfectly
    associated, but if they measure the same or a simi-
    lar construct, it is logical for them to yield similar
    results.
    Predictive validity is criterion validity in which
    an indicator predicts future events that are logically
    related to a construct. It cannot be used
    for all measures. The measure and the action pre-
    dicted must be distinct from but indicate the same
    construct. Predictive measurement validity should
    not be confused with prediction in hypothesis test-
    ing in which one variable predicts a different vari-
    able in the future. For example, the Scholastic
    Assessment Test (SAT) that many U.S. high school
    students take measures scholastic aptitude: the abil-
    ity of a student to perform in college. If the SAT has
    high predictive validity, students who achieve high
    SAT scores will subsequently do well in college. If
    students with high scores perform at the same level
    as students with average or low scores, the SAT has
    low predictive validity.
    Another way to test predictive validity is to se-
    lect a group of people who have specific character-
    istics and predict how they will score (very high or
    very low) vis-à-vis the construct. For example, I cre-
    ate a measure of political conservatism. I predict
    that members of conservative groups (e.g., John
Birch Society, Conservative Caucus, Daughters of
the American Revolution, Moral Majority) will
score high on it whereas members of liberal groups
(e.g., Democratic Socialists, People for the Ameri-
can Way, Americans for Democratic Action) will
score low. I “validate” it by pilot-testing it on mem-
bers of the groups. It can then be used as a measure
of political conservatism for the public.
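
To make the IQ-test example of concurrent validity above more concrete, here is a minimal sketch in Python (the book itself uses no code; the scipy library, the variable names, and all of the scores are assumptions invented purely for illustration). It correlates scores on a hypothetical new intelligence test with scores on an established IQ test; a strong positive correlation would be evidence of concurrent validity.

from scipy.stats import pearsonr

# Hypothetical scores for the same ten people on an established IQ test
# and on a newly created intelligence measure (illustrative values only).
established_iq = [95, 110, 102, 130, 88, 121, 99, 115, 107, 93]
new_test = [48, 61, 55, 75, 42, 68, 51, 63, 58, 45]

r, p_value = pearsonr(established_iq, new_test)
print(f"Correlation between the old and new measures: r = {r:.2f} (p = {p_value:.3f})")

# A strong positive r (high scorers on one measure tend to be high scorers
# on the other) is consistent with concurrent validity; a near-zero r
# suggests the new test is not tapping the same construct.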
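Predictive validity can be sketched in the same spirit. The fragment below is again purely illustrative: the SAT scores, grade point averages, conservatism-scale scores, and group labels are invented for demonstration, and Python with scipy is simply one convenient tool, not anything the text prescribes. It checks whether SAT scores track later college performance and whether members of known conservative and liberal groups score as predicted on the conservatism measure.

from statistics import mean
from scipy.stats import pearsonr, ttest_ind

# (1) Predictive validity as prediction of a future outcome:
# hypothetical SAT scores and later first-year college GPAs for eight students.
sat_scores = [1450, 1200, 1010, 1350, 980, 1480, 1120, 1290]
college_gpa = [3.8, 3.1, 2.6, 3.5, 2.4, 3.9, 2.9, 3.3]
r, _ = pearsonr(sat_scores, college_gpa)
print(f"SAT vs. later GPA: r = {r:.2f}")

# (2) The known-groups strategy: hypothetical scores on a 0-100
# conservatism scale for members of conservative and liberal organizations.
conservative_members = [82, 90, 77, 85, 88, 79, 91, 84]
liberal_members = [21, 35, 28, 19, 40, 25, 30, 22]
t_stat, p_value = ttest_ind(conservative_members, liberal_members)
print(f"Mean score, conservative group members: {mean(conservative_members):.1f}")
print(f"Mean score, liberal group members:      {mean(liberal_members):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A high SAT-GPA correlation and a large gap between the two groups, in the
# predicted direction, would both count as evidence of predictive validity.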


  4. Construct validity is for measures with
    multiple indicators. It addresses this question: If the
    measure is valid, do the various indicators operate
    in a consistent manner? It requires a definition with
    clearly specified conceptual boundaries. The two
    types of construct validity are convergent and dis-
    criminant.
    Convergent validity applies when multiple in-
    dicators converge or are associated with one an-
    other. It means that multiple measures of the same
    construct hang together or operate in similar ways.
    For example, I measure the construct “education”
    by asking people how much education they have
    completed, looking up school records, and asking
    the people to complete a test of school knowledge.
    If the measures do not converge (i.e., people who
    claim to have a college degree but have no records
    of attending college or those with college degrees
    perform no better than high school dropouts on my
    tests), my measure has weak convergent validity,
    and I should not combine all three indicators into
    one measure.
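
As a rough sketch of how convergent validity might be checked, the Python fragment below (hypothetical data; numpy is assumed, and the three indicators mirror the education example above) computes the pairwise correlations among self-reported schooling, school records, and a test of school knowledge. If the indicators converge, all of the pairwise correlations should be substantial and positive.

import numpy as np

# Hypothetical values for eight people on three indicators of "education":
# self-reported years of schooling, years shown in school records, and
# score on a test of school knowledge (illustrative values only).
self_reported_years = [12, 16, 10, 14, 16, 12, 18, 11]
recorded_years = [12, 16, 9, 14, 15, 12, 18, 10]
knowledge_test = [55, 80, 40, 65, 78, 52, 90, 45]

# Rows are variables, columns are people; corrcoef returns the 3 x 3
# matrix of pairwise correlations among the three indicators.
corr_matrix = np.corrcoef(np.array([self_reported_years, recorded_years, knowledge_test]))
print(np.round(corr_matrix, 2))

# If the off-diagonal correlations are all strongly positive, the three
# indicators "hang together" (convergent validity) and can reasonably be
# combined into one measure; weak or inconsistent correlations argue
# against combining them.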


Concurrent validity Measurement validity that re-
lies on a preexisting and already accepted measure to
verify the indicator of a construct.
Predictive validity Measurement validity that relies
on the occurrence of a future event or behavior that is
logically consistent with the construct to verify its indicator.

Convergent validity A type of measurement valid-
ity for multiple indicators based on the idea that indi-
cators of one construct will act alike or converge.

Construct validity A type of measurement validity
that uses multiple indicators and has two subtypes:
how well the indicators of one construct converge or
how well the indicators of different constructs diverge.

Criterion validity Measurement validity that relies
on some independent, outside verification.