Social Research Methods: Qualitative and Quantitative Approaches

QUALITATIVE AND QUANTITATIVE MEASUREMENT

REVIEW QUESTIONS

1. What are the three basic parts of measurement, and how do they fit together?

2. What is the difference between reliability and validity, and how do they complement each other?

3. What are ways to improve the reliability of a measure?

4. How do the levels of measurement differ from each other?

5. What are the differences between convergent, content, and concurrent validity? Can you have all three at once? Explain your answer.

6. Why are multiple indicators usually better than one indicator?

7. What is the difference between the logic of a scale and that of an index?

8. Why is unidimensionality an important characteristic of a scale?

9. What are advantages and disadvantages of weighting indexes?

10. How does standardization make comparison easier?

NOTES

2. See Sullivan and Feldman (1979) on multiple indicators. A more technical discussion can be found in Herting (1985), Herting and Costner (1985), and Scott (1968).

  3. See Carmines and Zeller (1979:17). For a discussion
    of the many types of validity, see Brinberg and McGrath
    (1982).

  4. The epistemic correlation is discussed in Costner
    (1985) and in Zeller and Carmines (1980:50–51,
    137–139).

  5. Kidder (1982) discussed the issue of disagreements
    over face validity, such as acceptance of a measure’s
    meaning by the scientific community but not the subjects
    being studied.

  6. This was adapted from Carmines and Zeller
    (1979:20–21).

7. For a discussion of types of criterion validity, see Carmines and Zeller (1979:17–19); on construct validity, see Fiske (1982).

  8. See Cook and Campbell (1979) for elaboration.

  9. See Borgatta and Bohrnstedt (1980) and Duncan
    (1984:119–155) for a discussion and critique of the topic
    of levels of measurement.

10. Johnson and Creech (1983) examined the measurement errors that occur when variables that are conceptualized as continuous are operationalized in a series of ordinal categories. They argued that the errors are not serious if more than four categories and large samples are used.

  11. Duncan (1984:220–239) presented cautions from a
    positivist approach on the issue of measuring anything.

12. The terms concept, construct, and idea are used more or less interchangeably, but their meanings have some differences. An idea is any mental image, belief, or impression. It refers to any vague impression, opinion, or thought. A concept is a thought, a general notion, or a generalized idea about a class of objects. A construct is a thought that is systematically put together, an orderly arrangement of ideas, facts, and impressions. The term construct is used here because its emphasis is on taking vague concepts and turning them into systematically organized ideas.

  13. See Grinnell (1987:5–18) for further discussion.

14. See Blalock (1982:25–27) and Costner (1985) on the rules of correspondence or the auxiliary theories that connect an abstract concept with empirical indicators. Also see Zeller and Carmines (1980:5) for a diagram that illustrates the place of the rules in the measurement process. In his presidential address to the American Sociological Association in 1979, Hubert Blalock (1979a:882) said, “I believe that the most serious and important problems that require our immediate and concerted attention are those of conceptualization and measurement.”

  15. See Bailey (1984, 1986) for a discussion of the three
    levels.

  16. See Bohrnstedt (1992a,b) and Carmines and Zeller
    (1979) for discussions of reliability and its various
    types.
