Green (2002), for example, notes that in “constructivist comparative politics,” “constructivism” refers specifically to a meeting ground between (social constructionist) agency and Giddens’s structuration theory. Moreover, Ted Hopf (2002) draws a taxonomy within international relations (IR) among three different treatments of constructionism. And, as mentioned in the previous note, the two terms are used differently in sociology and psychology. Gergen (1999, 236–37), for instance, distinguishes between the use of constructivism in developmental psychology to mean individuals’ mental constructs, inside their respective heads, of their experienced world and constructionists’ emphasis on social processes of reality construction. We will not attempt to sort out these definitional and conceptual knots, which are apparently afflicting other fields as well: Hacking identifies a book on the social construction of literacy that concerns “innovative ways of teaching children to read”; it offers, that is, a “‘social perspective’ on how children learn to read, or don’t,” rather than treating the idea of literacy as a social construct (1999, 35). We note, then, that our approach is in the same spirit as that of Patrick Jackson, who writes: “I utilize this term [social constructionism] rather than the more familiar [in international relations] ‘constructivism’ in order to signal the fact that my inspiration is the sociological and social theoretical literatures... rather than what might be called ‘mainstream IR constructivism,’” reflecting the former’s concern with “intersubjective negotiations of meaning and processes of social transaction” as distinct from the latter’s “causal impact of roles and norms” and debates over “logics of consequences” versus “logics of appropriateness” as “better accounts of social behavior” (2002c, 258, n. 12). See also Hacking (1999) for a detailed parsing of the various meanings of the phrase “social construction.”


  1. However, as discussed in chapter 1, observation of that door depends on an a priori conceptual
    framework, as demonstrated most dramatically by Roberson and her colleagues’ research (Roberson 2005;
    Roberson, Davies, and Davidoff 2000) on color perception. This research suggests that the “reasonableness”
    of a methodologically positivist procedure for counting doors may be self-evident within a single cultural or
    epistemic community but not necessarily so across interpretive communities.

  2. The subtitle of the recently published Brady and Collier volume (2004)—Diverse Tools, Shared
    Standards—illustrates this point: Sharing standards between quantitative methods and qualitative methods
    in the forms advanced by Brady and Collier is possible because the two modes, in their view, (should) share
    epistemological and ontological presuppositions.

  3. We add “nonexperimental” to distinguish “field studies,” understood as ethnographic, participant-
    observation, or site-based interviewing projects, from “field experiments or natural experiments.” On the
    latter see, for example, Yale University’s Summer Program “Designing, Conducting, and Analyzing Field
    Experiments,” sponsored by the Institution for Social and Policy Studies, in operation since 2000.

  4. But “collecting” may not even be an accurate representation of what goes on in the natural sciences,
    nor is the sort of data “collection” done in some of the natural sciences necessarily a good model for all of the
    physical sciences, such as physics or astronomy, which study things that are difficult to observe (e.g.,
    atoms, electrons, black holes). As Amann and Knorr Cetina (1990), Latour (1990, 1999), and Lynch and
    Woolgar (1990a) point out, as often as not it is slices of botanical substances or representational forms of
    observed things and events, such as maps and diagrams, that are brought back to laboratories, rendered as
    data, and there transformed into “evidence.”

  5. We are playing here on the opposition established in the title of Claude Lévi-Strauss’s well-known
    anthropological work The Raw and the Cooked (1969). We do not mean to imply that researchers “cook”
    their data in the sense that accountants might be accused of “cooking”—falsifying—their books, only that
    from an interpretive perspective there can be no unfiltered data. Kuhn (1970) makes a similar point in noting
    that things observed do not present themselves as “facts” without some theoretical framework.

  6. Formal modelers now appear to share statistical researchers’ preference for numerical data over other
    data forms. In the past, modelers were taken to task by statistical researchers for their reliance on ad hoc
    examples to “translate” the logic of their models into prose and for their failure to provide “empirical” (i.e.,
    quantitative) data for the testing of their models (D.P. Green and Shapiro 1994). Indeed, the “words” they
    used were examples drawn from their own heads rather than generated systematically from context-specific
    documentary or field research, as in interpretive research. Modelers seem to have accepted the argument
    that “words” are not good enough as a test of their models. Morton (1999) has
    sought to close the “gap” between modelers and statisticians by recommending graduate training in both
    modeling and statistics. The National Science Foundation has funded a number of summer institutes in
    Empirical Implications of Theoretical Models (EITM) to bring these two groups of scholars together and to
    “train a new generation of scholars who can better link theory and empirical work” (http://eitm.berkeley.edu).
    Advanced graduate students and junior faculty are encouraged to apply to the institutes, but they should have
