112 MEANING AND METHODOLOGY
been innovative in pursuit of the ideal, cross-checking meaning-making between participants in a single
study or, in the case of historical research, developing additional techniques for assessing researchers’ repre-
sentations from documents of earlier time periods (see Brandwein, chapter 12, this volume).
- Atkinson, Coffey, and Delamont object to calling “grounded theory” a “technique” (2003, 151), but
space precludes recounting the nuance of their argument here.
- To be sure, there is significant attention to the importance of pretesting surveys to counter such
problems and to the use of focus groups to discover the kinds of language most likely used by potential
respondents. Underlying these efforts, however, is the assumption that there is a “communication problem”
to be solved rather than, as in the interpretive approach, an appreciation of the deep contextuality of human
meaning-making processes, which neither pretesting nor focus groups can eliminate. Once the final deci-
sions about question wording have been made, administering a survey enacts a one-size-fits-all logic that
perforce ignores individual differences in interpretation of questions and, depending on the scope of the
survey, group cultural differences as well. For an example of this latter difficulty, see the discussion in Pader
(chapter 8, this volume, pp. 168–70). For a discussion of this issue overall, see Suchman and Jordan (1992).
- Practices vary by discipline and journal. At a minimum, reviewers can request such information or
authors may indicate that such information is available on request. Online publishing provides new ways
to promote more widespread access to such background materials, although clear norms have yet to be
established.
- The connection of such practices to other criteria for evaluation is apparent. Documenting one’s
research process is part and parcel of showing a study’s trustworthiness, a way of saying, “I was there and
this is what I did,” as “thick description” likewise shows.
- This question sometimes involves an accusation of researcher bias—that a researcher is assumed to
favor a particular outcome for which, consciously or unconsciously, she garners positive evidence while
ignoring the negative cases. This charge comes most often from a positivist perspective that presumes that
“word analysis” is somehow less systematic and less objective than quantitative analysis. For a discussion of
bias, objectivity, and interpretive methodology, see chapter 4, this volume.
- Negative case analysis bears a family resemblance to counterfactual analysis (prevalent in the study
of international relations and historical research) in that both can be used to check and/or tighten the argu-
ment of the researcher. Whereas negative case analysis techniques use existing data to check initial data-
based inferences, counterfactual analysis uses imagined scenarios to explore the theoretical logic of the
argument. On the need for more rigorous counterfactual analysis, see Fearon (1991).
- A similar process of moving from “puzzlement” to “insight” can occur in many of the forms of
interpretive data analysis, from deconstruction (Clair 1993) to semiotics (Feldman 1995).
- Agar’s methodological advice on what to expect concerning meaning-making in the field can be
contrasted with the National Science Foundation Report of the Workshop on the Scientific Foundations of
Qualitative Research. Though laudable in its goal of “making qualitative projects competitive in the NSF
review process” (Ragin, Nagel, and White 2004, 3), the executive summary emphasizes that grant suppli-
cants should provide and reviewers should expect to find “plans to look for and interpret disconfirming
evidence, alternative explanations, unexpected findings, and new interpretations—try to be wrong as well as
right” (2004, 17, emphasis in original). It is not so much the specific language or the impulse that is prob-
lematic. As discussed, this same impulse is part of the interpretive emphasis on triangulation, reflexivity, and
negative case analysis; that is, it is appropriate to seek a complex, in-depth portrait of whatever phenomenon
is under study. What is troublesome is the failure to recognize that for interpretive researchers, predicting
what will constitute “disconfirming evidence” at the design stage is inconsistent with interpretive presuppo-
sitions that researchers seek to understand the meaning of others. The NSF requirements put the cart before
the horse in stipulating that the researcher should know ahead of time what can only be discovered in the
midst of doing research.
- As discussed in the Introduction to Part III, ethnography is simultaneously a method of data access,
generation, and analysis.
- There is a similar diversity of statistical analytic techniques, but these share an epistemological edi-
fice most easily understood in terms of the language of “Type I” and “Type II” errors, in which what the
researcher claims is compared to a “true situation in the world” (Neuman 1997, 323). A “Type I” error is
made when a researcher claims that a relationship exists in the data when, due to sampling error, that is not
the case in the world. A “Type II” error is made when the researcher claims that a relationship does not exist,
based on the sample data, when in fact such a relationship is the case in the world. This distinction assumes