EVALUATIVE CRITERIA AND EPISTEMIC COMMUNITIES 105
et al. 1977 [1961], or addicts, as in Burns 1980); to providing a critical, even potentially
emancipatory, perspective on group meanings and practices, as in some feminist research (see
Pierce 1995, on litigation lawyers, and D. Kaufman 1989, on religiously orthodox women). What
this discussion reveals is that the technique of “informant feedback/member checks” is entangled
with some of the more recent interpretive criteria, such as “voice” or “authenticity” and “critical
subjectivity” or “criticality” (see Lincoln 1995 and Brower, Abolafia, and Carr 2000 in Table
5.2); with other techniques, such as front-stage and back-stage representation (see Brower, Abolafia,
and Carr 2000 in Table 5.3); and with the concern, mentioned earlier, about whether such criteria
are inappropriately “political.”
Whatever the mix of researcher purpose and the power of the group under study, one possible
result is seamless agreement: all group members agree that the researcher has “got it right.” Alterna-
tively, and more likely, one or more group members may protest some aspects of the researcher’s
representations. Such protest (as well as the researcher’s purposes and perspectives on the contesta-
tion) may be included in the research report and assessed against member quotations and other
thickly descriptive elements in a way that shows that the researcher has taken members’ meanings
seriously, and not just at face value. In Atkinson, Coffey, and Delamont’s words, “informants’
accounts should neither be endorsed nor disregarded: they need to be analyzed” (2003, 194).
The technique and practice of informant feedback/member checks also bear a family resem-
blance to what is called “grounded theory” (Glaser and Strauss 1967). Although disagreements
have developed between its creators over what this technique entails (Atkinson, Coffey, and
Delamont 2003; Creswell 1998b, esp. 148–54),^19 what is germane here is the similarity between
the impulse to “go back” to members and the desire to “ground” theory. Both these techniques—
though very different in detail, in stage of research when they are undertaken, and so forth—
emphasize inductive analysis as a check on researchers’ tendency to forget the gulf between self
and others, a forgetting that can undermine faithful constructions of others’ meanings and can
weaken theory when it overgeneralizes the ideas of its particular human creators, thereby
producing, by implication, “ungrounded” theory. Many methodologically positivist research
practices fail to address this gulf adequately. For example, in survey research, participants are not
allowed to “talk back” to researchers, to articulate the ways in which the wording of a question
fails to capture their concerns, or to articulate an alternative option beyond the limited number
from which they are told to choose.^20 In interpretive research practice, the use of “member checks”
and “informant feedback” is evidence of a methodological commitment to “getting it right” from
the perspective of situational actors’ lived experiences.
“Audit,” the second concept, refers to a set of practices for documenting study procedures,
enabling the researcher to respond to the question, “How exactly did you do this research?”
Lincoln and Guba (1985) used the term in their discussion of techniques for demonstrating the
criteria of dependability (parallel to reliability) and confirmability (parallel to objectivity). (See
Table 5.3.) They give credit for the operationalization of the auditing concept to Halpern (1983),
but it was also part of the language of evaluation research that had been developing in public
policy analysis throughout the 1970s, when evaluations began to be built into the policy process.
Perhaps no other term better illustrates the disciplining force of criteria, because it may be the
anticipation of a critical “audit” of his data and research processes that provokes the researcher to
construct “an audit trail,” that is, a record as complete as possible of the processes and steps he
used to conduct the research. The goal is to make the linkages among researcher decisions, evi-
dence generated, and inferences drawn as transparent as they can be.
The term “audit” has not been much repeated in the criteria literature; its associations with ac-
counting plus the linear notion of a “trail” are a bit at odds with other interpretive language, such as