“thick description” and the notion of “immersion” in the field or the data, which imply more synergistic understandings of the research process. Although the term itself has not spread widely, Lincoln and Guba’s specific terminology and descriptions codify practices that have been used by
researchers in various types of field studies—although perhaps not quite in the ways envisioned by
Lincoln and Guba (1985), who describe a full-blown, formal process complete with a written agreement with an outside, paid auditor whose “letter of attestation” assesses a study’s dependability and
confirmability. Instead, the peer review process typically fulfills the auditor role, and the researcher
is expected to provide a detailed record of what has been done in the research process.^21 At least in
the ethnographic tradition (Altheide and Johnson 1994), such record keeping has long been a taken-for-granted part of research and writing practices. Reviewers expect to find, as Brower, Abolafia,
and Carr put it, a “detailed description of procedures” (2000, 391). For document analysis this
expectation has typically been met by providing basic information on selection and authenticity of
documents; for interviews, researchers provide information on who was selected, how long and
where they were interviewed, interviewing and transcription techniques, the total number of interviews, and other relevant details. The expected background information for participant-observation
is yet more extensive, including site selection, length of time in the field, events observed, researcher’s
role, and so on (see Altheide and Johnson 1994, 491).^22
Description of the research process in anticipation of a peer review “audit” might be better
thought of as meeting a criterion of “transparency” than of reliability/dependability and objectivity/confirmability. This is so because it is now widely recognized in the criteria literature that these latter
terms import too much of a positivist conception of research into interpretive criteria. In attempting to parse the concept of reliability, Lincoln and Guba (1985) argued that repeatability of results, a common definition, depends “upon an assumption of naive realism. There must be
something tangible and unchanging ‘out there’ that can serve as a bench mark if the idea of
replication is to make sense” (1985, 299). They went on, however, to concede the possibility that
human investigators can be careless, and for this reason they suggested “dependability” as a more
useful term to identify that problem. Such skepticism of the appropriateness of “reliability” as a
standard has continued and grown in interpretive research. Hollway and Jefferson, for example,
argue that standard definitions of this term—consistency, stability, repeatability of results—assume
that “meanings can be controlled and made identical in successive applications of a question,”
making reliability an “invalid criterion” from their interpretive perspective on interviewing (2000,
79). Notably, they do not mention the term “dependability” as an alternative; it does not seem to
have caught on in the literature as a replacement for “reliability” or as a way to refer to researcher
carefulness. “Confirmability” has also not enjoyed widespread usage; rather, the notion of objectivity has received intensive and extensive criticism (see, e.g., Harding 1993). What interpretive
researchers emphasize in place of objectivity is reflexivity, which can be assessed by a variety of
techniques, including the reflexive journal, reflexive writing practices, member checks, descriptions of research methods, and attention to the researcher role in the generation of evidence,
among others.
To sum up discussion of the audit concept, interpretive researchers are committed to, and have
written about, techniques for recording and making available their research processes for others’
inspection, for addressing the question, “How exactly did you do this research?” Although “audit” is used in the criteria literature and research methods texts to some extent, its use is not ubiquitous but, instead, represents one of a plethora of practices that enact the goal of “transparency.”
Because the word “transparency” evokes expectations that researchers willingly lay bare their
processes for inspection and critique—the sine qua non of peer review—it might be a better term
to encompass the reasons interpretive researchers document their research processes. But it is not