EVALUATIVE CRITERIA AND EPISTEMIC COMMUNITIES
findings were valid and procedures robust. That situation is changing, but slowly.... And
when we read the research reports, they are most often heavy on the “what” (the findings,
the descriptions) and rather thin on the “how” (how you got to the “what”). We rarely see
data displays—only the conclusions.... Researchers are not being cryptic or obtuse. It’s
just that they have a slim tradition to guide their analytic moves, and few guidelines for
explaining to their colleagues what they did, and how.
—Miles and Huberman (1994, 262, emphasis added)
There is a “deep structural tension” (Bensel 2003, 103) within epistemic communities, one reflected in the literature on evaluation criteria. Evaluative standards aid in the production and judgment
of research quality, but they may also constrain innovation. Some methodologists, such as Miles and Huberman, emphasize the enabling side of criteria and their potential for improving scholarly production, communication, and judgment. Others, such as Wolcott (1990), eschew the discussion of criteria altogether, characterizing it as a distraction that neither guides nor informs research (cf. Schwandt 2001). Most scholars working on this issue seem to accept the pragmatic necessity of criteria, but they are suspicious of approaches that take positivist-informed standards as the operative template. Instead, they seek to develop new criteria consistent with interpretive presuppositions. This latter perspective has led, within the last decade or so, to a flourishing of terminology and classification schemes as scholars seek to develop new
visions of what it means to do and evaluate scholarly research in an interpretive mode. Table
5.2 displays this variety as developed in the work of methodologists following the two classics.
The works presented in this table were selected because of the prominence of the author(s), as judged by extensive citations of their work in the criteria literature, and, as important, because these texts demonstrate the evolution of this literature over time. Note, first, the variability in the number of terms produced by each of the studies summarized here: Lincoln and Guba (1985), used here as a baseline, introduce four interpretive criteria; Eisner (1991) argues for three; Maxwell (1992) describes five; Lather (1993) and Riessman (1993), four each; Lincoln (1995), eight; and Brower, Abolafia, and Carr (2000), three. Also notable are the changes
in terminology across time, particularly the change in Lincoln’s perspective: In 1985 (coauthoring with Guba) she promoted the use of four criteria that parallel positivist terms, whereas by 1995 she was arguing for eight different criteria as more appropriate to interpretive presuppositions.
Part of what emerges in this literature in the early 1990s (something that Miles and Huberman
[1994] pick up on with their inclusion of the “utilization/application/action” criterion set) is the
concern that research should be used not only to solve problems identified by privileged, elite
decision makers, but also to challenge inequalities in society (such as Lincoln’s “sharing of
privileges” and “voice” criteria and Brower, Abolafia, and Carr’s “criticality” criterion, noted
in Table 5.2). The addition of this sort of rationale has not been uncontroversial. For example,
Atkinson, Coffey, and Delamont (2003, 156) argue that this development “conflates” judging
the scientific adequacy of a research project with political, ethical, and other kinds of concerns.
Such disagreement is likely to continue. Judgments of scientific inclusion (“This study is ‘good’ research”) and exclusion (“This research is ‘junk’ science”) are made through the application of evaluative criteria, but decisions about inclusion and exclusion are themselves political: judging that political and ethical concerns are best excluded as irrelevant to scientific adequacy is itself a political decision, making the line between “political” and “nonpolitical” criteria difficult to draw.