as the denominator in a count-based attendance rate while simultaneously using the
number of self-identifying Catholics as the denominator in a survey-based attendance
rate. Using the number of registered Catholics as the denominator in calculating a
count-based attendance rate would indeed generate a higher rate, but it would not
reduce the gap between self-reported and actual attendance rates when both are based
on the same denominator.
A second criticism accepts the fact that survey-based church attendance rates are
inflated but argues that much of this inflation can be attributed to problems in sur-
vey techniques rather than to an overreporting of religious activities on the part of
survey respondents (Woodberry 1998). Church attenders are oversampled by most sur-
veys, this argument goes, because churchgoers are generally easier to contact and are
more cooperative respondents, and they are particularly overrepresented in telephone
surveys that do not make many repeat telephone calls in an effort to reach people who
do not respond to the first few attempts at telephone contact. If this is true, survey-based
attendance rates will be artificially high, but they will be high because churchgoers are
overrepresented among respondents to surveys, not because people overreport their
attendance.
The main problem with this criticism is that the count-based attendance rates ob-
served by Hadaway et al. were well below rates generated by all conventional survey
techniques, including surveys using face-to-face interviews and multiple callbacks. It is
therefore not plausible to argue that sampling bias has produced a large portion of the
gap between count-based and survey-based attendance estimates.
A third criticism comes from using checks internal to conventional surveys to assess
the reliability of self-reported attendance (Hout and Greeley 1998). When, for example,
wives’ reports about their husbands’ church attendance are compared to what husbands
say about themselves, the numbers are nearly identical. This similarity, the argument
goes, suggests that people accurately report the frequency of their own attendance at
religious services. Another kind of reliability check offered by these critics is to exam-
ine the attendance rates of people thought to be unlikely to exaggerate their church
attendance. According to Hout and Greeley (1998), two such groups of people are intel-
lectuals and members of “skeptical” professions, such as scientists and artists. The logic
here is that such people are unlikely to exaggerate their church attendance because
frequently attending religious services would not be considered desirable within their
occupational reference group. Since individuals in these two categories are not likely
to overreport their attendance, the argument goes, their reports can be considered true
measures of church attendance. And since the self-reported attendance of people in
these categories is not much less than the self-reported attendance of everyone else,
this comparison, like the first comparison, is taken to mean that there is very little
overreporting of church attendance in surveys.
These comparisons are not persuasive checks on the reliability or validity of self-
reported attendance. Regarding the first internal check, it is not at all surprising that
wives’ reports of their husbands’ behavior are consistent with husbands’ reports of their
own behaviors. The likely reason for this is that whatever dynamics govern self-reported
attendance also govern how someone reports a spouse’s attendance. It is not evidence
that contradicts the presence of a large gap between self-reported and actual atten-
dance rates. The second internal check is even less convincing. The assumption that
intellectuals and skeptical professionals will be less likely than others to overreport