How it works: in theory

Measuring What a Nation of 330 Million Thinks: A Checklist
A Random Sample: Were the people who participated in the survey selected randomly, such that any member of the population had an equal chance of being selected?
Sample Size: How many people do researchers need to survey to know what 330 million Americans think? Major national surveys usually use a sample of 1,000–2,000 respondents.
Sampling Error: For a population of any size (even 330 million), 95 percent of the time a survey of 1,000 randomly selected respondents will measure the average opinion in the population within about 3 percentage points. Reputable surveys will usually report the sampling error (sometimes also called the “margin of error”).
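The sampling-error claim can be checked with the standard formula for the 95 percent margin of error on a proportion. The sketch below is ours, not part of the checklist; it uses the textbook worst case of an evenly split population (p = 0.5) and assumes a simple random sample.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a simple
    random sample of size n. Worst case is p = 0.5. Note that the
    formula depends on the sample size, not the population size."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 pins down opinion to about +/- 3 points,
# whether the population is 330 million or 33,000.
print(round(100 * margin_of_error(1000), 1))  # ~3.1 percentage points
print(round(100 * margin_of_error(2000), 1))  # ~2.2 percentage points
```

This is why the checklist's sample sizes of 1,000–2,000 are enough: the margin of error shrinks with the square root of the sample size, so quadrupling the sample only halves the error, and population size barely matters.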
Question Wording: Did the way the question was worded influence the results? Scientific surveys try to phrase questions in a neutral way, but even in reputable polls, differences in question wording can influence answers.
Reliable Respondents: Respondents often give socially acceptable answers rather than truthful ones, or invent opinions on the spot. Is there a reason to think people may not have answered a survey honestly and thoughtfully?
How it works: in practice

Surveying the 2016 Elections: Two Approaches

To illustrate how these requirements for good survey design play out in real life, we examine how two well-known polling organizations, the Pew Research Center and Rasmussen Reports, designed polls to measure presidential preference during the 2016 campaign.
Rasmussen Reports

Land-line only.
A random sample: Use a land-line-only sample.
Sample size: Survey the first person who answers the phone.

Adjust results.
Sampling error: If there are more Democrats in your sample than you expect, adjust your results to reduce support for Clinton.

Use a script to call.
Reliable respondents: Use a computer script, and only call once. Do the survey in a short time period (hours).

Pew Research Center

Include cell phones.
A random sample: Generate a random sample of Americans, including people who only have cell phones.
Sample size: When you call a residence, talk only to the person who's in your sample.

Don't adjust results.
Sampling error: If the partisan divide in the sample is not what you expect, treat it as a finding: people may be changing their party ID.

Use a person to call.
Reliable respondents: Use a live interviewer, and call back people you miss the first time. Implement the survey over several days.

Outcome

Pew is more reliable.
Pew's results were generally regarded as more reliable than Rasmussen's, and they ultimately made more accurate predictions about the 2016 results. Furthermore, Rasmussen kept its data largely private, while Pew released its questionnaire. This made Pew's survey verifiable and provided an opportunity for further analysis.

Rasmussen data are incomplete . . .
A random sample: Rasmussen misses people who only use cell phones (many of whom are younger and are Democrats).
Sample size: Rasmussen's approach is biased toward people likely to answer the phone. These people tend to be older and include more men than average.

. . . and assume nothing has changed.
Sampling error: Adjusting for partisanship assumes that no changes in party ID have occurred since the last election.
Reliable respondents: Rasmussen's tactics lower the response rate and make poll results sensitive to short-term events.
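Rasmussen's adjustment step amounts to weighting respondents so the sample's partisan mix matches an assumed benchmark, such as party shares from the last election. The sketch below is a simplified illustration of that kind of weighting; the party shares and candidate choices are invented for this example, and real pollsters weight on many variables at once.

```python
# Hypothetical raw sample of 1,000: each respondent's party ID
# and candidate choice (all numbers invented for illustration).
sample = (
    [("D", "Clinton")] * 420 + [("D", "Trump")] * 30 +
    [("R", "Clinton")] * 40 + [("R", "Trump")] * 360 +
    [("I", "Clinton")] * 80 + [("I", "Trump")] * 70
)

# Assumed "true" party shares, e.g. taken from the last election.
# If party ID has actually shifted since then, these targets are wrong,
# and the adjustment erases a real finding.
target = {"D": 0.36, "R": 0.33, "I": 0.31}

# Weight each respondent by (target share) / (observed share) for their party.
counts = {p: sum(1 for party, _ in sample if party == p) for p in target}
weights = {p: target[p] / (counts[p] / len(sample)) for p in target}

def support(candidate, weighted):
    """Percentage supporting a candidate, with or without party weights."""
    total = sum(weights[p] if weighted else 1 for p, _ in sample)
    votes = sum((weights[p] if weighted else 1)
                for p, c in sample if c == candidate)
    return 100 * votes / total

print(f"Clinton, unweighted: {support('Clinton', False):.1f}%")  # 54.0%
print(f"Clinton, weighted:   {support('Clinton', True):.1f}%")   # 53.4%
```

Because the sample contains more Democrats (45 percent) than the target (36 percent), weighting pulls Clinton's support down, exactly the behavior the Outcome column criticizes: the adjustment is only as good as the assumption that partisanship has not changed.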
Critical Thinking

- When evaluating these two polling organizations, how would you respond to someone who said he still preferred Rasmussen's polls because they correctly predicted the outcome of a Senate race in his state in 2014?
- Which kinds of respondents are likely to be put off by polls that use computer scripts? How might this affect the findings of the survey?