Clearly, the problem of presenting research results comprehensibly and accurately does not always stem from journalistic flaws alone. It is no secret that researchers often write
in ways incomprehensible to outsiders. It is also no secret that researchers are sometimes
simply poor writers. As Bruner (1942) wrote, with tongue only partially in her cheek,
“I have even succumbed to a conviction that authors are engaged wilfully and with malice
in suppressing every vestige of spontaneity and emphasis in what they are writing” (p. 53),
including “the tortured circumlocutions of the passive voice” (p. 55). Ferreting through
turgid prose undoubtedly contributes to many journalistic missteps.
Questioning the Conclusions
In one example, several newspapers reported that African-Americans received certain
heart-related treatments only 60% as often as White men. In reality, referrals for Black men did not differ from those for White men, and Black women were referred 87% as often as White men. The problem was that journalists misinterpreted a technical term and
misunderstood the research results. As a result, The New York Times, The Washington Post,
and USA Today all misreported the results (Greenstein, 1999).
Because scientific and journalistic writing have different goals, readers need to be aware that the issues important to scientists differ from those important to journalists. Journalists look for captivating stories and are probably less interested in the caveats that researchers think are important. This type of material is useful for students who are learning to write their results in either technical or nontechnical formats, the demands of which differ.
As an example, reporter Jim Dyer wrote about the so-called “Monster Study” in which
a researcher conditioned children to stutter, some of them experiencing lifelong distress
because of it (Dyer, 2001). Although it was a captivating, if horrific, story, researchers sub-
sequently called into question the claims that Dyer made (Ambrose & Yairi, 2002). For
instance, one woman who had participated in the study as a child asserted that her life was
ruined because of her stuttering. It appears, however, that she did not stutter for the six decades following the study, beginning to do so only when she met her husband (Owen, 2003b) or when he died (Owen, 2003a), depending on the account.
There was further misinformation in The Village Voice (Collins, 2006), which implied that the researchers had unsuccessfully attempted to reverse the stuttering they had induced. The actual data from the study indicated no increase in stuttering in the groups
that were supposedly conditioned to stutter (Ambrose & Yairi, 2002).
A number of legitimate journalistic sources picked up the story. Unfortunately, the more scholarly research published in a professional journal attracted little attention. A juicy contro-
versy is always better copy than a sober counterargument. It would behoove students to
learn that news reports about research are always simpler than the actual research and that
it is not wise, particularly with controversial research, to take a news report at face value.
Questioning the Data
Sometimes one can question not only the conclusions that appear in popular sources but
also the data that writers adduce to support their arguments. Best (2001, 2004) has