Encyclopedia of Sociology

CONTENT ANALYSIS

Traditional content analysis follows a "top down" strategy, beginning with a theory and hypotheses to be tested, developing reliable coding categories, applying these categories to the coding of specified bodies of text, and finally testing the hypotheses by statistically comparing code indexes across documents.
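
To make the procedure concrete, the following is a minimal Python sketch of one common variant, dictionary-based coding; the category names, indicator words, and per-1,000-words index are illustrative assumptions, not a standard scheme.

```python
from collections import Counter

# Hypothetical dictionary-based coding scheme: each category is defined
# by a set of indicator words. (Assumption for illustration; real coding
# categories are developed and reliability-tested by human coders.)
CATEGORIES = {
    "achievement": {"win", "succeed", "accomplish", "goal"},
    "affiliation": {"friend", "together", "team", "support"},
}

def code_document(text):
    """Count how often each category's indicator words appear in a text."""
    counts = Counter()
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        for category, indicators in CATEGORIES.items():
            if word in indicators:
                counts[category] += 1
    return counts

def code_index(text):
    """Express each category count per 1,000 words, so documents of
    different lengths can be compared; indexes for many documents would
    then be compared with a statistical test, as described above."""
    n_words = max(len(text.split()), 1)
    counts = code_document(text)
    return {cat: 1000 * counts[cat] / n_words for cat in CATEGORIES}

doc_a = "The team worked together toward a shared goal and did succeed."
doc_b = "Friends support friends; winning matters less than being together."
print(code_index(doc_a))
print(code_index(doc_b))
```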


With the increasing popularity of qualitative sociology, content analysis has also come to refer to "grounded," inductive procedures for identifying patterns in various kinds of qualitative data, including text, illustrations, and videos. For example, the data might include observers' detailed notes of children's behaviors under different forms of supervision, possibly supplemented with videotapes of those same behaviors. While traditional content analysis usually enlisted statistical analyses to test hypotheses, many of these researchers do not start with hypotheses but instead search carefully for patterns in their data.
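
As a crude computational stand-in for this kind of inductive pattern search, the sketch below tallies recurring word pairs in invented observer notes; it is illustrative only, since grounded analysis is an interpretive process rather than mere frequency counting.

```python
from collections import Counter

def frequent_bigrams(notes, top_n=5):
    """Tally recurring word pairs (bigrams) across a set of field notes,
    a simple first pass at surfacing candidate patterns for closer reading."""
    counter = Counter()
    for note in notes:
        words = note.lower().split()
        counter.update(zip(words, words[1:]))
    return counter.most_common(top_n)

# Hypothetical observer notes on children's behavior (invented data).
notes = [
    "child shares toy when adult watches",
    "child shares toy without prompting",
    "child grabs toy when adult leaves",
]
print(frequent_bigrams(notes))
```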


However, rather than just produce statistical analyses or search for patterns, investigators should also situate the results of a content analysis in the contexts in which the documents were produced. A content-analysis comparison of letters to stockholders, for example, should take into consideration the particular business sectors covered and the prevailing economic climates in which the letters were written. An analysis of American presidential nomination acceptance speeches should consider that they changed dramatically in form once they began to be broadcast live on national radio. A content analysis may have reliable coding, but the inferences drawn from that coding may have little validity unless the researcher factors in such shaping forces.


Like any expanding domain, content analysis has tended to segment into specialized topics. For example, Roberts (1997) focuses on drawing statistical inferences from text, including Carley's networking strategies and Gottschalk's clinical diagnostic tools. There has also been a stream of instructional books, including several series published by Sage, that focus on particular kinds of qualitative data such as focus-group transcripts. A technical literature has also developed addressing specialized computer software and video-analysis techniques. Nevertheless, the common agenda is analyzing the content of qualitative data. Inasmuch as our lives are shaped by different forms of media, and inasmuch as different analytic procedures can complement one another in uncovering important insights, it makes sense to strive toward integration rather than fragmentation.

Many advances in content-analysis procedures have been made possible by the convenience and power of desktop and laptop computers. In addition, an overwhelming proportion of text documents are now generated on computers, making their text files computer-accessible for content analysis. And a revolution in hand-held analogue and digital video cameras, together with computer-based technology for editing and analyzing videotapes, makes new research procedures feasible.

Consider, for example, new possibilities for analyzing responses to open-ended questions in survey research. For years, survey researchers have been well aware that closed-ended questions require respondents to frame how they think about an issue in terms of a question's multiple choices, even when the choice options have little to do with how a respondent views the issue. But the costs and time involved in analyzing open-ended responses meant that such questions were rarely used. Even when they were included in a survey, the interviewers usually just recorded capsule summaries of the responses, omitting most nuances of what was said.

Contrast this, then, with survey research using today's audio information-capturing technologies. Telephone survey interviewers are guided by instructions appearing on a computer screen. Whenever an open-ended question appears, the interviewer no longer needs to type short summaries of the responses. Instead, a computer digitally captures an audio recording of each open-ended response, labels it, and files it as a computer record. Any audio response can later be easily fetched and replayed, allowing a researcher, for example, to identify a "leaky voice," that is, one indicative of the respondent's underlying emotion or attitude. And the full audio responses are then available to be transcribed to text, including, if desired, notations indicating hesitations and voice inflections.
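
The capture-label-file step can be pictured with a short Python sketch; the record fields, file names, and newline-delimited JSON index below are assumptions for illustration, not the design of any actual interviewing system.

```python
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

# Hypothetical record for one open-ended audio response (field names
# are assumptions, not those of any particular survey package).
@dataclass
class AudioResponse:
    respondent_id: str
    question_id: str
    audio_path: str       # where the digitized recording was written
    recorded_at: float    # Unix timestamp
    transcript: str = ""  # filled in later by a (manual) transcriber

def file_response(record: AudioResponse, index_path: Path) -> None:
    """Append the labeled record to a JSONL index, so any response can
    later be fetched by respondent and question ID and replayed."""
    with index_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

file_response(
    AudioResponse("R0042", "Q7-open", "audio/R0042_Q7.wav", time.time()),
    Path("responses.jsonl"),
)
```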
Until computer voice recognition is completely
reliable, transcribing usually remains a manual
task. But with spreadsheet software (such as Excel)
no longer restrictively limiting the amount of text