326 ANALYZING DATA
while on patrol during down times between calls. After each session, we would schedule the next
story collection for two to four weeks later. We would remind the storyteller of the instructions,
emphasizing our interest in stories told and heard within the work setting. We limited the number
of story collections to a maximum three for each storyteller to keep within the six to eight months
of time for fieldwork in each setting.
The workers’ stories, our probes, and additional details added in response to our probes were
then transcribed verbatim and in the sequence of the interaction. Transcription began after each
story collection encounter. We then revised the stories, incorporating the added details from the
responses to probes into the story text to create the fully rendered story. Although the stories
were transformed from oral to written texts, our revisions minimized changes to their spoken structure.
After all the stories were collected for each setting, we shared edited, written versions of their
own stories with individual street-level workers, who were allowed to revise and further edit their
stories. We did not share the stories with the group, just with those who told them. No storyteller
made significant changes, but some did embellish their stories further or qualify them in the
direction of greater caution. A few, especially the teachers, eliminated verbalisms, such as “you
know.” We wanted both to assure the storytellers and to be assured by them that these were their
stories, so that, to the extent possible, our interpretations would be based on their narratives.
These edited versions were the texts we used for analysis.
STORY ANALYSIS
Our analysis of the stories was not as orderly as it may appear in this rendering of our interpretive
engagement with the material, but the research team did follow a set of procedures. We developed
a “Story Cover Page” for each story. This identified the site and storyteller as well as several
common characteristics of each story.^7 We also developed a set of codes to classify text elements
based on our evolving interpretive frames. A text element could be as short as a word or phrase
or as long as a paragraph. We used two types of codes. “Story Codes” referenced narrative storytelling
elements, such as repetition and causal statements. “Thematic Codes” referred to theoretically
relevant constructs, such as decision norms or workplace relational dynamics.^8 Each story was
coded by one member of the research team and then check-coded by another.
This process encouraged a close reading of the texts as the coders had to think about the
thematic relevance of each story element. By forcing us to compare our analytic summaries, the
codes enabled us to ascertain that our interpretations of the meanings of the texts were shared:
They facilitated conversation and highlighted differences of interpretation. We were less con-
cerned with producing precise inter-rater reliability measures than with developing intersubjective
consensus; disagreements were discussed until consensus emerged. Comparing the two codings
provided an opportunity to negotiate shared meanings of texts among the four members of the
research team. When research team members independently coded the same word, phrase, or
passage in the same manner, we could see common interpretations emerging. Disagreements
among coders provided opportunities to challenge each other’s readings and to encourage con-
tinual interpretation.
The codes also provided a form of indexing that enabled us to locate the individual stories and
the places in the stories where specific themes were discussed. We created a database identifying
each theme present in each story. So, for example, when drafting the book chapter entitled “Street-
Level Worker Knows Best” (Maynard-Moody and Musheno 2003, chapter 10), we could identify
all of the stories that spoke to this theme.
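The theme-by-story database works like a simple inverted index: each thematic code points back to the stories in which it appears. A minimal sketch of that lookup follows; the story IDs and theme labels are hypothetical stand-ins for illustration:

```python
# Minimal sketch of the theme-indexing idea: map each thematic code to
# the stories where coders applied it, so that all stories speaking to
# a given theme can be retrieved when drafting a chapter.
# Story IDs and theme labels are hypothetical illustrations.
from collections import defaultdict

def build_theme_index(coded_stories):
    """coded_stories: list of (story_id, list_of_thematic_codes) pairs."""
    index = defaultdict(list)
    for story_id, codes in coded_stories:
        for code in codes:
            index[code].append(story_id)
    return index

coded = [
    ("police-01", ["worker-knows-best", "decision-norms"]),
    ("teacher-04", ["workplace-relations"]),
    ("counselor-02", ["worker-knows-best"]),
]

index = build_theme_index(coded)
# Retrieve every story that speaks to one theme:
print(index["worker-knows-best"])  # ['police-01', 'counselor-02']
```

A lookup like the last line is all that is needed to gather the source material for a theme-based chapter draft.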
As we discussed individual codes and text elements, we quickly learned that in story interpre-