mini-lecture, and the last sentence indicated that the participant inferred from the
given outline. This small piece was therefore segmented into two nodes, one coded
as summarizing and the other as inferring from the task. Throughout the coding
process, I found that splitting segments helps avoid double or mixed coding, but an
obvious drawback of this technique lies in how finely the protocols can be divided.
If they were split into propositions of only two or three words, those segments
would hardly give sufficient clues about the context. In the end, I kept a clause
containing a complete idea as the basic unit for segmenting the protocols.
I feel that the whole passage talks about information,
so I feel like filling in information. It’s a feeling.
And the following is also information, so I just put it here.
c. combine codes. On the other hand, a few participants talked quite a lot about
each blank in the task, yet a long paragraph of words might convey just one idea,
so these separate codes were finally combined into one. For example, the
following paragraph was at first segmented into several nodes, but on second
thought these nodes were merged into a single node, “parsing the task”.
So “and” actually connects two words that might not both modify “thinking”. And they are
two different parts. Critical in thinking. Um... God. Reflecting, keep reflecting, acting
critically, oh, I modify it. Modify what? It’s strange. What is it? Critical, critical in thinking,
critical in thinking (rising tone). Is there any parallel structure to it? Oh, God. Reflecting,
reflect... critical in thinking (pause 5 s).
d. reliability. To ensure the reliability of the coding, both intra-coder and
inter-coder agreement were calculated. Young (1997) suggested two formulae for
computing the intra-coder and inter-coder reliability coefficients:
Intra-coder reliability coefficient = (number of items coded the same in the first
and second coding) / (number of items coded in the first coding)

Inter-coder reliability coefficient = (number of items coded the same by the
researcher and the external coder) / (number of items coded by the researcher)
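Both of Young’s (1997) formulae reduce to the same computation: the proportion of segments that receive the same code in two codings of the same data. The sketch below illustrates this with a small Python function; the code labels and the eight-segment example are hypothetical, not taken from the study’s data.

```python
def reliability_coefficient(coding_a, coding_b):
    """Proportion of segments assigned the same code in two codings.

    coding_a is the reference coding (the first coding for intra-coder
    reliability, or the researcher's coding for inter-coder reliability);
    coding_b is the second coding of the same segments.
    """
    if len(coding_a) != len(coding_b):
        raise ValueError("both codings must cover the same segments")
    same = sum(1 for a, b in zip(coding_a, coding_b) if a == b)
    return same / len(coding_a)

# Hypothetical example: eight protocol segments coded twice by the
# same researcher; the two codings disagree on one segment.
first_coding = ["summarizing", "inferring", "parsing", "parsing",
                "filling", "reflecting", "parsing", "inferring"]
second_coding = ["summarizing", "inferring", "parsing", "monitoring",
                 "filling", "reflecting", "parsing", "inferring"]
intra = reliability_coefficient(first_coding, second_coding)
print(round(intra, 3))  # 7 of 8 segments agree -> 0.875
```

The same function computes the inter-coder coefficient when the second list holds the external coder’s labels instead of the researcher’s second pass.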
The researcher coded all the TAP data twice, in two separate months, and then
calculated the intra-coder reliability; the intra-coder reliability coefficient is 0.846.
One of the researcher’s PhD colleagues was invited to code a portion of the data
with the finalized coding scheme to check whether two independent researchers
interpreted the data differently; the inter-coder reliability coefficient is 0.895.