As just described, the processing model may sound as though it is totally sequential. But it need not be. A level
of structure need not be completed by its integrative processor in order for the next interface processor to start
passing information up or down the line. Rather, any fragment of structure at one level is sufficient to call into action
(or activate) any processors that can make use of it. That is, this architecture permits radically “opportunistic” or
“incremental” processing (to use Marslen-Wilson and Tyler's (1987) and Levelt's (1989) terms, respectively).
On the other hand, the processor's “opportunism” is not chaotic. It is constrained at any particular moment in time by
what structure is already available, and by how that structure can affect other structures through the rules of the grammar.
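The flavor of this fragment-driven control can be made concrete with a toy sketch. The code below (in Python) is only an illustration, not a claim about the actual mechanism: the structure labels and the miniature "grammar" are invented placeholders. Each processor fires as soon as a fragment it can use appears on a shared blackboard, and what it may add is limited to what the rules license given the structure already present.

```python
# Toy sketch of "opportunistic" processing: each processor runs as soon as
# a fragment it can use appears, rather than waiting for a level to finish.
# All structure names and rules here are invented for illustration.

# A miniature "grammar": which fragments at the next level a given fragment
# can license.
RULES = {
    ("phonology", "syllable: pa"):   [("phonology", "word: parent?")],
    ("phonology", "word: parent?"):  [("syntax",    "NP: a parent?")],
    ("syntax",    "NP: a parent?"):  [("semantics", "PARENT?")],
}

def run(blackboard):
    """Propagate fragments until no rule can add anything new."""
    changed = True
    while changed:
        changed = False
        for fragment in list(blackboard):
            for new in RULES.get(fragment, []):
                if new not in blackboard:   # constrained by what is already available
                    print(f"{fragment[0]} fragment {fragment[1]!r} activates {new[0]}: {new[1]!r}")
                    blackboard.add(new)
                    changed = True

# A single syllable is enough to set syntax and semantics working,
# even though the phonological word is not yet complete.
run({("phonology", "syllable: pa")})
```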
The essential role of the grammar becomes more evident when we add intercomponential feedback to the story. Let us
work through a typical example. As already noted, the auditory interface delivers a phonetic input bereft of word
boundaries (except possibly where there are pauses in the signal). Word boundaries in general must be constructed by
the integrative processor for phonology, in part by using lexical look-up. However, consider sentences (2a, b), which
are acoustically indistinguishable (at least in my dialect) up to the point marked *. The difference in word boundaries
cannot be settled phonologically: the distinction between (2a) and (2b) depends on the semantics as well as the syntax
of the phrase following *. (And there is no sense of “garden path” here.)
(2) a. It's only a PARent, not * a TEACHer.
b. It's only apPARent, not * REAL.
Hence the phonology processor must potentially entertain alternative structures, both of which get passed on by the
interface processors successively to syntax and semantics. When the semantics processor resolves the ambiguity, thus
“clamping” the intended meaning, it cannot by itself reject the incorrect phonological structure.
Rather, the interface processors must pass down the inhibition of the rejected structure in succession to syntax and
phonology. In short, what words are heard can indeed be affected by semantics—but only through the relation of
semantics to phonology via the interface rules.
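The control flow of this feedback loop can likewise be sketched in code. The sketch below is purely illustrative (the representations and the disambiguation test are invented): phonology entertains both segmentations, each is passed up through the interfaces, semantics clamps one reading once the following phrase is available, and the rejection travels back down level by level rather than acting on phonology directly.

```python
# Toy sketch of top-down feedback for (2a, b): semantics cannot reach into
# phonology directly; acceptance and rejection travel level by level
# through the interfaces. All names here are illustrative placeholders.

CANDIDATES = [
    # (phonological segmentation, syntactic structure, meaning)
    ("a + parent", "[NP a [N parent]]", "PARENT"),
    ("apparent",   "[AP apparent]",     "SEEMING"),
]

def semantics_clamps(meaning, continuation):
    """Keep the reading compatible with the phrase following *."""
    if continuation == "not a TEACHer":
        return meaning == "PARENT"    # contrast with another noun
    if continuation == "not REAL":
        return meaning == "SEEMING"   # contrast with an adjective
    return True                       # no disambiguating context yet

def process(continuation):
    # Bottom-up: both candidates are passed through the interfaces.
    kept = [c for c in CANDIDATES if semantics_clamps(c[2], continuation)]
    rejected = [c for c in CANDIDATES if c not in kept]
    # Top-down: the inhibition is handed back through syntax to phonology.
    for phon, syn, sem in rejected:
        print(f"semantics rejects {sem!r} -> syntax drops {syn!r} -> phonology drops {phon!r}")
    for phon, syn, sem in kept:
        print(f"heard as {phon!r} (meaning {sem!r})")

process("not a TEACHer")   # (2a): heard as "a + parent"
process("not REAL")        # (2b): heard as "apparent"
```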
We see a similar effect in an experiment by Tanenhaus et al. (1995), which shows that visual input interacts with
syntactic parsing. Subjects are confronted with an array of objects and an instruction like (3), and their eye movements
over the array are tracked.
(3) Put the apple on * the towel in the cup.