The Philosophy of Psychology


(Philosophers and logicians should note that Chomsky’s LF is very
different from what they are apt to mean by ‘logical form’. In particular,
sentences of LF do not just contain logical constants and quantifiers,
variables, and dummy names. Rather, they consist of lexical items drawn
from the natural language in question, syntactically structured, but
regimented in such a way that all scope-ambiguities and the like are resolved,
and with pronouns cross-indexed to their binding noun-phrases and so on.
And the lexical items will be semantically interpreted, linked to whatever
structures in the knowledge-base secure their meanings.)
Moreover, the proposal is not that LF is the language of all central
processing (as Mentalese is supposed to be). For, first, much of central
cognition may in any case employ visual or other images, or cognitive
models and maps (Johnson-Laird, 1983). Second, and more importantly,
our proposal is that LF serves only as the intermediary between a number
of quasi-modular central systems, whose internal processes will, at least
partly, take place in some other medium of representation (perhaps pat-
terns of activation in a connectionist network, or algorithms computed
over sentences of Mentalese). This idea will be further elaborated below.
But basically, the thought is that the various central systems may be so set
up as to take natural language representations (of LF) as input, and to
generate such representations as output. This makes it possible for the
output of one central module (mind-reading, say) to be taken as input by
another (the cheater-detection system, for example), hence enabling a
variety of modular systems to co-operate in the solution of a problem, and
to interact in such a way as to generate trains of thinking.
But how can such an hypothesis be even so much as possible? How can a
modular central system interpret and generate natural language
representations, except by first transforming an LF input into a distinct conceptual
representation (of Mentalese, as it might be), then using that to generate a
further conceptual representation as output, which can then be fed to the
language system to build yet another LF sentence? But if that is the story,
then the central module in question does not, itself, utilise the resources of
the language system. And it also becomes hard to see why central modules
could not communicate with one another by exchanging the sentences of
Mentalese which they generate as outputs and take as immediate inputs.
Let us briefly outline an evolutionary answer to the question how LF,
rather than Mentalese, could have come to be the medium of intra-cranial
communication between central modules. (For more extended develop-
ment, see Carruthers, 1998a.)
Suppose that the picture painted by Mithen (1996) of the mind of Homo
erectus and the Neanderthals is broadly correct. Suppose, that is, that their
minds contained a set of more-or-less isolated central modules for dealing


224 Forms of representation
