ment to causal systematicity seems rather to derive from our folk-belief in
systematic inference.
While we doubt whether there is anything in folk psychology which
commits us to believing that occurrent propositional thoughts are tokened
in natural language sentences, the folk do at least find this idea quite
natural. Whether this is because, in reporting thoughts, we slip easily
between the use of a that-clause (‘Mary thought that it was about to
break’) and the use of indirect speech (‘Mary thought, ‘‘It is about to
break’’’); or whether, rather, our use of indirect speech reXects our belief in
the role of natural language in thinking, is moot. In fact we are inclined to
suspect the latter, since ‘inner speech’ is a familiar – indeed ubiquitous –
introspectible phenomenon (Hurlburt, 1990, 1993), and since the patterns
in inner speech seem to mirror so closely the inferential and causal roles
distinctive of thought. (We return to this issue in sections 3.4 and 3.5
below.)
Does all this mean that connectionism is a threat to folk-psychological
beliefs about the mind? Should the modest successes so far enjoyed by
connectionist models lead us to wonder whether our folk-beliefs about the
mind might be radically mistaken? Do those successes even provide us,
perhaps, with sufficient reason to think that those beliefs probably are
mistaken? Here we are inclined to take a tough line. Since we know that
human inference involves tokenings of discrete states, and since we know
that many of these states share common conceptual components, it is a
constraint on any adequate connectionist model of human inference that it
should be able to replicate these facts. If connectionist networks are
incapable of generating systematic relations amongst their states – except
by accident – of the sort displayed above, then so much the worse for
connectionism, we say.
2.4 Connectionism and systematicity
Some connectionists seem, at least tacitly, to accept the point just made,
and devote their energies to showing that connectionist systems can exhibit
systematicity (Smolensky, 1988, 1991, 1995). For example, it is often
noticed that distributed connectionist systems which have been trained-up
on some domain (concerning hot drinks and drink-containers, say) will
show certain complex patterns in their outputs corresponding to certain
components of their inputs. These activation-vectors will generally cross-
cut traditional conceptual boundaries – with vectors corresponding to
mug-containing-coffee or mug-containing-tea-with-milk (between which
there would be no common element corresponding to mug), rather than
what we think of as the more usual atomic concepts of mug or coffee. But