mistake and would like you to turn around when possible. None of these, however, can plausibly be said to understand what they say.


Early attempts to teach machines language used the GOFAI approach, trying to program computers with the right rules. But natural languages are notoriously resistant to being captured by rules of any kind. Such rules as there are always have exceptions, words have numerous different meanings, and many sentences are ambiguous. A machine programmed to parse a sentence, construct a tree of possible meanings, and choose the most likely may completely fail on sentences that you and I have no trouble understanding. Pinker (1994, p. 209) gives some examples:


Ingres enjoyed painting his models in the nude.
My son has grown another foot.
Visiting relatives can be boring.
I saw the man with the binoculars.

The most famous example was encountered by an early computer parser in the 1960s. The computer came up with no fewer than five possible meanings for the well-known saying ‘Time flies like an arrow’, giving rise to the aphorism ‘Time flies like an arrow; fruit flies like a banana’.
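
To see why such sentences defeat rule-based parsing, here is a minimal sketch in Python (my own toy grammar and lexicon, not the 1960s program): a few deliberately overlapping rules are enough for one string to derive several trees.

import itertools

GRAMMAR = {                      # nonterminal -> possible right-hand sides
    "S":  [("NP", "VP"), ("VP",)],
    "NP": [("N",), ("N", "N"), ("Det", "N")],
    "VP": [("V", "PP"), ("V", "NP"), ("V", "NP", "PP")],
    "PP": [("P", "NP")],
}
LEXICON = {                      # word -> possible parts of speech
    "time": {"N", "V"}, "flies": {"N", "V"}, "like": {"V", "P"},
    "an": {"Det"}, "arrow": {"N"},
}

def splits(words, n):
    """All ways to cut `words` into n non-empty contiguous parts."""
    if n == 1:
        yield (words,)
        return
    for i in range(1, len(words) - n + 2):
        for rest in splits(words[i:], n - 1):
            yield (words[:i],) + rest

def parses(symbol, words):
    """Every tree deriving `words` from `symbol` (exhaustive search)."""
    trees = []
    if len(words) == 1 and symbol in LEXICON.get(words[0], set()):
        trees.append((symbol, words[0]))
    for rhs in GRAMMAR.get(symbol, []):
        for parts in splits(words, len(rhs)):
            subtrees = [parses(s, p) for s, p in zip(rhs, parts)]
            for combo in itertools.product(*subtrees):
                trees.append((symbol,) + combo)
    return trees

sentence = tuple("time flies like an arrow".split())
for tree in parses("S", sentence):
    print(tree)

With this tiny grammar the sentence already parses three ways (time moving swiftly, insects called ‘time flies’ enjoying an arrow, and an imperative to time the flies); the 1960s system, working with a far larger rule set, found five.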


Machines analysing language this way remained like Searle inside his Chinese Room, shuffling symbols back and forth. The advent of neural nets and connectionism improved the prospects. For example, early neural nets learned relatively easily how to pronounce written sentences correctly without being programmed to do so, even though the correct pronunciation of a word often depends on the context. Even so, they could not be said to speak or understand true language.
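
Networks of the kind described (NETtalk is the best-known example) map a letter plus its neighbours to a phoneme. Below is a minimal sketch of that idea, with invented toy data and a single linear layer rather than the original architecture: it learns from a letter’s neighbours that ‘c’ is hard in ‘cat’ but soft in ‘city’, then generalises to unseen words.

import numpy as np

# toy corpus (my assumption): each word starts with 'c' and its sound
DATA = [("cat", "k"), ("cot", "k"), ("cup", "k"), ("cold", "k"),
        ("cent", "s"), ("city", "s"), ("cycle", "s"), ("cell", "s")]
ALPHABET = "abcdefghijklmnopqrstuvwxyz_"      # '_' pads word boundaries
PHONEMES = ["k", "s"]

def window(word, i=0):
    """One-hot encode the letter at position i plus its two neighbours."""
    padded = "_" + word + "_"
    vec = np.zeros(3 * len(ALPHABET))
    for slot, ch in enumerate(padded[i:i + 3]):
        vec[slot * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

X = np.array([window(w) for w, _ in DATA])
y = np.array([PHONEMES.index(p) for _, p in DATA])

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (X.shape[1], len(PHONEMES)))
for _ in range(500):                          # plain gradient descent
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0        # d(cross-entropy)/d(logits)
    W -= 0.5 * X.T @ probs / len(y)

for word in ["cider", "cube"]:                # words it has never seen
    scores = window(word) @ W
    print(word, "-> /%s/" % PHONEMES[scores.argmax()])

The point is the one in the text: no rule such as ‘c before i sounds like /s/’ is programmed in; the context-dependent regularity is extracted from examples.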


A real shift occurred with an approach that is closer to evolutionary theory and memetics. One of the fundamental principles in memetics is that when organisms can imitate each other, a new evolutionary process begins. Memes are transmitted by copying from person to person, compete to be copied and selected, and thereby evolve. This suggests the perhaps surprising implication that once imitation occurs (whether in human or non-human animals or human-made meme machines), language may spontaneously appear through the competition between sounds to be copied (Blackmore, 1999).
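
As a rough illustration of that claim (my own construction, not a model from Blackmore, 1999), the sketch below contains nothing but imitators: agents copy one another’s sounds with occasional errors, and an assumed bias making some sounds easier to reproduce lets a few variants crowd out the rest.

import random
from collections import Counter

random.seed(1)
PHONES = "aeioukst"

def mutate(sound):
    """Imperfect imitation: occasionally garble one character."""
    if random.random() < 0.1:
        i = random.randrange(len(sound))
        sound = sound[:i] + random.choice(PHONES) + sound[i + 1:]
    return sound

def copyability(sound):
    """Assumption: repeated characters make a sound easier to imitate."""
    return sum(a == b for a, b in zip(sound, sound[1:]))

agents = ["".join(random.choices(PHONES, k=4)) for _ in range(100)]
for _ in range(5000):
    # a listener overhears three speakers and imitates the catchiest
    trio = random.sample(range(len(agents)), 3)
    best = max(trio, key=lambda j: copyability(agents[j]))
    agents[random.randrange(len(agents))] = mutate(agents[best])

print(Counter(agents).most_common(3))   # a few dominant variants remain

Nothing here is a language, but the ingredients the argument turns on (copying, variation, and competition to be copied) are all present, and structure emerges with no designer.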


There is evidence from both computer simulations and studies of robots to support this. For example, Luc Steels (2000), a computer scientist at the Free University of Brussels, has built the ‘talking heads’: robots that can make sounds, detect each other’s sounds, and imitate them. They have simple vision and categorisation systems, and can track each other’s gaze while looking at scenes including coloured shapes and objects. By imitating each other when looking at the same thing, they develop a lexicon of sounds that refer to the shapes they are looking at, although a listening human may or may not understand their words.
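
The core of such experiments can be captured in a ‘naming game’, which the sketch below implements in miniature (the objects, parameters, and invented words are assumptions; this is not Steels’s robot code): a speaker names whatever both agents are attending to, the hearer adopts or confirms the name, and a shared vocabulary settles out.

import random

random.seed(0)
OBJECTS = ["red_circle", "blue_square", "green_triangle"]
agents = [dict() for _ in range(10)]      # each agent: object -> set of names

def invent():
    """Coin a random two-syllable sound."""
    return "".join(random.choice("bdgkmpt") + random.choice("aeiou")
                   for _ in range(2))

for _ in range(5000):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)          # joint attention: same scene
    names = speaker.setdefault(obj, set())
    if not names:
        names.add(invent())               # no word for it yet: make one up
    word = random.choice(sorted(names))
    if word in hearer.get(obj, set()):
        speaker[obj] = {word}             # success: both drop rival names
        hearer[obj] = {word}
    else:
        hearer.setdefault(obj, set()).add(word)   # failure: hearer learns it

for obj in OBJECTS:
    print(obj, sorted(agents[0].get(obj, set())), sorted(agents[1].get(obj, set())))

After enough rounds each object typically carries a single name across the whole population: a lexicon negotiated by imitation alone, which is why a human listener has no reason to recognise the words.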


Developing grammar proved harder, but a breakthrough occurred when Steels
realised that the speaker could apply its language comprehension system to its
own utterances, either before speaking them or after a failure in communication.
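
A minimal sketch of that re-entrance idea (my illustration, with a made-up three-word lexicon, not Steels’s implementation): the speaker runs every candidate word order through its own comprehension routine and utters only those that decode back to the intended meaning.

import itertools

LEX = {"push": ("ACT", "push"), "red": ("MOD", "red"), "ball": ("OBJ", "ball")}

def comprehend(utterance):
    """Toy hearer: a modifier only counts if an object word follows it."""
    words = utterance.split()
    meaning = set()
    for i, w in enumerate(words):
        role, value = LEX[w]
        if role == "MOD":
            if i + 1 < len(words) and LEX[words[i + 1]][0] == "OBJ":
                meaning.add((role, value))
        else:
            meaning.add((role, value))
    return meaning

target = {("ACT", "push"), ("MOD", "red"), ("OBJ", "ball")}
words = [w for w, pair in LEX.items() if pair in target]
for candidate in itertools.permutations(words):
    utterance = " ".join(candidate)
    verdict = "speak " if comprehend(utterance) == target else "reject"
    print(verdict, utterance)

Only the orders the agent itself can decode (‘push red ball’ and ‘red ball push’ here) are ever spoken, so word-order conventions can stabilise in a population the same way the names did.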


‘How could a slow, mindless process build a thing that could build a thing that a slow mindless process couldn’t build on its own?’ (Dennett, 2017, p. 77)