multiple units can also be represented. The connection weights thus correspond to the structural parameters (and collectively the grammars) captured by brain networks, as described in chapter 6.
All of this, of course, requires suitable training of the network by the (very human and separately conscious) programmer. It usually involves inputting examples— digitally specified elements such as “eyes,” “fur,” “legs,” or whatever— from the keyboard and reading whatever output is created at the next layer. This is then corrected to what it should be— for example, “dog”—by adjusting the connection weights and then inputting the next examples. Through successive approximations of input and feedback, the network behavior expected by the modeller can be obtained.
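To make this concrete, here is a minimal sketch, in Python, of the kind of training loop just described: feature elements are coded as a vector, the network’s output is read off, and the connection weights are nudged by the error feedback. The feature names, the single-layer sigmoid unit, the learning rate, and the toy examples are all illustrative assumptions, not a reconstruction of any particular connectionist model.

```python
import numpy as np

# Hypothetical feature coding: each input is a vector of hand-chosen
# elements ("eyes", "fur", "legs", "wings"); 1 = present, 0 = absent.
FEATURES = ["eyes", "fur", "legs", "wings"]
examples = np.array([
    [1, 1, 1, 0],   # a dog
    [1, 0, 1, 1],   # a bird
    [1, 1, 1, 0],   # another dog
    [1, 0, 1, 1],   # another bird
])
targets = np.array([1, 0, 1, 0])   # 1 = "dog", 0 = "not dog"

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=len(FEATURES))   # connection weights
bias = 0.0
lr = 0.5   # learning rate, an arbitrary illustrative value

def output(x, w, b):
    """Sigmoid unit: the network's graded response to one input pattern."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Successive approximation: present each example, compare the output with
# what it should be, and adjust the connection weights to reduce the error.
for epoch in range(200):
    for x, t in zip(examples, targets):
        y = output(x, weights, bias)
        error = t - y                 # the corrective feedback
        weights += lr * error * x     # adjust connection weights
        bias += lr * error

print(output(np.array([1, 1, 1, 0]), weights, bias))   # close to 1.0: "dog"
```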
One of the striking indications of successful training is the predictability the network then furnishes even on partial or novel inputs, just like real cognition. For example, a partial input, like a few doggy features, will elicit a correct recognition, as if a whole image is being constructed in the network. Experimenters have thus been excited by the way the networks can approximate complex, abstract rules, as in categorization and very simple language learning.
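Continuing the same illustrative sketch (and reusing the output function and the trained weights and bias defined above), presenting a partial or novel feature vector shows the kind of pattern completion just described. The specific vectors are, again, hypothetical.

```python
# Partial input: only "fur" and "legs" are given; the rest are coded 0.
partial = np.array([0, 1, 1, 0])
print(output(partial, weights, bias))   # still close to 1.0, read as "dog"

# A novel combination never seen during training: "eyes" and "fur" only.
novel = np.array([1, 1, 0, 0])
print(output(novel, weights, bias))     # a graded response, not a failure
```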
In other words, the networks exhibit emergent properties like those typically found in cognitive systems. Bold claims have been made that connectionist networks explain the construction of knowledge, the epigenesis of cognitive development, and emergent cognitive abilities, or that they offer a decisive refutation of pure nativism, and the like. All this is furnished without a detached central executive, prestructured rules, discrete representations, and so on.
Is this the answer to William James’s question? Unfortunately, such connectionist models have many problems. Although pretending to be brainlike, they are not biologically very realistic. The elementism at the roots of the approach is not entirely persuasive. For example, inputs to the system are keyboarded in as predefined, static features or other elements. This is most unlike the natural, dynamic flow of real experience. Moreover, getting such networks to learn with more complex structures, like segments of human speech, has required elaborate data preparation and built-in (i.e., quasi-nativist) processing propensities working in slow, progressive stages. Finally, there is obvious artificiality in the way the network has to be trained by the computer operator, according to nu-

