the brain of every human who ever had its component symbols, even if it
was never awakened in them. This would mean that to enumerate the
symbols in anyone's brain, one would have to count all dormant symbols-all
possible combinations and permutations of all types of activations of all
known symbols. This would even include those fantastic creatures of
software that one's brain invents when one is asleep-the strange mixtures
of ideas which wake up when their host goes to sleep ... The existence of
these "potential symbols" shows that it is really a huge oversimplification to
imagine that the brain is a well-defined collection of symbols in well-defined
states of activation. It is much harder than that to pin down a brain state on
the symbol level.
Symbols - Software or Hardware?
With the enormous and ever-growing repertoire of symbols that exist in
each brain, you might wonder whether there eventually comes a point
when the brain is saturated-when there is just no more room for a new
symbol. This would come about, presumably, if symbols never overlapped
each other-if a given neuron never served a double function, so that
symbols would be like people getting into an elevator. "Warning: This
brain has a maximum capacity of 350,275 symbols!"
This is not a necessary feature of the symbol model of brain function,
however. In fact, overlapping and completely tangled symbols are probably
the rule, so that each neuron, far from being a member of a unique symbol,
is probably a functioning part of hundreds of symbols. This gets a little
disturbing, because if it is true, then might it not just as easily be the case
that each neuron is part of every single symbol? If that were so, then there
would be no localizability whatsoever of symbols-every symbol would be
identified with the whole of the brain. This would account for results like
Lashley's cortex removal in rats-but it would also mean abandonment of
our original idea of breaking the brain up into physically distinct subsys-
tems. Our earlier characterization of symbols as "hardware realizations of
concepts" could at best be a great oversimplification. In fact, if every symbol
were made up of the same component neurons as every other symbol, then
what sense would it make to speak of distinct symbols at all? What would be
the signature of a given symbol's activation-that is, how could the activa-
tion of symbol A be distinguished from the activation of symbol B?
Wouldn't our whole theory go down the drain? And even if there is not a
total overlap of symbols, is our theory not more and more difficult to
maintain, the more that symbols do overlap? (One possible way of portray-
ing overlapping symbols is shown in Figure 68.)
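The worry above can be made concrete with a small sketch, not from the text itself: the following Python fragment (names and numbers are illustrative assumptions) treats each symbol as a distributed activation pattern over the very same set of neurons, so that two symbols overlap totally in membership yet remain distinguishable as patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100  # illustrative size, not a claim about real brains

# Two "symbols" sharing every neuron: each is simply a distinct
# pattern of activation levels over the same 100 units, so their
# membership overlaps totally, yet the patterns differ.
symbol_a = rng.normal(size=n_neurons)
symbol_b = rng.normal(size=n_neurons)

def cosine(u, v):
    """Similarity between two activation patterns."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The "signature" of a symbol's activation is the pattern itself:
# even a noisy observation of the whole-brain state can be matched
# against the stored patterns to tell which symbol is active.
observation = symbol_a + 0.3 * rng.normal(size=n_neurons)
scores = {"A": cosine(observation, symbol_a),
          "B": cosine(observation, symbol_b)}
active = max(scores, key=scores.get)
print(active)
```

On this reading, total physical overlap does not by itself dissolve the distinction between symbols; it only moves the distinction from *which neurons* fire to *in what pattern* they fire, which is the direction the pond analogy below develops.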
There is a way to keep a theory based on symbols even if physically,
they overlap considerably or totally. Consider the surface of a pond, which
can support many different types of waves or ripples. The hardware-
namely the water itself-is the same in all cases, but it possesses different
possible modes of excitation. Such software excitations of the same