Of course, this does not elevate consciousness or awareness to any
"magical", nonphysical level. Awareness here is a direct effect of the com-
plex hardware and software we have described. Still, despite its earthly
origin, this way of describing awareness-as the monitoring of brain activ-
ity by a subsystem of the brain itself-seems to resemble the nearly inde-
scribable sensation which we all know and call "consciousness". Certainly
one can see that the complexity here is enough that many unexpected
effects could be created. For instance, it is quite plausible that a computer
program with this kind of structure would make statements about itself
which would have a great deal of resemblance to statements which people
commonly make about themselves. This includes insisting that it has free
will, that it is not explicable as a "sum of its parts", and so on. (On this
subject, see the article "Matter, Mind, and Models" by M. Minsky in his
book Semantic Information Processing.)
What kind of guarantee is there that a subsystem, such as I have here
postulated, which represents the self, actually exists in our brains? Could a
whole complex network of symbols such as has been described above evolve
without a self-symbol evolving? How could these symbols and their
activities play out "isomorphic" mental events to real events in the
surrounding universe, if there were no symbol for the host organism? All the stimuli
coming into the system are centered on one small mass in space. It would be
quite a glaring hole in a brain's symbolic structure not to have a symbol for
the physical object in which it is housed, and which plays a larger role in the
events it mirrors than any other object. In fact, upon reflection, it seems
that the only way one could make sense of the world surrounding a
localized animate object is to understand the role of that object in relation
to the other objects around it. This necessitates the existence of a
self-symbol; and the step from symbol to subsystem is merely a reflection of the
importance of the self-symbol, and is not a qualitative change.
Our First Encounter with Lucas
The Oxford philosopher J. R. Lucas (not connected with the Lucas
numbers described earlier) wrote a remarkable article in 1961, entitled "Minds,
Machines, and Gödel". His views are quite opposite to mine, and yet he
manages to mix many of the same ingredients together in coming up with
his opinions. The following excerpt is quite relevant to what we have just
been discussing:
At one's first and simplest attempts to philosophize, one becomes entangled in
questions of whether when one knows something one knows that one knows
it, and what, when one is thinking of oneself, is being thought about, and what
is doing the thinking. After one has been puzzled and bruised by this problem
for a long time, one learns not to press these questions: the concept of a
conscious being is, implicitly, realized to be different from that of an
unconscious object. In saying that a conscious being knows something, we are saying
not only that he knows it, but that he knows that he knows it, and that he
knows that he knows that he knows it, and so on, as long as we care to pose the