than not come as sudden flashes of insight rather than as products of a
series of slow, deliberate thought processes. Probably these intuitive flashes
come from the extreme core of intelligence; and needless to say, their
source is a closely protected secret of our jealous brains.
In any case, the trouble is not that problem reduction per se leads to
failures; it is quite a sound technique. The problem is a deeper one: how do
you choose a good internal representation for a problem? What kind of
"space" do you see it in? What kinds of action reduce the "distance"
between you and your goal in the space you have chosen? This can be
expressed in mathematical language as the problem of hunting for an
appropriate metric (distance function) between states. You want to find a
metric in which the distance between you and your goal is very small.
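To make the point concrete, here is a minimal sketch (in Python, and of course nowhere in the discussion above) of a greedy best-first search in which the metric is simply a parameter. Everything in it, from the toy space of integers down to the names best_first, moves, and naive, is an illustrative assumption; the hard part described above, inventing a good metric for a rich space, is exactly what the sketch leaves to its caller.

```python
import heapq

def best_first(start, goal, moves, metric, limit=10_000):
    """Greedy best-first search: always expand whichever known state
    the chosen metric judges to be closest to the goal."""
    frontier = [(metric(start, goal), start)]
    seen = {start}
    while frontier and len(seen) < limit:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return state
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (metric(nxt, goal), nxt))
    return None  # the metric never led us to the goal

# Toy space: states are integers; the moves are "add 3" and "double".
moves = lambda n: [n + 3, 2 * n]

# The obvious metric: plain numeric distance. In richer spaces the whole
# difficulty is that no such obvious choice exists.
naive = lambda n, g: abs(g - n)

print(best_first(1, 200, moves, naive))  # -> 200
```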
Now since this matter of choosing an internal representation is itself a
type of problem (and a most tricky one, too), you might think of turning
the technique of problem reduction back on it! To do so, you would have to
have a way of representing a huge variety of abstract spaces, which is an
exceedingly complex project. I am not aware of anyone's having tried
anything along these lines. It may be just a theoretically appealing, amusing
suggestion which is in fact wholly unrealistic. In any case, what AI sorely
lacks is programs which can "step back" and take a look at what is going on,
and with this perspective, reorient themselves to the task at hand. It is one
thing to write a program which excels at a single task which, when done by
a human being, seems to require intelligence, and it is another thing
altogether to write an intelligent program! It is the difference between the
Sphex wasp (see Chapter XI), whose wired-in routine gives the deceptive
appearance of great intelligence, and a human being observing a Sphex
wasp.
The I-Mode and the M-Mode Again
An intelligent program would presumably be one which is versatile enough
to solve problems of many different sorts. It would learn to do each
different one and would accumulate experience in doing so. It would be
able to work within a set of rules and yet also, at appropriate moments, to
step back and make a judgment about whether working within that set of
rules is likely to be profitable in terms of some overall set of goals which it
has. It would be able to choose to stop working within a given framework, if
need be, and to create a new framework of rules within which to work for a
while.
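What such stepping back might look like, mechanically, can at least be caricatured. The sketch below is a speculative illustration, with every name in it (Framework, metasolver, step, score, solved) invented for the occasion: it works within one set of rules, judges after each move whether progress is still being made, and abandons the framework for another when it is not.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Framework:
    """A hypothetical bundle: rules to work within, plus self-judgment hooks."""
    step: Callable     # apply the framework's rules once
    score: Callable    # how promising does the current state look?
    solved: Callable   # are we done?

def metasolver(state, frameworks, budget=100, patience=5):
    for fw in frameworks:
        best, stalled, s = fw.score(state), 0, state
        for _ in range(budget):
            s = fw.step(s)              # work within the current rules
            if fw.solved(s):
                return s
            stalled = 0 if fw.score(s) > best else stalled + 1
            best = max(best, fw.score(s))
            if stalled >= patience:     # step back: this framework no
                break                   # longer looks profitable
    return None

# Toy demo: reach 0 from 37. Doubling only moves away, so the meta-level
# gives up on it and hands the problem to the decrementing framework.
doubler = Framework(step=lambda n: 2 * n, score=lambda n: -n,
                    solved=lambda n: n == 0)
stepper = Framework(step=lambda n: n - 1, score=lambda n: -n,
                    solved=lambda n: n == 0)
print(metasolver(37, [doubler, stepper]))  # -> 0
```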
Much of this discussion may remind you of aspects of the MU-puzzle.
For instance, moving away from the goal of a problem is reminiscent of
moving away from MU by making longer and longer strings which you
hope may in some indirect way enable you to make MU. If you are a naive
"dog", you may feel you are moving away from your "MU-bone" whenever
your string increases beyond two characters; if you are a more sophisticated
dog, the use of such lengthening rules has an indirect justification, something like heading for the gate to get your MU-bone.
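As a concrete illustration, here is a hedged sketch of the naive dog's strategy applied to the MU-puzzle: the four rules of the MIU-system, searched greedily under the metric "shorter strings are closer to MU". The function names are mine, not the book's. The search comes back empty-handed, and not only because the metric penalizes the lengthening rules: as is shown elsewhere in the book, the number of I's in any derivable string is never a multiple of three, so MU is not derivable at all.

```python
import heapq

def successors(s):
    """All strings reachable from s by one application of an MIU-system rule."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                       # Rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)               # Rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])   # Rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])         # Rule 4: UU  -> (dropped)
    return out

def naive_dog(start="MI", goal="MU", limit=5000):
    """Greedy search under the naive metric: a string's length. Any string
    longer than "MU" feels like a step away from the bone."""
    frontier = [(len(start), start)]
    seen = {start}
    while frontier and len(seen) < limit:
        _, s = heapq.heappop(frontier)
        if s == goal:
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                heapq.heappush(frontier, (len(t), t))
    return None  # in fact unreachable: the I-count is never a multiple of 3

print(naive_dog())  # -> None, however patient the dog is
```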