Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter


fused" outwards, as the odor of perfume slowly diffuses in a room when
the bottle is broken. In that sense the new knowledge takes a long time to be
incorporated. Furthermore, if you wanted to go back and replace Axiom 1
by its negation, you could not just do that by itself; you would have to delete
all theorems which had involved Axiom 1 in their derivations. Clearly this
system's explicit knowledge is not nearly so modular as its implicit knowledge.
It would be useful if we learned how to transplant knowledge modu-
larly. Then to teach everyone French, we would just open up their heads
and operate in a fixed way on their neural structures; then they would
know how to speak French. Of course, this is only a hilarious pipe dream.
Another aspect of knowledge representation has to do with the way in
which one wishes to use the knowledge. Are inferences supposed to be
drawn as pieces of information arrive? Should analogies and comparisons
constantly be being made between new information and old information?
In a chess program, for instance, if you want to generate look-ahead trees,
then a representation which encodes board positions with a minimum of
redundancy will be preferable to one which repeats the information in
several different ways. But if you want your program to "understand" a
board position by looking for patterns and comparing them to known
patterns, then representing the same information several times over in
different forms will be more useful.
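The trade-off just described can be made concrete in a small hypothetical sketch: the same chess position stored once with minimal redundancy, and again with the information deliberately repeated in several forms. The data layout and names here are illustrative, not from the text.

```python
# Minimal representation: a single 64-entry array, each square holding a
# piece code or None. Compact and cheap to copy -- well suited to
# generating look-ahead trees.
minimal = [None] * 64
minimal[4] = "WK"   # white king on square index 4 (e1)
minimal[60] = "BK"  # black king on square index 60 (e8)

# Redundant representation: the same position stored several ways at once,
# so pattern-matching questions can be answered by direct lookup
# instead of by scanning the whole board.
redundant = {
    "board": list(minimal),                    # square -> piece
    "piece_squares": {"WK": [4], "BK": [60]},  # piece -> its squares
    "occupied": {4, 60},                       # set of occupied squares
}

def king_square(rep, color):
    # With the redundant form this is one lookup, not a 64-square scan.
    return rep["piece_squares"][color + "K"][0]
```

The cost, of course, is that every move must update all three views consistently, which is exactly why the minimal form wins when positions are created and discarded by the millions.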


Representing Knowledge in a Logical Formalism


There are various schools of thought concerning the best way to represent
and manipulate knowledge. One which has had great influence advocates
representations using formal notations similar to those for TNT, with
propositional connectives and quantifiers. The basic operations in such
representations are, not surprisingly, formalizations of deductive reasoning.
Logical deductions can be made using rules of inference analogous to
some of those in TNT. Querying the system about some particular idea sets
up a goal in the form of a string to be derived. For example: "Is MUMON a
theorem?" Then the automatic reasoning mechanisms take over in a goal-
oriented way, using various methods of problem reduction.
For example, suppose that the proposition "All formal arithmetics are
incomplete" were known, and the program were queried, "Is Principia
Mathematica incomplete?" In scanning the list of known facts (often called
the data base), the system might notice that if it could establish that Principia
Mathematica is a formal arithmetic, then it could answer the question.
Therefore the proposition "Principia Mathematica is a formal arithmetic"
would be set up as a subgoal, and then problem reduction would take over.
If it could find further things which would help in establishing (or refuting)
the goal or the subgoal, it would work on them, and so on, recursively.
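The recursive goal-and-subgoal search just described can be sketched in a few lines. This is a minimal, hypothetical fragment, not any particular reasoning system: facts and one rule encode the Principia Mathematica example, and variable handling is reduced to a single placeholder.

```python
# The data base of known facts, each a (predicate, argument) pair.
facts = {("formal_arithmetic", "Principia Mathematica")}

# "All formal arithmetics are incomplete", written as a rule: the head
# holds for X whenever every goal in the body holds for X.
rules = [(("incomplete", "X"), [("formal_arithmetic", "X")])]

def prove(goal):
    """Try to establish `goal`, working backwards from it."""
    if goal in facts:                       # already known directly
        return True
    pred, arg = goal
    for (head_pred, var), body in rules:
        if head_pred == pred:               # this rule could yield the goal
            # Problem reduction: bind the rule's variable to the goal's
            # argument, set up each body goal as a subgoal, and recurse.
            subgoals = [(p, arg if a == var else a) for (p, a) in body]
            if all(prove(sub) for sub in subgoals):
                return True
    return False
```

Asking `prove(("incomplete", "Principia Mathematica"))` fails the direct fact lookup, matches the rule, spawns the subgoal `("formal_arithmetic", "Principia Mathematica")`, finds that in the data base, and so answers yes.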
This process is given the name of backwards chaining, since it begins with the
goal and works its way backwards, presumably towards things which may
already be known. If one makes a graphic representation of the main goal,

618 Artificial Intelligence: Retrospects