also a step of reasoning, one must resort to a yet higher-level rule, and so
on. Conclusion: Reasoning involves an infinite regress.
Of course something is wrong with the Tortoise's argument, and I
believe something analogous is wrong with Samuel's argument. To show
how the fallacies are analogous, I now shall "help the Devil", by arguing
momentarily as Devil's advocate. (Since, as is well known, God helps those
who help themselves, presumably the Devil helps all those, and only those,
who don't help themselves. Does the Devil help himself?) Here are my
devilish conclusions drawn from the Carroll Dialogue:
The conclusion "reasoning is impossible" does not apply to
people, because as is plain to anyone, we do manage to carry out
many steps of reasoning, all the higher levels notwithstanding.
That shows that we humans operate without need of rules: we are
"informal systems". On the other hand, as an argument against
the possibility of any mechanical instantiation of reasoning, it is
valid, for any mechanical reasoning-system would have to depend
on rules explicitly, and so it couldn't get off the ground unless it
had metarules telling it when to apply its rules, metametarules
telling it when to apply its metarules, and so on. We may conclude
that the ability to reason can never be mechanized. It is a uniquely
human capability.
What is wrong with this Devil's advocate point of view? It is obviously
the assumption that a machine cannot do anything without having a rule telling it
to do so. In fact, machines get around the Tortoise's silly objections as easily
as people do, and moreover for exactly the same reason: both machines
and people are made of hardware which runs all by itself, according to the
laws of physics. There is no need to rely on "rules that permit you to apply
the rules", because the lowest-level rules (those without any "meta"s in
front) are embedded in the hardware, and they run without permission.
Moral: The Carroll Dialogue doesn't say anything about the differences
between people and machines, after all. (And indeed, reasoning is
mechanizable.)
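The point about lowest-level rules running "without permission" can be put in computational terms. Here is a minimal sketch of my own (not from the text, and the propositions A, B, Z are just the Tortoise's hypothetical ones): the inference rules are mere data, but the act of *applying* them is hardwired into the host language's loop, which stands in for the hardware. No meta-rule licenses each application; the loop simply runs.

```python
def modus_ponens(facts, implications):
    """Apply the rule A, A->B |- B until nothing new follows.

    The while-loop below is the 'lowest-level rule': it executes
    directly, needing no meta-rule for permission to fire."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # fires without asking permission
                changed = True
    return facts

# The Tortoise's setup: A is given, along with A->B and B->Z.
derived = modus_ponens({"A"}, [("A", "B"), ("B", "Z")])
print(sorted(derived))  # ['A', 'B', 'Z']
```

Were the Tortoise right, the `if` test would itself need a sanctioning rule, and that rule another, forever; in fact the test is just circuitry (here, interpreter machinery) doing what physics makes it do.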
So much for the Carroll Dialogue. On to Samuel's argument. Samuel's
point, if I may caricature it, is this:
No computer ever "wants" to do anything, because it was
programmed by someone else. Only if it could program itself from
zero on up (an absurdity) would it have its own sense of desire.
In his argument, Samuel reconstructs the Tortoise's position, replacing "to
reason" by "to want". He implies that behind any mechanization of desire,
there has to be either an infinite regress or, worse, a closed loop. If this is
why computers have no will of their own, what about people? The same
criterion would imply that
Strange Loops, Or Tangled Hierarchies 685