Consciousness


Computable v. non-computable.
A computable procedure is one that can be described explicitly, and any such procedure can be carried out by a computer program (this is the Church–Turing thesis). Computational functionalism is the doctrine that the brain is essentially a Turing machine and its operations are computations. If this is true, then it should be possible to reproduce all human abilities by carrying out the right computations, making strong AI feasible. Against this is the claim that such computations would only simulate human functions; that there is more to consciousness than running the right program. Turing himself showed that some functions are noncomputable, and Penrose argues that mathematicians can intuitively see noncomputable truths, meaning that the brain is not a Turing machine and conscious understanding goes beyond computation. (See the website for more on the maths of noncomputability.)
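A computable procedure in the above sense is one that can be written down as an explicit, step-by-step rule. Euclid's algorithm for the greatest common divisor is a classic illustration (chosen here as a generic example, not one drawn from the text):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an explicitly describable, hence
    computable, procedure. Per the Church-Turing thesis, any
    procedure describable this way can be run as a program."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return abs(a)

print(gcd(48, 18))  # → 6
```

The noncomputable functions Turing identified (such as deciding whether an arbitrary program halts) are precisely those for which no such finite rule can exist.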


Deterministic v. nondeterministic.
A machine that always produces the same outcome from the same input and the same internal state is deterministic; one that can produce different outcomes is nondeterministic. Digital computers are deterministic. Note that this does not mean that their outcome must be predictable. For example, chaos theory shows that for some deterministic processes, the outcome varies dramatically with only very slight differences in initial conditions. Nor does it mean that computers cannot be creative. The evolutionary algorithm (Chapter 10) is par excellence a deterministic procedure that yields creativity. Computers can simulate nondeterministic systems by adding pseudo-randomness.
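The point that deterministic need not mean predictable can be made concrete with the logistic map, a standard textbook example from chaos theory (the map and parameter value are conventional choices, not taken from this text). The update rule is fixed and perfectly repeatable, yet two runs whose starting points differ by one part in ten billion soon diverge completely:

```python
def max_divergence(x0: float, y0: float, r: float = 4.0, steps: int = 60) -> float:
    """Iterate the deterministic logistic map x -> r*x*(1-x) from two
    starting points and return the largest gap between the trajectories."""
    x, y = x0, y0
    max_gap = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        max_gap = max(max_gap, abs(x - y))
    return max_gap

# Initial conditions differ by only 1e-10, yet the gap grows to order one.
gap = max_divergence(0.2, 0.2 + 1e-10)
print(gap)
```

Running the same code twice gives identical results (the process is deterministic); it is only prediction from imperfectly known initial conditions that fails.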


Brains, at least at one level, are nondeterministic. They are warm, wet, and noisy, and therefore cannot always produce the same output to the same input. Neurons are living entities whose electrical properties change as their dendrites grow or move. Synapses form and decay, and their strength changes in response to use. So the machine itself is never the same from one moment to the next. At a smaller scale, though, the underlying molecular processes are usually assumed to be deterministic. This is one reason why there appears to be no room for free will, and adding randomness, as one can do with a computer, does not provide a meaningful kind of ‘freedom’ (Chapter 9). Going smaller still, one reaches the level of quantum effects and quantum indeterminacy. Some have argued that this is the ultimate source of human creativity, free will, and consciousness.


a more complex task, such as identifying individual
faces, it would need enough output units to encode
any allowable identity. During training, a program
compares the net’s actual output with the correct
output and makes adjustments to the weights
accordingly  – but how? The best-known method
uses the back-propagation algorithm (meaning
that the error is iteratively fed back into the net-
work to update the weights). As training proceeds,
the errors get gradually smaller until the network
responds more or less correctly. If the training set of
photographs is appropriately chosen, the network
should now perform well on a completely new
photograph.
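The training loop described above — compare actual with correct output, feed the error back, adjust the weights — can be sketched in miniature. The toy network below is a hypothetical illustration (not the face-recognition net from the text): one hidden layer, trained by back-propagation on the XOR problem, with the errors getting gradually smaller as training proceeds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training set: inputs and their correct outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small random initial weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def mean_error():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

initial_error = mean_error()
for _ in range(5000):
    # Forward pass: the net's actual output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Back-propagation: the error is fed back through the network.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust the weights a little in the error-reducing direction.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)
final_error = mean_error()
print(initial_error, "->", final_error)
```

Note that nothing in the code says what XOR "means"; the network works out suitable weights for itself, just as the face-recognition net in the text works out its own features.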
Note that the process of adjusting the weights is
algorithmic, or rule-based, and the whole system
may be run on a digital computer, or hardcoded
on to a chip for far greater speed. The system con-
tains nothing that tells it how to recognise men and
women. The ANN works this out for itself, and even
its creators cannot know what exactly the weights
mean. Unlike traditional machines, connectionist
networks do not just do what their programmers tell
them to do. This is a long way from good old-fash-
ioned rule-and-symbol AI, and getting further away
all the time, as new developments bring in fuzzy
logic (allowing ANNs to take into account concepts
like ‘usually’, ‘somewhat’, and ‘sometimes’ rather
than just binary true/false values) and explore the
possibility of pulsed neural networks to mimic how
biological neural networks use the timing of pulses
to communicate information and perform computa-
tions. Deep learning (in networks with many layers)
has also been accelerated by the advent of massively
parallel graphics processors (GPUs) developed for
video gaming. These are used to drive applications
that require vast processing power to train billions of
‘software-neurons’.
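The fuzzy-logic extension mentioned above can be illustrated with Zadeh's standard operators, under which truth is a degree between 0 and 1 rather than a binary value, and AND, OR, and NOT become min, max, and complement. This is a minimal sketch of the idea; the degree values assigned to words like 'usually' and 'sometimes' are illustrative assumptions, and real fuzzy–neural hybrids are far more elaborate.

```python
def f_and(a: float, b: float) -> float:
    return min(a, b)   # fuzzy AND: as true as its weakest conjunct

def f_or(a: float, b: float) -> float:
    return max(a, b)   # fuzzy OR: as true as its strongest disjunct

def f_not(a: float) -> float:
    return 1.0 - a     # fuzzy NOT: complement of the degree

usually = 0.8    # 'usually': mostly true (illustrative degree)
sometimes = 0.3  # 'sometimes': partly true (illustrative degree)

print(f_and(usually, sometimes))  # → 0.3
print(f_or(usually, sometimes))   # → 0.8
print(f_not(usually))             # 0.2, up to floating-point rounding
```

Ordinary binary logic falls out as the special case where every degree is exactly 0 or 1.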
ANNs are useful for many purposes, including recog-
nising handwriting, controlling robots, mining data,
forecasting market fluctuations, and filtering spam,
and may soon be used in many more applications
like self-driving cars. The connectionist–computa-
tional debates continue, but so does the gradual
movement from understanding cognition as manip-
ulation of static symbols towards treating it as a
continuous dynamical system that cannot be easily
broken down into discrete states.