computer is not – it would have zero Φ and be a perfect zombie’ (Tononi, 2015). Yet surely this system would not behave identically to a human, and so in that important sense would not fit the definition of a zombie – let alone be a perfect one.

Like global workspace models, IIT insists on the importance of distributed dynamic processes and treats consciousness as a continuous variable. But in other respects, the theories differ. In GWTs, the contents of the workspace are conscious because they are displayed or made available to the rest of the system. In IIT there is no equivalent of this theatre-like display or global availability, other than the distributed power to affect other parts of the brain. Experience is a fundamental quantity, like mass, charge, or energy (Tononi, 2004), and consciousness ‘increases in proportion to a system’s ability to integrate information’ (Tononi, 2007, p. 298).
A great deal rests on the central concept of integrated information, Φ. There are several competing definitions, but the most common version is, essentially, this: divide the system into two parts, A and B, and minimise a measure of the shared information between A’s outputs and B’s inputs (and vice versa); Φ is the value at that minimum-information bipartition (a toy version is sketched in the code below). According to the theoretical computer scientist Scott Aaronson (2014), however, having a large Φ value cannot be a sufficient condition for consciousness. Like Dennett in several contexts we have already come across, Aaronson (2014) warns us that our intuition about the magical status of integrated complexity may be leading us astray:


As humans, we seem to have the intuition that global integration
of information is such a powerful property that no ‘simple’ or
‘mundane’ computational process could possibly achieve it. But our
intuition is wrong. If it were right, then we wouldn’t have linear-size
superconcentrators or LDPC [low-density parity check] codes.
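To make the minimum-information bipartition concrete before unpacking Aaronson’s two examples, here is a sketch in Python. A caveat: this ‘toy phi’ (the smallest mutual information across any two-way cut of a small system) is a deliberately simplified stand-in of our own devising, not Tononi’s actual measure, which is defined over cause–effect repertoires and has changed across versions of the theory.

```python
from itertools import product
from math import log2

def marginal(dist, idxs):
    """Marginal distribution over the units listed in idxs."""
    out = {}
    for state, p in dist.items():
        key = tuple(state[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_information(dist, part_a, part_b):
    """I(A;B) in bits between two disjoint groups of units."""
    pa = marginal(dist, part_a)
    pb = marginal(dist, part_b)
    pab = marginal(dist, part_a + part_b)
    mi = 0.0
    for state, p in pab.items():
        if p > 0:
            a, b = state[:len(part_a)], state[len(part_a):]
            mi += p * log2(p / (pa[a] * pb[b]))
    return mi

def toy_phi(dist, n_units):
    """Toy phi: minimise I(A;B) over every way of cutting the system in two.
    (A simplified sketch of the minimum-information bipartition, NOT
    Tononi's official measure.)"""
    best = float("inf")
    # Fixing unit 0 in part A enumerates each bipartition exactly once.
    for mask in range(2 ** (n_units - 1)):
        part_a = [0] + [i for i in range(1, n_units) if mask & (1 << (i - 1))]
        if len(part_a) == n_units:
            continue  # not a proper bipartition
        part_b = [i for i in range(n_units) if i not in part_a]
        best = min(best, mutual_information(dist, part_a, part_b))
    return best

# Example system: units 0 and 1 are perfectly correlated; unit 2 is an
# independent fair coin bolted on the side.
joint = {}
for x0, x1, x2 in product([0, 1], repeat=3):
    joint[(x0, x1, x2)] = (0.5 if x0 == x1 else 0.0) * 0.5

print(f"toy phi = {toy_phi(joint, 3):.3f} bits")
# Prints 0.000: the cut {0,1} versus {2} carries no information, so the
# weakest-link minimum is zero and the system is not integrated as a whole.
```

The point of the example is the weakest-link logic: a single informationally isolated part drags the whole system’s score to zero, and finding that weakest cut means checking every bipartition, which is one reason exact Φ is so costly to compute for large systems.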

A superconcentrator is a type of graph that appears in the design of communication networks, and a parity-check code is an error-checking code that ensures correct transmission of a digital signal. Both rely on highly complex mathematics and have huge expressive power, but beyond integrating lots of information they offer no persuasive reason to be considered conscious.
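To give the second of these a concrete form, here is a minimal Python illustration of a parity-check code. A caveat: for brevity it uses the classic (7,4) Hamming code rather than a true low-density (LDPC) code; each row of the matrix H states one parity constraint, and a received word passes all the checks exactly when H times the word is zero modulo 2.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (an illustration of the
# parity-check idea, not an actual low-density code). Column j of H is
# the binary representation of j, so the syndrome of a single-bit error
# spells out the error's position.
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],   # checks bit positions 1, 3, 5, 7
    [0, 1, 1, 0, 0, 1, 1],   # checks bit positions 2, 3, 6, 7
    [0, 0, 0, 1, 1, 1, 1],   # checks bit positions 4, 5, 6, 7
])

def syndrome(received):
    """H @ r mod 2; the all-zero vector means every parity check passes."""
    return H @ np.asarray(received) % 2

codeword = [0, 1, 1, 0, 0, 1, 1]   # a valid codeword
print(syndrome(codeword))          # [0 0 0]: all checks satisfied

corrupted = codeword.copy()
corrupted[4] ^= 1                  # flip the bit at position 5 (index 4)
print(syndrome(corrupted))         # [1 0 1]: binary for 5, locating the error
```

LDPC codes work on the same principle with much larger, sparser check matrices; what matters for Aaronson’s argument is that every bit is woven into constraints involving many others: plenty of integrated information, with no temptation to call it experience.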
Aaronson proposes a slightly easier version of the hard problem, the Pretty-Hard Problem of Consciousness: ‘which physical systems are associated with consciousness and which aren’t’. In his view, IIT cannot even solve the Pretty-Hard Problem, let alone Chalmers’ Hard Problem, ‘because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all’. As he puts it, ‘you can have integrated information without consciousness (or even intelligence) – just like you can have computation without consciousness, and unpredictability without consciousness, and electricity without consciousness’ (2014). But of course this just pits one intuition against another. When using a theory to make predictions about which non-human systems are conscious, it is hard to know whether the theory or your intuitions should win out. If your theory gives a fridge and a paving stone high enough scores to make them conscious, should you reject the theory or accept its predictions? In Chapter 12, we will explore the problems of testing for consciousness in human-made machines.
