Consciousness


SECTION FIVE: BORDERLANDS
of hallucinations while their brain activity was recorded. Hallucinations of faces were associated with activity in the face area, objects with activity in the object area, colour with colour areas, and so on. For complex visions, the features simply added up: activity in both object and colour areas was associated with a coloured object, while activity in a texture area without activity in a colour area was associated with a colourless texture (ffytche et al., 1998).


HALLUCINATING MACHINES


Hallucination-like patterns have even been
replicated without human involvement. A sci-
ence–arts collaboration called the Einstein’s
Brain Project was inspired by the phenomena
of closed-eye hallucination, including random light–dark regions, blobs, flashes,
and colours in motion, as well as the familiar form constants. In a camera-based
experiment, the lens is covered with a uniformly illuminated goggle and bathed in
yellow light to make a ganzfeld or uniform field. The video stream is then sent to a
computer to be analysed for optical features, and tiny inconsistencies are tracked,
amplified, and projected onto a wall. Patterns emerge from the noise as video
frames accumulate and merge – just as in ganzfeld involving human participants.
The authors describe the machine memory as ‘generating form from within. [. . .]
It is as if algorithmic access to an archive – machine memory if you will – is, and
must be, fundamentally hallucinatory’ (Dunning and Woodrow, 2010).
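The loop just described – capture frames of a uniform field, merge them over time, and amplify whatever tiny inconsistencies remain – can be sketched in a few lines. This is only a toy illustration under assumed parameters (frame count, noise scale, gain), not the Einstein’s Brain Project’s actual pipeline.

```python
import numpy as np

# Toy sketch of the ganzfeld-camera loop: frames of an almost-uniform
# field are accumulated, and the tiny residual deviations ("optical
# inconsistencies") are amplified into a visible pattern. All numbers
# here are illustrative assumptions, not taken from the project.

rng = np.random.default_rng(0)

# 200 video frames of a uniform grey field plus faint sensor noise
frames = 0.5 + 0.001 * rng.standard_normal((200, 32, 32))

accumulated = frames.mean(axis=0)            # merge frames over time
residual = accumulated - accumulated.mean()  # the tracked inconsistencies
pattern = np.clip(0.5 + 500.0 * residual, 0.0, 1.0)  # amplify for display

# The amplified pattern spans far more of the grey range than any
# single raw frame did: structure has emerged from the noise
print(np.ptp(pattern) > np.ptp(frames[0]))
```

The key design point is that nothing is added from outside: the pattern is entirely a magnification of noise already present in the “uniform” input, which is what makes the result hallucination-like.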

Far more dramatic images are produced by Google’s ‘deep dreaming’ algorithms.
The idea is based on artificial neural networks that are trained to recognise objects
in complex images. These multilayer networks are shown thousands of images
and trained to extract progressively higher- and higher-level features until the
final layer can identify specific objects such as faces, houses, and animals or even
a specific person, breed of dog, or type of farm building. Even relatively simple
networks are found to over-interpret images, finding shapes and objects that are
not really there, as with human pareidolia.
The trick that researchers at Google and elsewhere have been exploring is to
reverse the flow of information through the network in a process they call ‘incep-
tionism’, a name based on a line in the science fiction film Inception: ‘We need to go
deeper’ (Hayes, 2015). Once the network has been trained to recognise an object
(e.g. a face), the learning process is stopped and the network is run in reverse,
and then the forward–backward cycle is repeated. But instead of adjusting the
synaptic weights in the network, in this case the weights are held constant and
the image (the input) is manipulated. Whether the image has the target object
in it or not, as the iterations continue the self-reinforcing process produces first
ghostly versions of the object and then extraordinarily complex images with mul-
tiple objects and patterns: dogs with ten legs, human heads on artificial bodies,
cities sprouting snakes and eyes. They look for all the world like the psychedelic
art inspired by the major hallucinogens.
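The freeze-the-weights, adjust-the-input cycle can be illustrated with a deliberately tiny linear ‘network’. This is a minimal sketch under assumed sizes and step size; a real deep-dreaming run performs the same gradient ascent on the input, but through a deep convolutional network.

```python
import numpy as np

# Minimal sketch of the 'inceptionism' loop: the trained weights W are
# held constant and the *input* x is nudged, step by step, so that a
# chosen feature detector responds ever more strongly. The single
# frozen linear layer, its sizes, and the step size are illustrative
# assumptions only.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64))     # frozen "trained" weights: 8 detectors
x = 0.01 * rng.standard_normal(64)   # start from a faint, noisy "image"

target = 3  # the feature we want the network to "dream" into the input
for _ in range(100):
    # the activation of detector `target` is W[target] @ x, so its
    # gradient with respect to x is simply W[target]; ascend it
    x += 0.1 * W[target]

activations = W @ x
# after the self-reinforcing iterations, the chosen detector dominates
print(int(np.argmax(activations)) == target)
```

Whether or not the starting image contains the target feature, the loop amplifies whatever faint trace of it the detector finds, which is why ghostly versions of the object appear first and elaborate composites later.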

[Bar chart: y-axis ‘Relative per cent’ (0–80); colour reports by trained Ss (n = 4) at baseline, on placebo, and on mescaline (300 mg), LSD (100 μg), psilocybin (20 mg), Δ⁹-THC (20 mg), d-amphetamine (15 mg), phenobarbital (60 mg), and BOL (100 μg).]

FIGURE 14.6 • Siegel and Jarvik (1975) trained psychonauts to report on their experiences while taking various drugs. Mean percentage distributions for colour are shown here. B black, V violet, Bl blue, G green, Y yellow, O orange, R red, W white.
