
not because the low-level features themselves are of interest
for perception, but because these features make convenient
stepping-stones for the detection of higher order features in
the stimulation.
The view of a perceptual system as a collection of devices
sensitive to low-level features of stimulation raises the diffi-
cult question of how such features are combined into the
meaningful entities of our visual experience. This question,
known as the binding problem, has two aspects: (a) How does
the brain know which similar features (such as edges of a
contour) belong to the same object in the environment? and
(b) How does the brain know which different features (e.g.,
pertaining to the form and the color) should be bound into the
representation of a single object? These questions could not
be answered without understanding the statistics of optical
covariation (MacKay, 1986), as we argue in the next section.
That the visual system uses such statistical data is suggested
by physiological evidence that visual cortical cells are con-
currently selective for values on several perceptual dimen-
sions rather than being selective to a single dimension
(Zohary, 1992). We now briefly review the background
against which the idea of optical covariation has emerged in
order to prepare the ground for our discussion of contempo-
rary research on the statistics of the natural environment.


Redundancy and Covariation


Following the development of the mathematical theory of
communication and the theory of information (Shannon &
Weaver, 1949; Wiener, 1948; see also chapter by Proctor and
Vu in this volume), mathematical ideas about information-
handling systems began to influence the thinking of researchers
of perception. Although the application of these ideas to per-
ception required a good deal of creative effort and insight, the
resulting theories of perception looked much like the theories
of human-engineered devices, “receiving” from the environ-
ment packets of “signals” through separable “channels.”
Whereas the hope of assigning precise mathematical meaning
to such notions as information, feedback, and capacity was to
some extent fulfilled with respect to low-level sensory
processes (Graham, 1989; Watson, 1986), it gradually became
clear that a rethinking of the ideas inspired by the theory of
communication was in order (e.g., Nakayama, 1998).
An illuminating example of such rethinking is the evolu-
tion of the notion of redundancy reduction into the notion of
redundancy exploitation (see Barlow, 2001, in press, for a
firsthand account of this evolution). The notion of redundancy
comes from Shannon’s information theory, where it was a
measure of nonrandomness of messages (see Attneave, 1954,
1959, p. 9, for a definition). In a structureless distribution of
luminances, such as the snow on the screen of an untuned TV
set, there are no correlations between elements in different parts
of the screen. In a structure-bearing distribution there exist
correlations (or redundancy) between some aspects of the dis-
tribution, so that we can to some extent predict one aspect of
the stimulation from other aspects. As Barlow (2001) put it,
“any form of regularity in the messages is a form of redun-
dancy, and since information and capacity are quantitatively
defined, so is redundancy, and we have a measure for the
quantity of environmental regularities.”
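
To make the quantitative side of this definition concrete, here is a minimal sketch (in Python, with invented data) that estimates redundancy as 1 − H/H_max, where H is the entropy of a signal's empirical value distribution and H_max is the entropy of a structureless (uniform) distribution over the same bins. It captures only first-order redundancy in the distribution of values; correlations between different locations, of the kind discussed below, add further redundancy that this simple estimate ignores. The signal, bin count, and smoothing are illustrative assumptions, not anything reported by Shannon, Attneave, or Barlow.

```python
import numpy as np

def redundancy(values, n_bins=16):
    """Estimate Shannon redundancy, 1 - H / H_max, of a one-dimensional signal.

    H is the entropy of the empirical (binned) distribution of values;
    H_max = log2(n_bins) is the entropy of a perfectly structureless signal.
    """
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # empty bins contribute nothing
    entropy = -(p * np.log2(p)).sum()     # in bits per sample
    return 1.0 - entropy / np.log2(n_bins)

rng = np.random.default_rng(0)
snow = rng.uniform(0.0, 1.0, 10_000)               # "TV snow": uniform, unpredictable
smooth = np.convolve(snow, np.ones(50) / 50.0,     # local averaging adds regularity
                     mode="same")

print(f"redundancy of snow:     {redundancy(snow):.3f}")    # close to 0
print(f"redundancy of smoothed: {redundancy(smooth):.3f}")  # well above 0
```

The structureless signal fills its bins nearly uniformly and so carries almost no redundancy, whereas the locally averaged signal is concentrated around its mean and is therefore partly predictable, which is exactly the sense in which regularity becomes measurable.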
On Attneave’s view, and on Barlow’s earlier view, a pur-
pose of sensory processing was to reduce redundancy and
code information into the sensory “channels of reduced
capacity.” After this idea had dominated the literature for several
decades, it became increasingly clear—from factual evi-
dence (such as the number of neurons at different stages of
visual processing) and from theoretical considerations (such
as the inefficiency of the resulting code)—that the redun-
dancy of sensory representations does not decrease in the
brain from the retina to the higher levels in the visual path-
ways. Instead, it was proposed that the brain exploits, rather
than reduces, the redundancy of optical stimulation.
According to this new conception of redundancy, the brain
seeks redundancy in the optical stimulation and uses it for a
variety of purposes. For example, the brain could look for
correlations between local luminance values as a function of
their retinal separation across the scene (underwriting grouping
by proximity; e.g., Ruderman, 1997), or it could look for correlations
between local edge orientations at different retinal locations
(underwriting grouping by continuation; e.g., Geisler et al.,
2001). The idea of discovering such correlations between
multiple variables is akin to performing covariational analy-
sis on the stimulation; a numerical sketch of such an analysis
follows the quotation below. MacKay (1986, p. 367) explained the
utility of covariational analysis:

The power of covariational analysis—asking “what else hap-
pened when this happened?”—may be illuminated by its use in
the rather different context of military intelligence-gathering. It
becomes effective and economical, despite its apparent crudity,
when the range of possible states of affairs to be identified is rel-
atively small, and when the categories in terms of which covari-
ations are sought have been selected or adjusted according to the
information already gathered. It is particularly efficacious where
many coincidences or covariations can be detected cheaply in
parallel, each eliminating a different fraction of the set of possi-
ble states of affairs. To take an idealized example, if each obser-
vation were so crude that it eliminated only half of the range of
possibilities, but the categories used were suitably orthogonal-
ized (as in the game of “Twenty questions”), only 100 parallel
analyzers would be needed in principle to identify one out of
2^100, or say 10^30, states of affairs.
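
To illustrate what a covariational analysis of optical stimulation might look like in practice, the following sketch, in the same spirit as the earlier one and again using invented data, measures the correlation between luminance values at pairs of pixels as a function of their separation. In a smoothed, structure-bearing image the correlation is high for nearby pixels and falls off with distance, the kind of regularity that could underwrite grouping by proximity (cf. Ruderman, 1997); in "TV snow" it is near zero at every separation. This is a schematic illustration of the idea, not a reproduction of Ruderman's or MacKay's analyses.

```python
import numpy as np

def luminance_correlation(image, separations):
    """Correlation between luminances at pairs of pixels a given distance apart."""
    img = image - image.mean()
    return {sep: np.corrcoef(img[:, :-sep].ravel(),       # luminance at column x
                             img[:, sep:].ravel())[0, 1]  # luminance at column x + sep
            for sep in separations}

rng = np.random.default_rng(1)
snow = rng.uniform(size=(256, 256))                       # structureless "TV snow"
kernel = np.ones(9) / 9.0                                 # local averaging adds structure
smooth = np.array([np.convolve(row, kernel, mode="same") for row in snow])

for name, img in [("snow", snow), ("smooth", smooth)]:
    corrs = luminance_correlation(img, separations=[1, 2, 4, 8, 16])
    print(name, {s: round(r, 2) for s, r in corrs.items()})
# Expected pattern: for the smoothed image the correlation starts near 0.9 at
# separation 1 and decays toward 0 by separation 16; for the snow it stays near 0.
```

MacKay's closing arithmetic also checks out directly: 100 independent binary observations distinguish 2^100 alternatives, and 2^100 = (2^10)^10 ≈ (10^3)^10 = 10^30, which is why about 100 suitably orthogonalized analyzers suffice in principle to single out one of roughly 10^30 states of affairs.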