– being guided by a rule of thumb, say – and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.
"We're basically using the machine learning system to identify those cases where we're seeing something that's inconsistent with our theory," Griffiths said. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of "if... then"-type rules.
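In pseudocode terms, the procedure Griffiths describes might look something like the following Python sketch. Every name here (net, prospect_theory_predict, scenarios) is a hypothetical stand-in for illustration, not the Princeton group's actual code.

# Hypothetical sketch: flag choice scenarios where a trained neural net
# disagrees with prospect theory. All names are invented for illustration.
def find_theory_violations(net, prospect_theory_predict, scenarios, tolerance=0.1):
    """Return the scenarios where the net's predicted choice probability
    departs from prospect theory's prediction by more than `tolerance`."""
    violations = []
    for scenario in scenarios:
        p_theory = prospect_theory_predict(scenario)  # theory's prediction
        p_net = net(scenario)                         # learned prediction
        if abs(p_theory - p_net) > tolerance:
            violations.append((scenario, p_theory, p_net))
    return violations

# Each flagged scenario marks a region of choice space where the theory
# needs a new "if... then" branch – the kind of constraint described above.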
What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so – the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.
Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google's and Amazon's AIs.
Dayan pointed out that humans are biased too and, unlike AIs, "in ways that are very hard to interrogate or correct". Ultimately, it will be hard to argue that the machine is the more biased of the two. A tougher obstacle may be our human need to explain the world – to talk in terms of cause and effect. "Explainable AI", which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?
Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data – and hence scanning time – to produce such an image, which isn't necessarily the best use of that data if your goal is to detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis. But radiologists and patients remain wedded to the image. "We humans are more comfortable with a 2D image that our eyes can interpret," he said.
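A toy Python sketch of that idea, assuming a scikit-learn-style workflow: the feature-selection step stands in for learning which fraction of the raw scanner data carries the diagnostic signal. The data is synthetic and the 10% budget is an arbitrary illustrative choice, not anything from Chopra's work.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for raw scanner measurements: 200 "scans",
# each with 1,000 raw measurements, plus binary diagnosis labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = rng.integers(0, 2, size=200)

# Keep only the 100 most informative measurements (10% of the raw data)...
selector = SelectKBest(f_classif, k=100)
X_small = selector.fit_transform(X, y)

# ...and check whether a simple classifier can still diagnose from that
# reduced slice alone, skipping image reconstruction entirely.
clf = LogisticRegression(max_iter=1000).fit(X_small, y)
print(clf.score(X_small, y))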
The final objection to post-theory science is that there is likely to be useful old-style theory – that is, generalisations extracted from discrete examples – that remains to be discovered, and only humans can do that because it requires intuition. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.
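To make that idealisation concrete (standard textbook physics, not a model from the article): with air resistance proportional to speed, a falling body of mass $m$ obeys

$$ m\frac{dv}{dt} = mg - kv $$

and Newton's simplifying move amounts to dropping the drag term $kv$, leaving $m\,dv/dt = mg$ – the falling-body instance of $F = ma$.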
In Nature, the mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step "the core of the creative process". But the reason he was writing about it was to say that for the first time, an AI had pulled it off. DeepMind had built a machine-learning program that had prompted mathematicians towards new insights – new generalisations – in the mathematics of knots.
In 2022, therefore, there is almost no stage of the scientific process where AI hasn't left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We can reassure ourselves about one thing: we're still asking the questions. As Pablo Picasso put it in the 1960s, "computers are useless. They can only give you answers." Observer
LAURA SPINNEY IS A SCIENCE JOURNALIST AND AUTHOR
▲ An MRI scan of a brain requires a huge amount of data. SALIH DENIZ/GETTY
An image of protein structures representing the data obtained by AlphaFold. Its creation of a database of nearly all human protein structures could revolutionise the way diseases are treated. GRANGER HISTORICAL PICTURE ARCHIVE/ALAMY; KAREN ARNOTT/EMBL-EBI/PA
10k
Last year, Tom Griffiths of Princeton University described how his group trained a neural net on a vast dataset of decisions people took in 10,000 risky choice scenarios. They then compared how accurately it predicted further decisions with respect to prospect theory. Results revealed that prospect theory did pretty well, but the neural net showed its worth in highlighting where the theory broke down; that is, where its predictions failed.