The Observer
Opinion 25.08.19
think it will be a partnership rather
than an alternative.”
Others, however, believe the
commercial imperatives of the tech
giants conflict with the public service
values of the NHS. “What does
return on investment look like in
healthcare?” asks Mathana Stender ,
a Berlin-based tech ethicist. “The
incentives of companies like Amazon
are to increase market share, share
prices and profits, which puts them
at odds with the underlying goals of
a public health system like the NHS.”
Some fear that free-market
politicians would be more than
happy to cut NHS funding and invite
private companies to fill the gap.
However, Matthew Honeyman , of
health think tank the King’s Fund ,
says those interested in cutting
costs shouldn’t expect quick gains.
“These kinds of transformations do
bring efficiencies and productivity
benefits, but our research suggests
these take years to come through.”
Can I trust Amazon
with my health data?
The most widespread concerns
relate to data protection. Last month
it was revealed that contractors
working on quality control of
Apple’s Siri voice assistant regularly
hear recordings of confidential
conversations, including doctor-
patient discussions. In 2017,
London’s Royal Free hospital was
rebuked for failing to comply with
the Data Protection Act when it
handed over the personal data of
1.6 million patients to Google’s
DeepMind as part of efforts to
develop an early diagnosis system.
Amazon has promised that
UK patients who asked Alexa for
health advice would have their data
encrypted, but critics demand more
specifics on how patient data is
protected. “We need more detail and
transparency about how people’s
sensitive data will be processed
and used when they ask Alexa a
question,” says Honeyman.
Others say there are privacy
risks even if big tech companies do
make stringent efforts to protect
patient data. “Even using the most
sophisticated anonymisation
techniques, given enough data
points, it may become possible to
de-anonymise individuals in future,”
says Stender, also a fellow at the
Centre for Internet and Human
Rights at the Viadrina European
University in Frankfurt an der Oder.
Healthcare is widely considered
a “digital laggard” compared with
other sectors. But the pace of change
is quickening. Amazon and the
other major tech companies have
become rich and powerful in large
part by realising the value of and
monetising data. Critics highlight
what they say is their poor data
protection track record, and warn of
irreparable damage to our privacy
if the same approach is applied to
personal health data. Few doubt
digital disruption can bring big gains
in healthcare. Less clear is whether
Amazon and its ilk can or should be
trusted to play central roles in the
new era of data-driven healthcare
they are carving out for themselves.
The networker
Douglas Adams was right – knowledge is
meaningless without understanding
Fans of Douglas Adams’s
Hitchhiker’s Guide to
the Galaxy treasure
the bit where a group
of hyper-dimensional
beings demand that
a supercomputer tells them the
secret to life, the universe and
everything. The machine, which
has been constructed specifically
for this purpose, takes 7.5m
years to compute the answer,
which famously comes out
as 42. The computer helpfully
points out that the answer seems
meaningless because the beings
who instructed it never knew what
the question was. And the name
of the supercomputer? Why, Deep
Thought , of course.
It’s years since I read Adams’s
wonderful novel, but an article
published in Nature last month
brought it vividly to mind. The
article was about the search for
the secret to life and the role of a
supercomputer in helping to answer
it. The question is how to predict
the three-dimensional structures
of proteins from their amino-
acid sequences. The computer is
a machine called AlphaFold. And
the company that created it? You
guessed it – DeepMind.
Proteins are large biomolecules
constructed from amino acid
residues and are fundamental
to all animal life. They are, says
one expert , “the most spectacular
machines ever created for moving
atoms at the nanoscale and often
do chemistry orders of magnitude
more efficiently than anything that
we’ve built”.
But these vital biomachines
are also inscrutable because they
assemble themselves into structures
of astonishing complexity and
beauty. Understanding this “folding”
process is one of the key challenges
in biochemistry, partly because
proteins are necessary for virtually
every cell in a body and partly
because it’s suspected that misfolding
may help to explain diseases
such as diabetes, Alzheimer’s and
Parkinson’s.
So the question “How do proteins
fold?” is definitely worth asking. The
traditional way of answering it was
by lab-based x-ray crystallography,
which is expensive and slow. So
researchers have turned to building
computer models that simulate
the folding process and predict
protein structures. For some years,
specialists in the field have run
a biennial competition, the Critical
Assessment of protein Structure
Prediction (CASP), where teams
are challenged to design computer
programs that predict protein
structures from amino-acid sequences.
Two years ago , DeepMind,
having conquered the board
game Go , decided to take on the
challenge, using the deep-learning
technology it had developed for
Go. The resulting machine was,
predictably, named AlphaFold. At
the CASP meeting last December ,
it unveiled the results. Its machine
was, on average, more accurate
than the other teams and by some
criteria it was significantly ahead of
the others. For protein sequences
modelled from scratch – 43 of the
90 – AlphaFold made the most
accurate prediction for 25 proteins.
Its nearest rival only managed three.
These results seem to have
had a seismic impact on many
of the researchers present. The
atmosphere and the implications
were summed up in a remarkable
blog post entitled “What Just
Happened?” by Harvard’s
Mohammed AlQuraishi , a world
expert in the field. On the one hand,
he was judiciously cautious about
the contribution of the DeepMind
team. It represented “substantial
progress, more so than
usual”. But does that mean
the problem is solved or
nearly so? “The answer
right now,” he concludes,
“is no. We are not
there yet. However,
if the [AlphaFold-adjusted] trend...
were to continue,
then perhaps in two
CASPs, ie four years,
we’ll actually get to
a point where the problem can be
called solved.”
On the other hand, AlQuraishi
also discussed the existential angst
generated by AlphaFold in the young
scientists present at the event.
Their underlying concern, he says,
was “whether protein structure
prediction as an academic field
has a future, or whether... the best
research will from here on out get
done in industrial labs, with mere
breadcrumbs left for academic
groups”. Young biochemists will
have to decide whether it’s good for
their careers to continue working
on structure prediction. For some
(many?) of them, it may make sense
to go into industrial labs, while
for others it will mean staying in
academia but shifting to entirely
new problems that avoid head-on
competition with DeepMind.
Underpinning all this, though,
is a deeper question. Reaching
a scientific explanation of how
protein folding works will be a
gigantic intellectual task. (In 1969,
the molecular biologist Cyrus
Levinthal formulated a famous
paradox : as any protein can fold in
an astronomically large number
of ways it would take longer than
the universe has existed for every
configuration to be tested, yet most
small proteins fold spontaneously in
milliseconds. Nobody knows how.)
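The arithmetic behind Levinthal’s paradox is easy to sketch. As a rough illustration (the residue count, the conformations per residue and the sampling rate below are standard textbook assumptions, not figures from this article), a few lines of Python show why brute-force search is hopeless:

```python
# Back-of-envelope illustration of Levinthal's paradox.
# Illustrative assumptions: a 100-residue protein, 3 plausible
# conformations per residue, and a very generous sampling rate
# of 10^13 configurations tested per second.

residues = 100
conformations_per_residue = 3
sampling_rate = 1e13            # configurations tested per second
seconds_per_year = 3.15e7
age_of_universe_years = 1.38e10

# Total number of configurations: 3^100, roughly 5 x 10^47.
total_configs = conformations_per_residue ** residues

# How long an exhaustive search would take, in years.
years_to_search = total_configs / sampling_rate / seconds_per_year

print(f"configurations:    {total_configs:.2e}")
print(f"exhaustive search: {years_to_search:.2e} years")
print(f"age of universe:   {age_of_universe_years:.2e} years")
```

Even with these deliberately generous assumptions, the exhaustive search comes out around 10^27 years, vastly longer than the universe has existed, while real small proteins fold in milliseconds.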
It’s conceivable that a machine-
learning approach will soon enable
us to make accurate predictions
of how a protein will fold and this
may be very useful to know. But it
won’t be scientific knowledge. After
all, AlphaFold knows nothing about
biochemistry. We’re heading into
uncharted territory.
A representation of folding proteins as predicted by DeepMind’s AlphaFold computer.
John Naughton
What I’m reading
John Naughton’s recommendations
Sale of the century
Machine learning used to
be an exotic technology.
Now Timothy B Lee
argues in Vox that it’s
being commoditised.
That may not be as good
as it sounds.
What’s your poison?
If some algorithms
can have harmful
psychological effects
on users, shouldn’t
they be regulated
like pharmacological
drugs? There’s an
interesting argument
about it in Wired’s
opinion section.
Progressive learning
Should “progress”
be an academic
subject?
Read Diane Coyle’s thoughtful essay on the matter on the Project Syndicate website.