Science, 6 December 2019, Vol. 366, Issue 6470

POLICY FORUM: BIOMEDICAL TECHNOLOGY REGULATION

Algorithms on regulatory lockdown in medicine

Prioritize risk monitoring to address the “update problem”
By Boris Babic¹, Sara Gerke², Theodoros Evgeniou¹, I. Glenn Cohen³


As use of artificial intelligence and machine learning (AI/ML) in medicine continues to grow, regulators face a fundamental problem: After evaluating a medical AI/ML technology and deeming it safe and effective, should the regulator limit its authorization to market only the version of the algorithm that was submitted, or permit marketing of an algorithm that can learn and adapt to new conditions? For drugs and ordinary medical devices, this problem typically does not arise. But it is this capability to continuously evolve that underlies much of the potential benefit of AI/ML. We address this “update problem” and the treatment of “locked” versus “adaptive” algorithms by building on two proposals suggested earlier this year by one prominent regulatory body, the U.S. Food and Drug Administration (FDA) (1, 2), which may play an influential role in how other countries shape their associated regulatory architecture. The emphasis of regulators needs to be on whether AI/ML is overall reliable as applied to new data and on treating similar patients similarly. We describe several features that are specific to and ubiquitous in AI/ML systems and are closely tied to their reliability. To manage the risks associated with these features, regulators should focus particularly on continuous monitoring and risk assessment, and less on articulating ex-ante plans for future algorithm changes.
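What continuous monitoring might amount to in practice can be made concrete. Below is a minimal sketch, in Python, of a rolling performance check that flags a deployed model for review when its recent accuracy degrades; the window size and alert threshold are illustrative assumptions, not regulatory guidance.

from collections import deque

class PerformanceMonitor:
    """Tracks a deployed model's recent accuracy on labeled outcomes."""

    def __init__(self, window=500, alert_threshold=0.85):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.alert_threshold = alert_threshold

    def record(self, prediction, actual):
        # Log whether the model's prediction matched the observed outcome.
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_review(self):
        # Flag only once the rolling window is full, to avoid noisy alerts.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.alert_threshold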

THE REGULATORY DESIGN PROBLEM
One of the key advantages of AI/ML is that it can enable a “learning healthcare system,” wherein the boundaries between research and practice are regarded as porous (3). Once the AI/ML is deployed, it can learn and thereby alter its performance and behavior, much the way a medical resident learns on the job. But this poses a difficult regulatory design challenge. Consider two polar approaches to the update problem:
One pole would be for a regulator to permit marketing of only a locked algorithm and require any change to the algorithm to undergo a completely new premarket review. Such an approach has several drawbacks. Suppose an algorithm for analyzing results of mammograms and making recommendations on breast cancer risk receives marketing authorization (4). Suppose the training data were underinclusive of African-American women, who tend to have differences in breast density from Caucasian women. The algorithm would thus produce recommendations ill-suited for that population. As the AI/ML system is used in clinical settings that include more African-American women, it becomes possible to more accurately estimate the parameters used to predict breast cancer in this subpopulation when making recommendations.
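To make such parametric re-estimation concrete, here is a minimal sketch in Python using scikit-learn's SGDClassifier, whose partial_fit method tunes the existing parameters on new batches of data; the synthetic features, labels, and estimator choice are illustrative assumptions, not the algorithm described above.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial training data, underinclusive of the subgroup of interest.
X_initial = rng.normal(size=(1000, 5))           # e.g., imaging-derived features
y_initial = (X_initial[:, 0] > 0.5).astype(int)  # 1 = elevated risk (synthetic)

model = SGDClassifier(random_state=0)
model.fit(X_initial, y_initial)

# In deployment, batches arrive that better represent the subgroup;
# partial_fit re-tunes the same parameters without a full retrain.
for _ in range(10):
    X_batch = rng.normal(loc=0.3, size=(50, 5))  # shifted feature distribution
    y_batch = (X_batch[:, 0] > 0.5).astype(int)
    model.partial_fit(X_batch, y_batch)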
Although improvements in prespecified testing of subgroups might provide some benefit in avoiding this problem, in some situations, relevant subpopulations may not be known ex-ante. For example, in conducting HIV vaccine studies, researchers did not (and perhaps could not) know ex-ante that in a particular trial, the vaccine might increase rather than reduce HIV infection risk for “uncircumcised men who both had sex with men (MSM) and had high titers of preexisting antibodies against Ad5” (5). Prespecified testing is unlikely to capture these kinds of issues. Going forward, AI/ML may be able to identify such subpopulations and even develop customized models for different ones (some of which are only possible to identify after using AI/ML on lots of patients). Such customization would be health-promoting, but if another premarket review is needed, the update may never occur: the maker may not have a financial incentive to pursue the cost of another review and might also worry about the message it might send about the quality of its existing algorithm.
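One way such subpopulation discovery could be implemented is sketched below: cluster patients on their features, then fit a customized model for each discovered cluster. The clustering method, per-cluster estimator, and synthetic data are illustrative assumptions, not the authors' proposal.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))                 # patient features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcomes (synthetic)

# Step 1: surface candidate subpopulations from the data itself.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

# Step 2: fit a customized model for each discovered subgroup.
subgroup_models = {}
for c in np.unique(clusters):
    mask = clusters == c
    if len(np.unique(y[mask])) > 1:  # need both outcomes present to fit
        subgroup_models[c] = LogisticRegression().fit(X[mask], y[mask])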
The opposite pole would be to treat the initial marketing authorization as permitting the AI/ML maker to update the algorithm without any further regulatory review. Such updates can be either of the algorithm itself (“algorithm updates”), for example, replacing a linear ML model with a polynomial one, or of the algorithm’s parameters (“parametric updates”), which may be continuously tuned as the system is applied to new data in practice. This approach is likewise perilous. Parametric updates are at the core of modern AI/ML systems; they take place almost continuously, without human intervention.
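The distinction between the two kinds of updates can be illustrated in code. Below is a minimal sketch in Python with scikit-learn: a parametric update re-tunes the coefficients of the submitted model, while an algorithm update swaps the model class itself, here from linear to polynomial; the data and estimators are illustrative assumptions.

import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 3))  # synthetic inputs
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=500)

# The version submitted for review: a linear ML model.
linear = SGDRegressor(random_state=2).fit(X, y)

# Parametric update: same model class, coefficients tuned on newer data.
linear.partial_fit(X[:100], y[:100])

# Algorithm update: replace the linear model with a polynomial one.
polynomial = make_pipeline(PolynomialFeatures(degree=2),
                           SGDRegressor(random_state=2)).fit(X, y)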

¹INSEAD, Singapore and Fontainebleau, France. ²The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School; The Project on Precision Medicine, Artificial Intelligence, and the Law, Cambridge, MA, USA. ³Harvard Law School, Cambridge, MA, USA. Email: [email protected]