Flight International 09Mar2020


DEFENCE




Air force wants artificial intelligence to swiftly identify enemy combatants from MQ-9 Reaper footage

US Air Force

The US Department of Defense (DoD) has outlined five principles to help keep artificial intelligence (AI) technology from running amok.
Published on 25 February, the definitions provide an ethical framework to guide the US defence industry's development of AI, and its military's use of such emerging technologies.

The DoD believes it must pursue AI or risk being leapfrogged by potential adversaries China and Russia, which could use the technology to prevail on future battlefields by, for example, using software to observe and react to the USA's moves faster.
"The stakes for AI adoption are high. AI is a powerful emerging and enabling technology that is rapidly transforming culture, society and eventually even warfighting," says US Air Force (USAF) Lieutenant General John Shanahan, director of the DoD's Joint Artificial Intelligence Center. "Whether it does in a positive or negative way depends on our approach to adoption and use. The complexity and the speed of warfare will change as we build an AI-ready force of the future."

The DoD's principles state that the use of AI should be responsible, equitable, traceable, reliable and governable.


NEW CAPABILITIES
"DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment and use of AI capabilities," it says.

The department "will take steps to minimise unintended bias in AI capabilities", which should be "developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable, including with transparent and auditable methodologies, data sources, and design procedure and documentation".


DEFINITION GARRETT REIM LOS ANGELES


USA targeting battlefield ethics for AI


Principles of technology development set out as Washington works to maintain advantage over potential adversaries


AI should have "explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their life-cycles".

Regarding governance, the DoD says it will "design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour".

Acknowledging that its current broad principles need further refinement, the DoD notes that this is because much of how AI will be employed remains unknown. Specifically, the US government wants to further develop procurement guidance, technological safeguards, organisational controls, risk mitigation strategies and training measures.

Two of the USAF's notable AI projects are the Skyborg programme, which seeks to develop software to autonomously control attritable unmanned air vehicles (UAVs), and Project Maven. A computer vision development project, the latter is intended to help commanders quickly identify enemy combatants amid a torrent of surveillance video footage gathered by UAVs such as the General Atomics Aeronautical Systems MQ-9 Reaper.

China has also made AI advancement a top priority. "Their intent is to move fast and aggressively, with high levels of investment and extraordinary levels of people," says Shanahan. Beijing wants to surpass US capability in the area by 2030, he notes.
The USA currently leads thanks to factors including its technology industry, academic institutions and culture of innovation, says Shanahan. However, he cautions that while "the United States has deep structural advantages, those won't be in place forever".

The DoD is concerned that China and Russia will race to implement shoddy AI in military service without giving enough thought to unintended consequences or the risk of harming bystanders. "What I worry about with both countries is they are moving so fast that they are not adhering to what we would call mandatory principles of AI adoption," Shanahan says. "We will not field an algorithm until we feel it meets our performance and standard."
While soldiers and commanders are held responsible for decisions on the battlefield, AI's ability to act independently could bring repercussions for technology designers and buyers – making software developers and government procurement officials accountable.

DATA QUALITY

"The real hard part is taking the AI delivery pipeline and understanding where those ethics principles need to be applied," says Shanahan. For instance, he says the military and developers must consider the source and quality of data used to develop AI.

"Is the data representative of a very small sample size, as opposed to a very diverse set of data that would be necessary to develop a high-performing algorithm? [It goes] all the way to things like test and evaluation," he notes.

The USA maintains its AI technologies must have a "kill switch" to allow overseeing operators to stop errant software from doing unintended harm, and may even need another layer of software to perform further supervision. "Some of those principles will need unique solutions to bring them to life," says Shanahan. ■
See Cover Story P29

"AI is a powerful emerging technology that is transforming culture, society and even warfighting"
Lieutenant General John Shanahan
Director, Joint Artificial Intelligence Center, US Department of Defense