

Innovation: GPS Spoof Defense

Vehicles that rely on GPS navigation are vulnerable to spoofing, the sending of phony signals to lead them off their intended course. The palm-size Pyramid GPS SP from Regulus Cyber Ltd. uses a bundle of antennas and receivers to make sure the signals it's reading are legit.

How It Works


① A user plugs the Pyramid into a car or drone between the GPS receiver and control systems, or a manufacturer builds it in.

② The device triangulates the source of the signals it's receiving, using an array of five antennas and onboard GPS receivers. Signals arriving from unexpected directions trigger an alert.

③ The Pyramid switches to navigation from a route it downloaded before starting the trip, then back to live GPS data when it no longer detects any suspicious signals.
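
In code, the detect-and-fall-back loop of steps ② and ③ might look something like the sketch below. It is purely illustrative: the 15-degree threshold, the waypoint-list route model, and every function and class name here are assumptions, not Regulus Cyber's actual firmware.

# A minimal sketch of the detect-and-fall-back logic in steps 2 and 3.
# The threshold, antenna model, and all names are illustrative assumptions.

ANGLE_TOLERANCE_DEG = 15.0  # hypothetical allowed gap between measured
                            # and expected direction of arrival

def angular_difference(a_deg, b_deg):
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def is_suspicious(measured_bearing_deg, expected_bearing_deg):
    """Step 2: flag a signal whose triangulated direction of arrival
    disagrees with where the satellite should be in the sky."""
    return angular_difference(measured_bearing_deg,
                              expected_bearing_deg) > ANGLE_TOLERANCE_DEG

class PlannedRoute:
    """Step 3 fallback: a route downloaded before the trip, modeled here
    as (seconds_from_start, (lat, lon)) waypoints."""
    def __init__(self, waypoints):
        self.waypoints = sorted(waypoints)

    def position_at(self, elapsed_s):
        # Return the last waypoint the vehicle should have reached by now.
        position = self.waypoints[0][1]
        for t, fix in self.waypoints:
            if t <= elapsed_s:
                position = fix
        return position

def navigation_fix(live_fix, route, elapsed_s, any_suspicious):
    """Use live GPS unless a signal looks spoofed; then trust the plan."""
    return route.position_at(elapsed_s) if any_suspicious else live_fix

A real device would fuse readings from many satellites with timing and signal-strength checks; the sketch shows only the shape of the decision: verify the direction of arrival, raise an alert, and trust the preloaded plan until the sky looks normal again.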

Innovators
Yonatan Zur and Yoav
Zangvil
Ages: 41 and 39
Chief executive officer
and chief technical officer
of Regulus Cyber, an
11-employee startup in
Haifa, Israel

Origin
Zur and Zangvil,
co-workers at Israeli
defense contractor Elbit
Systems Ltd., founded
Regulus at the end of 2016
with an eye to improving
commercial drone security.

Funding
The company has raised
$6.3 million from Sierra
Ventures, Canaan Partners
Israel, the Technion-Israel
Institute of Technology, and
F2 Capital.

Early tests
NASA has been testing
prototype Pyramid
technology in drones in
North Dakota. Zur says
SwissDrones Operating AG
and AT&T Inc. are also in
the middle of trial runs.

Next Steps

Zur says Regulus is in talks with makers of cars and drones, as well as operators of car and truck fleets, to conduct more trials later this year, and it aims to bring the Pyramid to market in 2019. As more autonomous vehicles hit the streets, there'll be a growing need for protection against GPS spoofing, says Jonathan Petit, senior director of research at software maker OnBoard Security Inc. Regulus says it's refining its technology to help defend the lidar, camera, and radar systems that help steer self-driving cars. —Michael Belfiore



“You can save an order-of-magnitude more lives with good planning,” says co-founder Nicole Hu. The company says it plans to introduce similar services for wildfires and floods later this year.

The One Concern software is already being used in San Francisco to plan drills, says Michael Dayton, deputy director for the city’s Department of Emergency Management. Dayton says the software can predict, for example, whether and where an earthquake is likely to cause fires, based on where that earthquake strikes and how strong it is. It can also predict the safest routes for bringing aid and other supplies into the city.

In Utah, the nonprofit Field Innovation Team is experimenting with AI software that can anticipate what people in shelters will need based on the ages and health of those most likely to lose their homes. That information can guide how the shelters are designed, what help they offer, and even what kinds of donations officials solicit from the public, says founder Desiree Matel-Anderson, former chief innovation adviser at the Federal Emergency Management Agency.

AI isn’t a panacea, especially if the relevant databases aren’t kept current. “That has to be an ongoing effort,” says Mark Ghilarducci, director of the California Governor’s Office of Emergency Services. “The last thing you want to do is make decisions on old or bad information.” Given the power of such systems, privacy is another concern. Just as important, the software can often be a black box that’s difficult to hold accountable, says Sarah Miller, chair of the Emerging Technology Caucus for the International Association of Emergency Managers. “If the AI somehow accidentally decides that those who have higher incomes are more worthy of saving, then it might redirect resources accordingly, and we might not know that,” she says.
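
One way to surface the disparity Miller describes is the kind of “regular testing” Geospiza’s Tuneberg invokes below: routinely compare a model’s outputs across groups. A minimal sketch, assuming scored records tagged with an income bracket; the field names, threshold, and data are all hypothetical.

# A hedged sketch of one such audit: check whether a model's priority
# scores differ systematically across groups. Records, field names, and
# the flag threshold are all made up for illustration.

from statistics import mean

def audit_by_group(records, score_fn, group_key="income_bracket",
                   max_gap=0.1):
    """Average the model's score per group and flag large gaps."""
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(score_fn(record))
    averages = {g: mean(s) for g, s in groups.items()}
    gap = max(averages.values()) - min(averages.values())
    return averages, gap > max_gap  # True = disparity worth investigating

# Illustrative run with invented records and a stand-in scoring model:
records = [
    {"income_bracket": "low", "aid_priority": 0.80},
    {"income_bracket": "low", "aid_priority": 0.70},
    {"income_bracket": "high", "aid_priority": 0.75},
    {"income_bracket": "high", "aid_priority": 0.72},
]
print(audit_by_group(records, lambda r: r["aid_priority"]))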
Hu says One Concern will never sell or share any personal information, while Tuneberg says Geospiza ensures that such data are available only to people with “an appropriate and valid need to know.” She says that in the unlikely event that AI inadvertently privileges some groups over others, as it can in the risk assessment software sometimes used in criminal sentencing, developers would be able to spot those outcomes through regular testing. “Minorities and people with disabilities are ignored by the system through human approaches every single day,” she says. “This idea that AI is going to do worse by them, I would say, is ridiculous.” —Christopher Flavelle

THE BOTTOM LINE Emergency managers in several cities and counties are using AI services to help plan their disaster response, but they remain largely unproven.