Popular Science, February 2019

They can tell a pedestrian who wants to cross in front
of the car, “Waiting for You.” Or they can warn them:
“Going Now/Don’t Cross.” A related strategy is in-
tended for passengers, not pedestrians: Screens in
Waymo vehicles show car occupants a simple, an-
imated version of what the autonomous vehicle is
seeing. Those displays can also show what the car is
doing, like if it’s pausing to allow a human to cross.
“Trust is the willingness to make yourself vulnera-
ble to somebody else,” Shariff says. “We engage in it
because we can pretty easily predict what the other
person will do.” All of which means that if the cars are
predictable and do what they say they will do, people
will be more likely to trust them. Sound familiar?
Communicating with the machine mind is import-
ant, but that doesn’t mean we want it to mimic exactly
how humans think and act while driving. In fact, the
promise of traveling by autonomous car is that silicon
brains won’t do dumb things such as text and drive, or
drink and drive, or rocket down the highway while up-
set after a breakup. (Cars don’t date.) “I believe that
they have the potential to be safer” than regular cars,
says Marjory S. Blumenthal, a senior policy analyst
at the RAND Corporation think tank who has re-
searched the vehicles. But she says there’s not
enough good data yet to know for sure.
One practical way to create a reputation
for safety is to start slow. The University of
Michigan’s pair of self-driving shuttles go
just 12 miles per hour. Huei Peng, a professor
of mechanical engineering who oversees the
little buses, says the research team behind the
project is building trust by not asking too much:
The predetermined route is just about a mile long, so
they’re not exactly speeding down a highway in the
snow. “We’re trying to push the envelope but in a very
cautious way,” Peng says. If all goes well, they’ll ex-
pand the route. Like other experts, Peng compares
self-driving cars to elevators: an initially frightening
technology that people eventually got used to.
Ultimately, not everyone will have to trust driver-
less cars enough to go for a ride, especially not
at first. Indeed, the public isn’t homogeneous, says
Raj Rajkumar, who directs the Metro21: Smart Cities
Institute at Carnegie Mellon University. He sees
three categories of potential users: tech skeptics,
who know that their computer crashes and worry
about getting into a vehicle controlled by one; early
adopters, who are delighted by the promise of new
tech; and people who are stressed by driving and
would rather not do it if they don’t have to. The early
adopters will buy in first, followed by the folks who
just dislike driving, and then finally the skeptics, he
argues. “So it’s a long process.” Trust grows like a
self-driving shuttle drives: slowly.

ON MARCH 18, 2018, ELAINE HERZBERG, 49, WAS
crossing a road in Tempe, Arizona, when a Volvo
SUV traveling at 39 miles per hour hit and killed her.
Although she was one of thousands of U.S. pedestri-
ans killed by vehicles every year, one distinctive—and
highly modern—aspect set her death apart: Nobody
was driving that Volvo. A computer was.
A fatality caused by a self-driving car might not
be more tragic than another, but it does encourage
the wariness many of us feel about technology mak-
ing life-and-death decisions. Just a couple of months
later, a survey by AAA revealed that 73 percent of
Americans were too scared to zip around in a totally
autonomous ride, an increase of 10 percentage points from a
similar poll taken before Herzberg’s death.
Self-driving cars are already cruising our streets,
their spinning lasers and other sensors scanning the
world around them. Some are from big companies
such as Waymo—part of Google’s parent conglomer-
ate Alphabet—or General Motors, while others are the
work of outfits you might not have heard of, including
Drive.ai and Aptiv. (Uber operated the Volvo involved
in Arizona's fatal crash and took its self-driving cars
off the roads for about nine months afterward.) But
what makes some of us so wary of these robotic
chauffeurs, and how can they earn our trust?
To understand these questions, it first
helps to consider what psychologists call
the theory of mind. Put simply, it’s the rec-
ognition that other people have brains in
their heads that are busy thinking, just like
ours (usually) are. The theory comes in handy
on the road. Before we venture into a cross-
walk, we might first make eye contact with a driver
and then think, He sees me, so I’m safe, or He doesn’t,
so I’m not. It’s a technique we likely use more than we
realize, both behind the wheel and on our feet. “We
know how other people are going to act because we
know how we would act,” explains Azim Shariff, an
associate professor of psychology at the University of
British Columbia, who has written about this issue in
the journal Nature Human Behaviour.
But you can’t make eye contact with an algorithm.
Autonomous cars generally have backup humans
ready to take control if necessary, but when the car
is in self-driving mode, the computer’s in charge.
“We’re going to have to learn a theory of the machine
mind,” Shariff says. What that means in practice is
that self-driving cars will need to provide clear sig-
nals—and not just turn signals—to let the public know
what that machine mind is planning.
One solution comes from Drive.ai, a company
running self-driving vans in Texas. The bright-orange-
and-blue vehicles have LED signs on all four sides
that respond to the environment with messages.

[Icon credit: Autonomous Car by Prosymbols from the Noun Project]