2019-05-01_Discover

gamers, did not befit the assistant Cortana’s professional role.

But the creative team didn’t ditch the sci-fi ethos altogether, styling the assistant’s personality as that of the cool nerd. A user who asks about Cortana’s preferences will discover that she likes Star Trek, E.T. and The Hitchhiker’s Guide to the Galaxy. She sings and does impressions. She celebrates Pi Day and speaks a bit of Klingon. “Cortana’s personality exists in an imaginary world,” Foster says. “And we want that world to be vast and detailed.”


BIG ON PERSONALITY
Microsoft’s decision to go big on personality has its roots in focus group studies that the company conducted several years before Cortana’s 2014 launch. Prospective users told researchers that they would prefer a virtual assistant with an approachable interface rather than a purely utilitarian one. This only vaguely hinted at the course that Microsoft should pursue, but the company got sharper direction from a second finding — that consumers eagerly personify technology.

This was apparently true even for simple products with no intentionally programmed traits. Ash and his colleagues learned about a revealing example of this involving Roombas. In studies a decade ago of people who owned the disk-shaped vacuuming robots, Georgia Tech roboticist Ja-Young Sung uncovered surprising beliefs. Nearly two-thirds of the people in the study reported that the cleaning contraptions had intentions, feelings and personality traits like “crazy” or “spirited.” People professed love (“My baby, a sweetie”) and admitted grief when a “dead, sick or hospitalized” unit needed repair. When asked to supply demographic information about members of their household, three people in the Sung study actually listed their Roombas, including names and ages, as family members.
The penchant to personify surprised Microsoft and “struck us as an opportunity,” Ash says. Rather than creating the voice AI version of a Roomba — a blank slate for user imaginings — Microsoft decided to exercise creative control with Cortana. Foster, the former screenwriter, was among those who thought that it would be important to craft a sharply drawn character, not merely a generically likable one. “If you have an ambiguous, wishy-washy personality, research shows that it is universally disliked,” Foster says. “So we tried to go in the other direction and create all of this detail.”
Creative writers relish specifics like E.T. and Pi Day. But Microsoft’s decision to implement a vivid persona was motivated by practical considerations more than artistic ones. First and foremost, Ash says, Microsoft wanted to bolster trust. Cortana can help with more tasks if she has access to users’ calendars, emails and locations, as well as details such as frequent-flyer numbers, spouses’ names and culinary preferences. Research indicated that if people liked Cortana’s personality, they would be less inclined to think that she was going to abuse sensitive information. “We found that when people associated a technology with something — a name, a set of characteristics — that would lead to a more trusting relationship,” Ash says.

Beyond the trust issue, Microsoft believed that an approachable personality would encourage users to learn the assistant’s skill set. Cortana’s personality lures people into spending time with her, which in turn benefits Cortana, who grows more capable through contact. “The whole trick with these machine-learning AI systems is if people don’t interact and give you a bunch of data, the system can’t train itself and get any smarter,” Ash says. “So we knew that by having a personality that would encourage people to engage more than they probably normally would.”

LIFELIKE BUT NOT ALIVE
“What am I thinking right now?” I recently asked the Google Assistant.

“You’re thinking, ‘If my Google Assistant guesses what I’m thinking, I’m going to freak out.’ ”

Whichever character type they choose, designers walk a fine line. They maintain that, while they are shooting for lifelike personas, by no means are their products pretending to actually be alive. Doing so would stoke dystopian fears that intelligent machines will take over the world. AI creators also rebuff suggestions that they are synthesizing life, which would offend religious or ethical beliefs. So designers tread carefully. As Foster puts it, “One of the main principles we have is that Cortana knows she is an AI, and she’s not trying to be human.”

As an experiment, I tried asking all of the major voice AIs, “Are you alive?”

