HWM Singapore – June 2019

even if you call it stupid. That means that communication is effectively one-way and there’s no possibility of establishing a relationship of any sort.

However, if you had AI with more visual and emotional presence, people would start treating it more nicely, or at least more in line with how they would treat a real person. The data collected from these exchanges would then be closer to how people interact with each other in the real world. This is great for facilitating more natural conversations, which in turn can result in higher quality interactions.
At Connectome, some of our developers come from a game development background, having worked on titles such as Final Fantasy. We’re looking to take the expertise that comes from modeling expressions on in-game characters and apply it to our VHAs.


Where are you getting your data from right now, and how else are you looking to bolster your security and privacy credentials?

We use public data shared by Google or universities. Moving forward, we also hope to improve transparency. For instance, there’s no way of knowing what data Google or Amazon is using to train their AI. It’s sort of a secret sauce for them. However, when we develop our VHAs, we will point out what data sets we use, which are in turn stored on the distributed blockchain network. This is important for accountability as well, since people will always be able to trace the data sets that were used.
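Connectome hasn’t detailed its on-chain scheme, but the core idea is simple: publish a verifiable fingerprint of each training data set so anyone can later check which sets were used. A minimal Python sketch of that idea follows; every name in it (the functions, the ledger client) is a hypothetical illustration, not Connectome’s actual implementation.

```python
import hashlib
import json
import time

def dataset_fingerprint(path: str) -> str:
    """Compute a SHA-256 content hash of a data set file.

    Anyone holding the same file can recompute the hash and
    confirm it matches the record on the ledger.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(name: str, source: str, path: str) -> str:
    """Build the JSON payload that would be written to the ledger."""
    return json.dumps({
        "dataset": name,
        "source": source,          # e.g. a public Google or university corpus
        "sha256": dataset_fingerprint(path),
        "registered_at": int(time.time()),
    }, sort_keys=True)

# Example: register a public corpus before training a VHA on it.
# record = provenance_record("open-dialogue-corpus", "https://example.org", "corpus.bin")
# ledger.append(record)   # hypothetical call to a blockchain client
```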

What applications do you foresee for VHAs?

We are currently doing some proof-of-concept work with a Japanese advertising agency, and we’re collecting data on how people react and communicate with the virtual agent. Another scenario we experimented with was a co-working space, where the VHA helped people with questions such as what the Wi-Fi password was or general directions around the office.

In the future, we hope to put a VHA in cars, which might tie in nicely with autonomous driving if it ever catches on. The VHA would be able to have a conversation with the passenger and capture their needs. It might even be able to communicate with other agents used in stores or other service providers. For example, if the rider was hungry or thirsty, the agent could talk to other agents, make a decision about where to go, and then tell the agent onboard the car to stop at a certain restaurant for a drink. Everything is moving toward greater autonomy, so VHAs could see extensive use in the service industry.
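The interview doesn’t spell out a protocol for this agent-to-agent exchange, but the flow described above (the in-car VHA polling nearby venue agents and picking one that can meet the rider’s need) might look roughly like the following sketch. All class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VenueAgent:
    """Agent representing a store or restaurant's virtual assistant."""
    name: str
    serves: set          # e.g. {"drink", "food"}
    wait_minutes: int

    def can_fulfil(self, need: str) -> bool:
        return need in self.serves

def choose_stop(need: str, nearby: list) -> "VenueAgent | None":
    """The in-car VHA asks each nearby agent whether it can meet the
    rider's need, then picks the venue with the shortest wait."""
    candidates = [a for a in nearby if a.can_fulfil(need)]
    return min(candidates, key=lambda a: a.wait_minutes, default=None)

nearby = [
    VenueAgent("cafe",  {"drink"},          5),
    VenueAgent("diner", {"food", "drink"}, 15),
]
stop = choose_stop("drink", nearby)
if stop:
    print(f"Rerouting to {stop.name}")   # the car's agent updates the route
```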

Will we ever see something like Joi from Blade Runner?

That’s the end goal. Compared to robotics, virtualization is far more cost effective and versatile. Robots are generally good for specific tasks, but a VHA will need to handle various things and adapt dynamically. At the moment, we have certain augmented reality experiences that let you see the agent through your smartphone camera, but we’re also going to do a hackathon with Microsoft in China using the HoloLens, so we could see AR glasses come into play as well. Unfortunately, that particular usage is hindered by existing cellular networks, which are not yet as fast as we need them to be. But as 5G networks roll out and speeds improve, it could soon be possible to transfer huge amounts of data in real time, making VHAs on AR glasses more feasible. When that happens, imagine if you could gather all your friends’ agents and head out together.

There’s also not going to be a one-size-fits-all approach to VHAs. There will first be a template, but we will also make available an SDK that will let developers add different looks or features to the agent to suit their needs.
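No such SDK has shipped yet, so the following is only a guess at the shape a template-plus-SDK design could take: a base agent that developers subclass to swap in their own looks and behaviors. Every name here is an assumption, not a published API.

```python
class VHATemplate:
    """Base agent template; developers override pieces via the SDK."""
    avatar = "default_model.glb"

    def respond(self, utterance: str) -> str:
        return "Sorry, I don't know how to help with that yet."

class CoworkingVHA(VHATemplate):
    """A customized agent for a co-working space."""
    avatar = "receptionist.glb"

    def respond(self, utterance: str) -> str:
        if "wi-fi" in utterance.lower():
            return "The Wi-Fi password is posted at the front desk."
        return super().respond(utterance)

agent = CoworkingVHA()
print(agent.respond("What's the Wi-Fi password?"))
```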


