HWM Singapore – June 2019

LEARN

HERE’S A LOOK AT THE TECHNOLOGIES GOOGLE IS
EMPLOYING TO MITIGATE THESE CONCERNS.


REMOVING BIAS
For all the wonders of AI, it’s far from
perfect, and bias can be a problem
when training a model. As we come
to rely on AI to make decisions, it’s
becoming increasingly important
to understand how it arrived at a
certain conclusion. Big players like
IBM are already pushing for greater
transparency in terms of how these
algorithms work. There are also calls
for “explainable” AI, which is basically
AI that is designed to be transparent
and interpretable from the outset.
This is crucial for accountability,
especially when AI is entrusted with
critical systems like those in a
self-driving car.
It’s still a popular opinion that deep
neural networks are virtual mystery
boxes with inscrutable workings,
but Google is trying to change that
with a technique called TCAV. Short
for Testing with Concept Activation
Vectors, TCAV works by letting
developers inspect their algorithms
and see the concepts that the AI is
associating with their samples.
Take image recognition for
example. Many machine learning
algorithms operate using low-level
features such as edges and colors in
individual pixels, which is a far cry
from how humans process images.
TCAV operates at a higher level with
big-picture concepts, so humans can
actually comprehend it. For instance, a
model that is trained to detect pictures
of doctors could mistakenly assume
that being male was an important
characteristic because the training
data included more images of men.
With TCAV, researchers could
identify this bias in their models
and correct for it.
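The TCAV recipe described above can be sketched with synthetic data: train a linear classifier to separate “concept” activations from random ones, take the direction normal to its decision boundary as the Concept Activation Vector, then score what fraction of examples are sensitive to that direction. Everything below is an illustrative stand-in (random numbers instead of real network activations and gradients), not Google’s actual tcav library:

```python
# A minimal sketch of the TCAV idea, using NumPy and scikit-learn.
# Real TCAV inspects activations from a layer of a trained network;
# here the "activations" and "gradients" are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are layer activations for images containing a concept
# (e.g. "male") versus random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# 1. Train a linear classifier to separate concept vs. random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X, y)

# 2. The Concept Activation Vector (CAV) is the unit direction orthogonal
#    to the decision boundary, pointing toward the concept.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. The TCAV score is the fraction of class examples whose prediction
#    gradient has a positive directional derivative along the CAV.
#    We fake the gradients here; a real run backpropagates through the model.
grads = rng.normal(size=(40, 8)) + 0.5 * cav
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score for the concept: {tcav_score:.2f}")
```

A score near 1.0 would suggest the model leans heavily on the concept (such as “male” in the doctor example), flagging a potential bias to correct.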

TRAINING AI WHILE
KEEPING YOUR DATA SAFE
One of the biggest issues with training
AI is the amount of data it needs to
become effective. Inevitably, this
means concerns about data sharing
and what’s happening to your
information as it bounces around the
brain of some virtual intelligence
in the cloud. Google thinks it has
a possible solution to this with
federated learning, a new technique
that lets scientists train AI models
without raw data ever leaving
your device.
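The approach is commonly implemented as federated averaging: each device fits the current model to its own data and sends back only the updated weights, which a server averages into the next global model. Here is a toy sketch with NumPy and a one-parameter linear model; all names and data are illustrative assumptions, not Google’s actual API:

```python
# A minimal sketch of federated averaging. Each client's private data
# stays inside local_update; only model weights travel to the server.
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client: a few steps of linear-regression gradient descent
    on data that never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the updated weights are shared, never X or y

# Each of four "devices" holds its own private dataset drawn from y = 3x.
clients = []
for _ in range(4):
    X = rng.normal(size=(20, 1))
    clients.append((X, 3.0 * X[:, 0]))

# Server loop: broadcast weights, collect client updates, average them.
global_w = np.zeros(1)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)

print(f"learned weight: {global_w[0]:.2f}")  # should approach 3.0
```

The shared model converges even though the server never sees a single raw data point, which is exactly the decoupling the article describes.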
In the case of smartphones, it
allows devices to collaboratively learn
a shared prediction model while
keeping all the training data on the
device itself, effectively decoupling
an algorithm’s training from the




[Illustration: doctor model with TCAV]

