The Connect Between Deep Learning and AI

Deep learning is a new area of machine learning
research, which has been introduced with the objective
of moving machine learning closer to one of its
original goals: artificial intelligence (AI). Deep learning is the sub-field of machine learning concerned with algorithms whose structure and function are inspired by the neural networks of the human brain. It is the
work of well-known researchers like Andrew Ng, Geoff
Hinton, Yann LeCun, Yoshua Bengio and Andrej Karpathy
which has brought deep learning into the spotlight. If you
follow the latest tech news, you may have heard how important deep learning has become to big companies. Consider the following:
• Google buying DeepMind for US$ 400 million
• Apple and its self-driving car
• NVIDIA and its GPUs
• Toyota’s billion dollar AI research investments
All of this tells us that deep learning is really
gaining in importance.

Neural networks
The first thing you need to know is that deep learning is about
neural networks. The structure of a neural network is like any
other kind of network; there is an interconnected web of nodes,
which are called neurons, and there are edges that join them
together. A neural network’s main function is to receive a set
of inputs, perform progressively complex calculations, and
then use the output to solve a problem. This series of events,
starting from the input, where each activation is sent to the next
layer and then the next, all the way to the output, is known as
forward propagation, or forward prop.
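
To make forward prop concrete, here is a minimal sketch in Python with NumPy. The two-layer shape, the sigmoid activation and the random weights are illustrative assumptions rather than anything prescribed above; the point is only that each layer’s output becomes the next layer’s input.

import numpy as np

def sigmoid(z):
    # Squash each value into the range (0, 1); one common choice of activation.
    return 1.0 / (1.0 + np.exp(-z))

def forward_prop(x, weights, biases):
    # Send the input through every layer in turn and return the final output.
    activation = x
    for W, b in zip(weights, biases):
        # Each layer takes a weighted sum of the previous activations,
        # adds a bias, and applies the activation function.
        activation = sigmoid(W @ activation + b)
    return activation

# Illustrative network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((1, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(1)]

print(forward_prop(np.array([0.5, -1.2, 3.0]), weights, biases))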

Deep learning is a sub-field of machine learning concerned with algorithms. Machine learning is a kind of artificial intelligence that gives computers the ability to learn without being explicitly programmed.

The first neural nets were born out of the need to address
the inaccuracy of an early classifier, the perceptron. It was
shown that by using a layered web of perceptrons, the
accuracy of predictions could be improved. This new breed of
neural nets was called a multi-layer perceptron or MLP.
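
To see what a single perceptron is, here is a tiny sketch; the weights are made up for illustration and happen to implement logical AND. An MLP simply stacks layers of such units (usually with smoother activations), which is what lets it capture patterns a lone perceptron cannot.

import numpy as np

def perceptron(x, w, b):
    # A perceptron fires (outputs 1) when the weighted sum of its inputs,
    # plus a bias, crosses zero; otherwise it outputs 0.
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative weights: this particular choice happens to compute logical AND.
w = np.array([1.0, 1.0])
b = -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, perceptron(np.array(x), w, b))
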
You may have guessed that the prediction accuracy of a
neural net depends on its weights and biases. We want the
accuracy to be high, i.e., we want the neural net to predict a
value that is as close to the actual output as possible, every
single time. The process of improving a neural net’s accuracy
is called training, just like with other machine learning
methods. Here’s that forward prop again – to train the net, the
output from forward prop is compared to the output that is
known to be correct, and the cost is the difference of the two.
The point of training is to make that cost as small as possible,
across millions of training examples. Once trained well, a
neural net has the potential to make accurate predictions each
time. This is a neural net in a nutshell (refer to Figure 1).
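
A rough sketch of that training loop is shown below. The single sigmoid neuron, the mean-squared cost and plain gradient descent are illustrative choices made for this sketch; real networks have many layers and use backpropagation to compute the gradients, but the idea of repeatedly comparing predictions with known outputs and nudging the weights to shrink the cost is the same.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny illustrative dataset: learn logical OR from two inputs.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
w = rng.standard_normal(2)
b = 0.0
learning_rate = 1.0

for epoch in range(10000):
    # Forward prop: the network's current predictions.
    pred = sigmoid(X @ w + b)
    # Cost: how far the predictions are from the known correct outputs.
    cost = np.mean((pred - y) ** 2)
    # Gradient of the mean-squared cost with respect to w and b,
    # then a small step in the direction that reduces the cost.
    grad = 2 * (pred - y) * pred * (1 - pred)
    w -= learning_rate * (X.T @ grad) / len(y)
    b -= learning_rate * grad.mean()

print(cost)                           # far smaller than when training began
print(np.round(sigmoid(X @ w + b)))   # rounded predictions should now match y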

Three reasons to consider deep learning
When the patterns get really complex, neural nets start to
outperform all of their competition. Neural nets truly have the
potential to revolutionise the field of artificial intelligence.
We all know that computers are very good with repetitive
calculations and detailed instructions, but they’ve historically
been bad at recognising patterns. Thanks to deep learning,
this is all about to change. If you only need to analyse
simple patterns, a basic classification tool like an SVM or
logistic regression is typically good enough. But when your
data has tens of different inputs or more, neural nets start to
win out over the other methods.
