Open Source For You — December 2017


The model can be compiled with compile(), as follows:

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

Keras keeps such configuration simple. For instance, to further
configure the optimiser used in the code above, the following
snippet can be used:

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9,
                                             nesterov=True))

The model can be fitted with the fit() function:

model.fit(x_train, y_train, epochs=5, batch_size=32)

In the code snippet above, x_train and y_train are NumPy
arrays. The model's performance can be evaluated as follows:

loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)

Predictions on new data can be made with the
predict() function:

classes = model.predict(x_test, batch_size=128)
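Putting the four calls together, here is a minimal, self-contained sketch of the compile/fit/evaluate/predict workflow. The synthetic 100-sample random arrays and the tiny two-layer network are stand-ins of my own, not part of the article's example:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Synthetic data: 100 samples, 20 features, 10 one-hot classes
x_train = np.random.random((100, 20)).astype('float32')
y_train = np.eye(10)[np.random.randint(0, 10, 100)].astype('float32')

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(20,)))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

# Train briefly, then evaluate and predict on the same arrays
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
loss_and_metrics = model.evaluate(x_train, y_train, batch_size=32, verbose=0)
probs = model.predict(x_train, batch_size=32, verbose=0)
print(probs.shape)  # one row of class probabilities per sample
```

Note that predict() returns the softmax probabilities; the most likely class per sample can be recovered with probs.argmax(axis=1).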

The methods of Keras layers
The important methods of Keras layers are shown in Table 1.

Method          Description
get_weights()   Returns the weights of the layer
set_weights()   Sets the weights of the layer
get_config()    Returns the configuration of the layer
                as a dictionary

Table 1: Keras layers' methods
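The three methods in Table 1 can be demonstrated on a single Dense layer. The layer sizes below are illustrative choices of my own:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(4, activation='relu', input_shape=(3,)))
layer = model.layers[0]

# get_weights(): a list of NumPy arrays [kernel, bias]
weights = layer.get_weights()
print([w.shape for w in weights])   # [(3, 4), (4,)]

# set_weights(): the arrays must match the existing shapes
layer.set_weights([np.zeros((3, 4)), np.zeros((4,))])

# get_config(): the layer configuration as a dictionary
config = layer.get_config()
print(config['units'])              # 4
```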

MNIST training
MNIST is a very popular database among machine learning
researchers. It is a large collection of handwritten digits. A
complete example for deep multi-layer perceptron training on
the MNIST data set with Keras is shown below. This source is
available in the examples folder of Keras (https://github.com/
fchollet/keras/blob/master/examples/mnist_mlp.py):

from __future__ import print_function

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

batch_size = 128
num_classes = 10
epochs = 20

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

If you are familiar with machine learning terminology,
the above code is self-explanatory.
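One preprocessing step worth spelling out is the to_categorical() call, which converts integer class labels into binary class matrices (one-hot rows). A NumPy-only sketch of the same transformation is shown below; the helper name one_hot is mine, not part of Keras:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Convert integer class labels to a binary class matrix,
    equivalent in effect to keras.utils.to_categorical()."""
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```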