Implementing a Natural Language Classifier in iOS with Keras + Core ML

(Jeff_L) #1
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(50, input_dim=len(train_x[0]), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(len(train_y[0]), activation='softmax'))
model.summary()
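The `train_x`/`train_y` arrays fed into the model above come from the one-hot embedding preparation described later in this article. As a minimal sketch of that idea (using a made-up two-intent corpus, not the real SwiftNLC dataset), the training matrices might be built like this, assuming a simple bag-of-words scheme:

```python
# Hypothetical intents/utterances corpus for illustration only.
corpus = {
    "greeting": ["hello there", "hi"],
    "goodbye":  ["bye", "see you later"],
}

# Vocabulary over every tokenized utterance, and the sorted intent labels.
words = sorted({w for utterances in corpus.values()
                  for u in utterances for w in u.split()})
intents = sorted(corpus)

train_x, train_y = [], []
for intent, utterances in corpus.items():
    for u in utterances:
        tokens = u.split()
        # Bag-of-words vector: 1 if the vocabulary word occurs in the utterance.
        train_x.append([1 if w in tokens else 0 for w in words])
        # One-hot intent label.
        train_y.append([1 if i == intent else 0 for i in intents])
```

With this encoding, `len(train_x[0])` is the vocabulary size and `len(train_y[0])` is the number of intents, which is exactly what the input and output layer sizes above depend on.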

Training the model is even simpler: since, by definition, the intents/utterances dataset used per input is very limited, there is no room at all for creating validation and testing sets. It basically trains on the entire dataset for a sufficient number of epochs to obtain the maximum possible accuracy.


import numpy as np

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(np.array(train_x), np.array(train_y), epochs=400)

Once the Keras/TensorFlow model is trained, it can easily be exported to Core ML using Apple's coremltools Python library:


import coremltools

coreml_model = coremltools.converters.keras.convert(model,
    input_names="embeddings", output_names="entities")
coreml_model.save('SwiftNLC.mlmodel')

Core ML Swift Wrapper and Word Embedding Preparation
Once the exported Core ML model is imported into a client app to predict the intent of a new utterance, that utterance will need to be encoded using the same one-hot embedding logic used for preparing the training dataset.
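The client-side code is Swift, but the encoding logic is language-independent. Sketched in Python (with a hypothetical vocabulary standing in for the one captured at training time, and assuming a bag-of-words one-hot scheme), it might look like:

```python
# Hypothetical vocabulary captured at training time (illustrative only).
words = ["bye", "hello", "hi", "later", "see", "there", "you"]

def embed(utterance):
    """One-hot bag-of-words vector over the training vocabulary.
    Out-of-vocabulary tokens are simply dropped."""
    tokens = utterance.lower().split()
    return [1.0 if w in tokens else 0.0 for w in words]

embedding = embed("Hello you")
# `embedding` is what gets passed to the Core ML model's
# "embeddings" input named in the converter call above.
```

The key point is that the vocabulary and word order must match the training pipeline exactly, otherwise the model receives vectors whose positions no longer mean what it learned.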
