net - UK (2020-03)


VOICES
Interview


Charlie Gerard is fascinated with
human-computer interaction, in
particular how you can use web
technologies in unusual ways outside of
the browser. She realised that if a device has an API or a JavaScript framework, you don’t have to use it as it was intended; you can hack it and build your own prototypes. For her
first side project, she controlled a Sphero
robotic ball with a Leap Motion and a
sprinkle of Node.js in order to make it
move with hand gestures. It eventually
led her to wow conference audiences
around the world with a talk about
controlling things with the brain using JS.
“I was researching other devices I could
buy and came across brain sensors,”
Gerard remembers. “I became interested
in neurotechnology and it really opened

up a world for me. I had absolutely no idea
that you could just buy a brain sensor and
that as a dev I could build something with
it. As soon as I discovered that, I had to
tell people.”
Gerard learned that some sensors like
the NeuroSky give you access to raw data,
which can be used for machine learning.
Others give you access to mental
commands or facial expressions, making
it possible to interact with a web interface
through eye movements. At the time there
wasn’t a JavaScript framework to use with
the Emotiv Epoc sensor, so to build an
interface, Gerard wrote her own open-
source JavaScript framework, based on
the sensor’s C++ SDK, as well as a Node.js
add-on. It enabled other developers to
play around with this technology without
having to learn C++ or Java. The

experiments she built with it include a
brain keyboard (look right or left to
highlight a letter, blink to select and
display it in the input field), pushing a
three.js 3D cube and even flying a mini
drone with mental commands.
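
The pattern her framework enables is event-driven: a Node.js add-on wraps the sensor’s C++ SDK and surfaces expressions as events that JavaScript can subscribe to. As a rough sketch only (the package name and event names below are hypothetical, not the actual API of Gerard’s framework), a brain keyboard like the one described above might look like this:

// Hypothetical Node.js API over a brain sensor's C++ SDK;
// names are illustrative, not Gerard's actual framework.
const sensor = require('brain-sensor'); // hypothetical package

const letters = ['A', 'B', 'C', 'D']; // letters the keyboard cycles through
let highlighted = 0;
let typed = '';

sensor.connect();

// Look left or right to move the highlight, blink to select the letter:
// the same interactions as the brain keyboard described above.
sensor.on('lookLeft', () => {
  highlighted = Math.max(0, highlighted - 1);
});
sensor.on('lookRight', () => {
  highlighted = Math.min(letters.length - 1, highlighted + 1);
});
sensor.on('blink', () => {
  typed += letters[highlighted];
  console.log('typed so far:', typed); // in the real thing, update the input field
});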
Gerard became more interested in
machine learning and found it wasn’t as
scary as it seemed. “At first I thought only
data scientists would understand it,” she
explains. “But understanding the basics
doesn’t actually take that long. It’s about
coming up with an idea and starting with
a small goal, then diving a bit deeper.”
Front-end developers can try out machine learning through open-source JavaScript libraries such as ml5.js and Google’s TensorFlow.js. The latter currently offers three approaches: using a pre-trained model; transfer learning, where you retrain an existing model with your own data; and defining, training and running a model entirely in the browser, all in JavaScript (for a more detailed explanation, it’s worth having a read of Gerard’s Smashing Magazine article: http://www.smashingmagazine.com/2019/09/machine-learning-front-end-developers-tensorflowjs/).
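
Using a pre-trained model is the gentlest entry point. As a minimal sketch, not one of Gerard’s own examples, here is the off-the-shelf MobileNet image classifier published for TensorFlow.js classifying an image in the browser:

// Classify an image in the browser with a pre-trained MobileNet model.
// Requires the @tensorflow/tfjs and @tensorflow-models/mobilenet packages.
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classify() {
  const img = document.getElementById('photo'); // an <img> element on the page
  const model = await mobilenet.load();          // downloads pre-trained weights
  const predictions = await model.classify(img); // top matches with probabilities
  console.log(predictions); // e.g. [{ className: 'tabby', probability: 0.8 }, ...]
}

classify();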
“Go in steps, use a pre-trained model
first and then combine it with custom
training data before doing everything in
the browser,” Gerard advises. “It requires
a bit of knowledge and the training can
take a lot of time. Doing it all in JavaScript is interesting if you want to visualise the training process and how the accuracy of the predictions changes over time, but you would probably fall back to Python if you were building something for production because it’s faster. So it’s still
very early stages for web developers. Most
of them are building experiments to learn
more about machine learning but now
more and more people are joining because
they realise it’s doable in JavaScript.”
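
At the far end of the progression she describes, defining, training and running a model entirely in the browser can be as small as this generic sketch, which fits a tiny model to made-up data with TensorFlow.js:

// Define, train and run a model entirely in the browser with TensorFlow.js.
import * as tf from '@tensorflow/tfjs';

async function run() {
  // A one-layer model learning y = 2x from a handful of examples.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

  const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
  const ys = tf.tensor2d([2, 4, 6, 8], [4, 1]);

  // Training happens client-side; you could chart the loss per epoch here.
  await model.fit(xs, ys, { epochs: 200 });

  // Run a prediction with the freshly trained model.
  model.predict(tf.tensor2d([5], [1, 1])).print(); // approximately 10
}

run();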
For one of her creative coding experiments, Gerard used PoseNet, a pose-detection model that runs on TensorFlow.js and recognises body positions in images and videos, to play the VR game Beat Saber in the browser with just her hand movements – without the need for expensive equipment. She also prototyped a gesture recognition system.
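
A minimal sketch of the pose-detection half of an experiment like the Beat Saber hack, using PoseNet’s JavaScript API to track wrist positions from a webcam, might look like this (the wiring into the game itself is omitted):

// Track wrist positions from a webcam feed with PoseNet on TensorFlow.js.
// Requires the @tensorflow/tfjs and @tensorflow-models/posenet packages.
import * as posenet from '@tensorflow-models/posenet';

async function trackHands() {
  const video = document.getElementById('webcam'); // a playing <video> element
  const net = await posenet.load(); // load the model once, not per frame

  async function onFrame() {
    // Estimate a single pose; flip because webcams mirror the user.
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

    // Keypoints cover wrists, elbows, shoulders etc., each with a confidence score.
    const wrists = pose.keypoints.filter(
      (k) => k.part === 'leftWrist' || k.part === 'rightWrist'
    );
    console.log(wrists); // feed these coordinates into the game logic

    requestAnimationFrame(onFrame); // keep tracking frame by frame
  }

  onFrame();
}

trackHands();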

Photo: Nordicjs