
(C. Jardin)

120 SEPTEMBER 2019 | COMPUTER SHOPPER | ISSUE 379


Because networks with just these two layers are limited in their capabilities, most have at least one additional layer of neurons – called hidden layers – between the input and the output layer, as shown in the diagram on the opposite page.

As an example of how this sort of network could be used, imagine a scaled-up version of this network with many more inputs and 10 outputs. Now, if those inputs were connected to individual pixels of a digitised handwritten digit, and the outputs represented the figures 0-9, the network could be used for handwriting recognition. You may wonder how a network to do this would differ from any other ANN with the same number of neurons, and the answer is in the value of the weightings on each of the various neurons' inputs.
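The scaled-up network described above can be sketched in a few lines of Python. The layer sizes here (784 input pixels, as for a 28x28 digitised digit, and 30 hidden neurons) are illustrative assumptions rather than figures from the article, and the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 784 pixel inputs, one hidden layer, 10 outputs (0-9).
n_inputs, n_hidden, n_outputs = 784, 30, 10

# The network's behaviour lives entirely in these weights and biases.
W1 = rng.normal(0, 0.1, (n_hidden, n_inputs))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_outputs, n_hidden))
b2 = np.zeros(n_outputs)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(pixels):
    """Propagate one digitised digit through the network."""
    hidden = sigmoid(W1 @ pixels + b1)   # input layer -> hidden layer
    output = sigmoid(W2 @ hidden + b2)   # hidden layer -> output layer
    return output                        # ten values, one per figure 0-9

digit = rng.random(n_inputs)   # stand-in for a scanned handwritten digit
scores = forward(digit)
print(scores.argmax())         # the output the network rates most likely
```

With random weights the answer is meaningless; it is the training process, described next in the article, that gives the weights their values.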
Programming an ANN, therefore, involves setting up each of those weights, and this is usually done by the process of training. To cut a long story short, this involves inputting test data to the network – for example, lots of handwritten figures – and, for each such figure, adjusting the weights to minimise the error; that's the difference between the outputs produced by the ANN and the known identity of each digit.
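That adjust-to-minimise-the-error idea can be sketched with plain gradient descent. This is a minimal single-layer illustration with made-up sizes and learning rate, not any specific training algorithm from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (10, 784))   # one weight per output/input pair

def train_step(pixels, target, lr=0.001):
    """Nudge the weights to shrink the error between the network's
    output and the known identity of the digit (the target)."""
    global W
    output = W @ pixels                  # the network's current guess
    error = output - target              # difference from the known answer
    W -= lr * np.outer(error, pixels)    # adjust weights against the error
    return 0.5 * np.sum(error ** 2)      # squared error, for monitoring

pixels = rng.random(784)                 # stand-in for one handwritten figure
target = np.zeros(10)
target[3] = 1.0                          # suppose this example is a "3"
losses = [train_step(pixels, target) for _ in range(50)]
print(losses[0] > losses[-1])   # → True: the error shrinks step by step
```

In practice this is repeated over thousands of examples, and the learning rate must be kept small or the error grows instead of shrinking.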
Finally, if you're wondering whether ANNs are analogue or digital, they could be either. Generally speaking, however, the signals in the digital variety are spikes, as opposed to the gradually varying voltages in analogue ANNs. This digital type of ANN is often referred to as a spiking neural network, or SNN.
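One common way to model a spiking neuron in software is the leaky integrate-and-fire model. The sketch below, with illustrative threshold and leak values, shows how an SNN's signal is a train of discrete spikes rather than a gradually varying voltage:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks,
    integrates each input, and fires (then resets) at the threshold.
    Returns the time steps at which spikes occur."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current   # leak a little, then add the input
        if v >= threshold:       # threshold crossed: emit a spike...
            spikes.append(t)
            v = 0.0              # ...and reset the potential
    return spikes

# A steady input produces a regular train of discrete spikes.
print(simulate_lif([0.3] * 20))   # → [3, 7, 11, 15, 19]
```

Information is carried in the timing and rate of those spikes, much as in biological neurons.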

Introducing SpiNNaker
While many of the major commercial players in the world of computing have ongoing research programmes in this area, it's interesting to note some ground-breaking research that's taking place here in the UK. Called SpiNNaker, which stands for Spiking Neural Network Architecture, this research is taking place at the University of Manchester, which is, of course, where the computer that paved the way to the digital revolution was developed in 1948.

Looking at the specification of the SpiNNaker chip, it's clear that it owes a lot to conventional computer architecture rather than the sort of network that's embodied in the brain. Explaining why this approach had been taken rather than attempting to replicate biology, Dr Simon Davidson, of the University's School of Computer Science, alludes to the fact that we're not yet in the endgame.
"The short answer is that research on artificial neural networks is still in its infancy and there is a lot of experimentation going on. It's much easier to do this in software, where one can tweak a learning rule easily, than it is to do using an analogue circuit," Davidson says, before explaining where Manchester's solution fits in.

"This is where platforms such as SpiNNaker offer a great advantage for the researcher. SpiNNaker provides the parallelism to simulate networks of millions of neurons in real time, but any aspect of the network (the type of neuron, their connectivity and the learning rules governing their interaction) can be readily changed. Once the researcher understands what they require from their network, that knowledge can be embedded into a custom piece of hardware. But the ability to tinker with any of the network's parameters and algorithms is vital to the design process, and that is where SpiNNaker has something to offer," he notes.
However, SpiNNaker is a lot more than an ordinary computer, albeit one with lots of cores, and its architecture also differs from that of many of today's supercomputers, which employ huge numbers of GPU cores.

"GPUs are geared to performing multiply-and-accumulate operations on many data words in parallel. This works most efficiently when the data to be multiplied is arranged in neat contiguous rows," Davidson says.

"Real brain networks are sparsely connected, so the synaptic weights are not arranged in neat contiguous rows. The weight matrices that define the connectivity are typically sparse. GPUs do not traditionally handle sparse matrices well and so they don't make best use of their parallel multiply-accumulate hardware.
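The contrast Davidson draws can be illustrated in a few lines: a sparsely connected weight matrix is mostly zeros, so storing just the non-zero weights and their positions (as sparse formats do) is far more compact than the dense, contiguous layout GPUs prefer. The matrix size and the 1% connection density below are illustrative assumptions, not figures from the article:

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical 1,000 x 1,000 weight matrix in which only ~1% of
# possible connections (synapses) actually exist.
dense = rng.normal(size=(1000, 1000))
dense[rng.random(dense.shape) > 0.01] = 0.0   # zero out ~99% of entries

# Sparse view: keep only the non-zero weights and where they sit,
# rather than the full contiguous block of mostly-zero entries.
rows, cols = np.nonzero(dense)
values = dense[rows, cols]

print(values.size, "stored connections instead of", dense.size, "entries")
```

A GPU's multiply-accumulate units would still churn through all one million entries of the dense block; hardware that fetches only the stored connections does roughly a hundredth of the work.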
"SpiNNaker is designed to cope with the sparseness of the connectivity through the inclusion of hardware to fetch sparse data from

ANNs SHED LIGHT ON THE BRAIN


The emphasis in this article has been on delivering previously unattainable performance in demanding computing applications by mimicking the human brain electronically. However, this isn't the only reason for research into ANNs. The other motivation is to better understand how the brain really works, with the aim of developing improved treatments for degenerative neurological conditions, such as Alzheimer's, as well as methods of treating strokes and mental health issues.

In addition to helping doctors understand the brain, the application of ANNs is also showing potential for the early diagnosis of some conditions. Joint research in the US, Australia and Malaysia has been developing the use of ANNs to address the difficulties in diagnosing mild cognitive impairment due to Alzheimer's disease, using the currently available clinical diagnostic criteria and neuropsychological examinations. This involves using deep neural network language models to analyse and learn the linguistic changes that characterise the condition. The team concluded that their system had sufficiently learned several linguistic biomarkers in polysyllabic words to distinguish affected patients from healthy ones with reasonable accuracy.

ABOVE: The University of Manchester's SpiNNaker chip allows developers to fine-tune their neural network designs before committing them to silicon

RIGHT: Neural networks are already working on our behalf, behind the scenes, in Google's data centres

LEFT: Analysing verbal utterances using ANNs might offer improved diagnosis of Alzheimer's disease