BBC Knowledge AUGUST 2017

Robots are customarily portrayed in sci-fi
movies as futuristic creations that walk on
two legs and think like a human. But this isn’t
really an accurate portrayal, as we’ve been using
robots of one kind or another for some time –
they just look a bit different. Some of the earliest
programmable machines ever invented were
looms made to weave fabric in the early 1800s,
while robot arms have been used in our factories
since the 1960s, and the military have used robotic
weaponry such as cruise missiles since WWII.
In fact, these days, our everyday lives are
practically overrun by robots hiding in plain sight.
Our dishwasher is a robot that stands permanently
in the kitchen, washing away the remnants
of our meals; our vehicles are robotic devices
that listen to the movement of our hands and feet,
and manage the firing of a combustion engine,
the shifting of its transmission, the movement of
the suspension and the braking of the wheels. Even our alarm clocks
are little robots that follow a simple programme
to make sure we wake up at the right time.
But how close are we to creating the thinking
machines of science fiction?

ROBOT SEE, ROBOT DO
In the last few years, a sea change has begun
to take place. Breakthroughs in artificial
intelligence and ‘machine learning’ research
are now allowing us to create devices capable of
more than following a set of simple instructions –
these robots are capable of learning for
themselves. For example, the new generation of cars can study
our driving styles and adjust how they respond to us. Some can park
themselves, perform emergency braking, or drive themselves on
motorways. The best digital recording devices can now predict
the kinds of programmes you might want to watch,
and store them without you even asking them to.
And this is just the beginning. Take ‘Paul’, a portrait-drawing
robot that was created by London-based artist Patrick Tresset.
Paul understands what it sees by using a software simulation
of the neurons used in the human brain’s visual cortex – the region
that processes information from our eyes. Paul finds the important
features and draws what it sees, using lines of different lengths.
The images that are produced have a sketch-like quality that makes
them almost impossible to distinguish from something that’s been
drawn by a human.
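The article doesn't describe Paul's software in detail, but the kind of low-level feature-finding that simulated visual-cortex neurons perform can be illustrated with a classic edge detector. The sketch below, in Python, uses Sobel kernels, which respond to intensity changes of a particular orientation much as individual neurons in the visual cortex respond to edges; the toy image and threshold are illustrative assumptions, not Patrick Tresset's actual code.

```python
# A minimal sketch of edge detection, the sort of feature-finding that
# visual-cortex neurons (and software simulations of them) carry out.
# The 8x8 test image and the threshold are illustrative assumptions.

# Sobel kernels: GX responds to horizontal intensity changes, GY to vertical,
# loosely analogous to orientation-selective neurons in the visual cortex.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_map(image, threshold=2.0):
    """Return a binary map marking pixels where intensity changes sharply."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Convolve the 3x3 neighbourhood with each kernel.
            gx = sum(GX[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            # Mark the pixel as an edge if the gradient magnitude is large.
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A toy "portrait": a bright square on a dark background.
image = [[1 if 2 <= x <= 5 and 2 <= y <= 5 else 0 for x in range(8)]
         for y in range(8)]
for row in edge_map(image):
    print("".join("#" if v else "." for v in row))
```

Running the sketch prints a ring of `#` characters tracing the square's outline, which is essentially the raw material a drawing system could turn into pen strokes of different lengths.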
“Trying to do anything that a human does with a robot makes
us realise the complexity of the tasks we perform naturally
without thinking,” explains Tresset. “It also shows us the
complexity of physical reality.”
It’s one thing to paint a portrait on a fixed canvas, but it’s
quite another to learn the skills of our most highly trained
and responsible professionals. For example, could an AI ever fly
a passenger plane with the same skill as a human pilot, and keep
the passengers safe no matter what? Computer scientist Haitham
Baomar thinks it could. His research at University College London
adds an additional layer of intelligence to aircraft autopilots,
enabling them to cope even when the aircraft is faced with
unpredictable weather or damage.
“Our Intelligent Autopilot System is capable of performing
many piloting tasks while handling severe weather conditions
and emergency situations such as engine failure or fire,
rejected take-off, and emergency landing, which are far beyond
