Web User - UK, 22 Jan - 4 Feb 2020

FAQ
Everything you need to know about the most interesting new technology trends and events

Wildlife Insights
Google is using AI to help with wildlife conservation, and is making the data freely available to explore, as David Crookes explains

What is it?
Wildlife Insights (www.wildlifeinsights.org) is a new project from Google that has been set up in partnership with seven conservation organisations: Conservation International, World Wildlife Fund, Zoological Society of London, Map of Life, Wildlife Conservation Society, the North Carolina Museum of Natural Sciences and the Smithsonian. It is mapping millions of animals in the wild using photos taken with motion-activated cameras. By employing artificial intelligence, it aims to speed up data analysis so that wildlife researchers can make better and more timely decisions.


How does it work?
All around the world, camera traps are taking snaps of animals as they wander around their natural habitats and trigger carefully located sensors. The resulting images are used by biologists and land managers to assess the health of the wildlife, but sorting through them can be an arduous task and many of the images end up languishing on old hard drives. With Wildlife Insights, the photos can be uploaded to Google, where specially created AI models, trained to identify species, promptly get to work.
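
To give a flavour of what that automation looks like, here is a minimal sketch of running a trained species classifier over a folder of camera-trap photos using TensorFlow (the framework mentioned below). The model file, folder name and image size are hypothetical placeholders, not details of Google's actual pipeline.

```python
# A rough sketch of batch-classifying camera-trap photos with a pre-trained
# model - the kind of sorting job Wildlife Insights automates. The file and
# folder names below are hypothetical.
import numpy as np
import tensorflow as tf

# Assumed: a species classifier saved earlier (see the training sketch further down).
model = tf.keras.models.load_model("species_classifier.keras")

# Load every photo in the folder as an unlabelled dataset of image batches.
photos = tf.keras.utils.image_dataset_from_directory(
    "camera_trap_photos/",   # hypothetical folder of unsorted trap images
    labels=None,             # no labels - the model will supply them
    image_size=(224, 224),
    batch_size=32,
)

probabilities = model.predict(photos)            # one probability vector per photo
top_species = np.argmax(probabilities, axis=1)   # index of the most likely species
print(top_species[:10])                          # predicted species IDs for the first ten photos
```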


What is the AI doing?
To get technical, the developers of Wildlife Insights have employed a ‘deep convolutional neural network’ for multi-class classification. This uses Google’s open-source TensorFlow framework and was trained by feeding in 8.7 million photos of animals gathered by various conservation organisations. Such a large set of data has allowed the technology to be trained to recognise the differences between the animals, with the training data including information about class, order, family, genus and species. As a result, freshly uploaded images can be analysed and compared against learned data points so that the animals in the photographs can be predicted with the highest probability. As it stands, there are more than 4.5 million images taken from camera traps on the system. You can read more about the AI at bit.ly/wildlife493.
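
For readers who want to see what that looks like in practice, the snippet below is a minimal sketch of a deep convolutional neural network for multi-class classification built with TensorFlow's Keras API. It illustrates the general technique rather than Google's actual model; the number of species, image size and layer sizes are all assumptions.

```python
# A minimal sketch of a deep convolutional neural network for multi-class
# species classification, illustrating the technique described above.
# NUM_SPECIES, IMAGE_SIZE and the layer sizes are assumptions, not details
# of the real Wildlife Insights model.
import tensorflow as tf
from tensorflow.keras import layers

NUM_SPECIES = 5          # hypothetical: e.g. fox, duiker, elephant, jaguar, blank frame
IMAGE_SIZE = (224, 224)  # a common input resolution for image classifiers

model = tf.keras.Sequential([
    tf.keras.Input(shape=IMAGE_SIZE + (3,)),           # an RGB camera-trap photo
    layers.Rescaling(1.0 / 255),                       # normalise pixel values to 0-1
    layers.Conv2D(32, 3, activation="relu"),           # convolutional layers learn
    layers.MaxPooling2D(),                             # patterns, colours and textures
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_SPECIES, activation="softmax"),   # one probability per species
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # labels are integer species IDs
    metrics=["accuracy"],
)

# Training would then call model.fit() on a large set of labelled camera-trap images.
```

The softmax output at the end is what lets the system report the species with the highest probability, as described above.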

What is the AI looking for?
According to Google, the AI models analyse patterns, colours and textures, along with other attributes, to make a prediction. Where the AI struggles to identify an animal, Wildlife Insights will say “No CV result” (which means No Computer Vision result). Species labels are only returned if the AI is relatively sure about its prediction. Identifications can be edited and corrected, which will enable the AI to grow and improve.
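
In code terms, that ‘relatively sure’ behaviour amounts to a confidence threshold applied to the model’s output probabilities. The sketch below assumes a hypothetical threshold and species list; the real cut-off used by Wildlife Insights is not given in the article.

```python
import numpy as np

# Hypothetical species list and confidence threshold, for illustration only.
SPECIES = ["red fox", "duiker", "elephant", "jaguar", "blank frame"]
CONFIDENCE_THRESHOLD = 0.65

def label_prediction(probabilities: np.ndarray) -> str:
    """Return a species label, or 'No CV result' if the model is not
    sufficiently sure about its top prediction."""
    best = int(np.argmax(probabilities))
    if probabilities[best] < CONFIDENCE_THRESHOLD:
        return "No CV result"
    return SPECIES[best]

# Example: one confident prediction and one uncertain one.
print(label_prediction(np.array([0.90, 0.04, 0.03, 0.02, 0.01])))  # "red fox"
print(label_prediction(np.array([0.30, 0.25, 0.20, 0.15, 0.10])))  # "No CV result"
```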

How much quicker can it be?
Far quicker than any human, that’s for sure. According to Google, a human expert could sift through 300 to 1,000 images each hour, identifying the animal pictured in each photo. Wildlife Insights, however, can process a staggering 3.6 million photos an hour – some 3,000 times more. Humans may still need to add GPS coordinates for the traps that took the snaps, but the time saving for researchers is clearly immense.
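
The headline factor is easy to check against the figures quoted above, taking the top end of the human range:

```python
# Back-of-the-envelope check of the speed-up quoted above.
human_rate = 1_000           # images per hour, top end of the quoted human range
ai_rate = 3_600_000          # images per hour quoted for Wildlife Insights
print(ai_rate / human_rate)  # 3600.0 - in the region of the "3,000 times" figure
```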

Google’s AI models identify wildlife species in uploaded photos, such as this British fox

Wildlife Insights identifies millions of photos an hour, such as this Tanzanian duiker