New Scientist - USA (2019-11-09)

To understand how this vertical edge acts as
a camera, and how, ironically, it can allow you
to see the very scene it is obscuring, you first
have to notice that the floor by the corner
of the wall is in shadow. Known technically
as a penumbra, this dark patch is easy to miss.
Most of the floor is at the same brightness
due to light scattering from everywhere in
the corridor. At the penumbra, however,
it is slightly dimmer because light from
around the corner can’t quite reach it.
Given a photo of the floor near to the corner,
a computer could subtract the contribution
made by light that stays the same brightness
everywhere to leave only the diminished light
in the shadow region – that is, the contribution
from around the corner. This would tell you
the average brightness and colour of the
hidden scene, which is pretty useless on its
own. But the existence of the corner tells you
something else about the light striking the
shadow region.
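
In code, that first step is little more than a subtraction. The sketch below is only an illustration of the idea, not the researchers' pipeline: it assumes a greyscale photo cropped so that the rightmost columns of floor are fully lit while the left side lies in the penumbra, and the function name and the 20-column strip used to estimate the ambient level are assumptions.

```python
import numpy as np

def isolate_corner_light(floor_image: np.ndarray) -> np.ndarray:
    """floor_image: a greyscale photo of the floor, cropped so that the
    rightmost columns are fully lit and the left side is in the penumbra.
    Returns the small deviation from the uniform scattered-light level."""
    # The "same brightness everywhere" term, estimated from floor that the
    # corner cannot shadow (assumed here to be the rightmost 20 columns).
    ambient = floor_image[:, -20:].mean()
    # What remains in the shadow region is the part of the light that
    # depends on the scene around the corner.
    return floor_image.astype(float) - ambient
```
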
To understand why, imagine standing
with your shoulder to the wall, next to a
corner, but so you can’t see round it. This is
where the shadow is deepest. As you sidestep
away from the wall, your view around the
corner steadily improves. In the same way,
the portion of the hidden scene exposed at
any one point within the shadow depends
on how far away that point is from the wall.
It is this constraint that makes the maths for
converting the shadow to an image solvable,
as Torralba and Freeman, together with Ye and
others at MIT, discovered in 2017. Armed with
nothing more than the basic geometry
of a corner and video footage of the ground
beside it, taken by an ordinary digital
camera, their algorithm could reconstruct
a video of two people moving about,
completely outside the frame.
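
For the curious, the constraint can be written down in a few lines. The toy sketch below is not the MIT group's code; it simply assumes that a penumbra sample at a given angle from the wall adds up light from the hidden scene over all smaller angles, which makes the measurement matrix triangular and the inversion stable with a little regularisation.

```python
import numpy as np

def reconstruct_1d_scene(penumbra: np.ndarray, reg: float = 1e-2) -> np.ndarray:
    """penumbra: ambient-subtracted floor intensities, sampled at
    increasing angle away from the wall. Returns a 1-D hidden 'image'."""
    n = penumbra.size
    A = np.tril(np.ones((n, n)))      # sample i sums hidden-scene cells 0..i
    # Ridge-regularised least squares keeps the inversion from amplifying noise.
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ penumbra)

# A hidden scene with one bright patch makes the penumbra brighten in a
# step once the patch comes into view; the solve undoes that summation.
scene = np.zeros(50)
scene[20:25] = 1.0
penumbra = np.tril(np.ones((50, 50))) @ scene
print(np.round(reconstruct_1d_scene(penumbra), 2))
```
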
There was a big snag with this work. The
“images” making up the reconstructed video
were only one-dimensional, like thin strips of
normal photographs. That was enough to
disclose movement, but not to recognise
anyone. The reason was that the accidental
camera itself, a vertical edge, was one-
dimensional. As a result, moving away from
the wall improved the view around the corner,
but shifting up or down did not.

What the yucca sees
In January this year, however, a group
led by Vivek Goyal at Boston University
managed to reproduce, from around a corner,
two-dimensional, colour images of what was
being shown on an LCD monitor. The feat
required a slightly different accidental camera:
a credit card-sized occluder set back from the
corner, casting a shadow not onto the floor,
but onto a wall even further back (see diagram,
right). “We’re getting two-dimensional
reconstruction because the occluder itself
is two-dimensional,” says Goyal.
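
The flavour of the two-dimensional version can be captured in a short sketch, too. What follows is an illustrative stand-in rather than Goyal's reconstruction code: it assumes the positions of the hidden screen, the card-sized occluder and the back wall are known, builds a matrix recording which screen pixels can light which wall pixels, and then inverts it.

```python
import numpy as np

def visibility_matrix(scene_pts, wall_pts, occ_centre, occ_half_size, occ_z):
    """A[i, j] = 1 if the straight line from hidden-scene point j to wall
    point i misses the card-sized occluder, 0 if it is blocked.
    Points are (x, y, z); the occluder is a rectangle at depth occ_z."""
    scene_pts = np.asarray(scene_pts, float)
    wall_pts = np.asarray(wall_pts, float)
    occ_centre = np.asarray(occ_centre, float)
    A = np.ones((len(wall_pts), len(scene_pts)))
    for i, w in enumerate(wall_pts):
        for j, s in enumerate(scene_pts):
            t = (occ_z - s[2]) / (w[2] - s[2])    # where the ray crosses the occluder plane
            hit = s[:2] + t * (w[:2] - s[:2])
            if np.all(np.abs(hit - occ_centre) <= occ_half_size):
                A[i, j] = 0.0                     # this ray is blocked
    return A

def reconstruct_2d(wall_photo, A, reg=1e-1):
    """Invert wall_photo = A @ hidden_scene with ridge regularisation."""
    y = wall_photo.ravel().astype(float)
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
```

Because the occluder blocks different screen pixels for different wall pixels, the rows of the matrix differ in both directions, which is what lets the inversion recover a full two-dimensional picture.
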
Goyal hasn’t stopped there. Determined
to make round-the-corner imaging more
applicable to everyday situations, his group
recently demonstrated improved algorithms

HEARING THE SIGHTS
Listening to the inaudible might
sound like a paradox, but not
according to a group including one
of the pioneers of round-the-corner
imaging. In 2014, William Freeman
at the Massachusetts Institute of
Technology and his colleagues
captured high-speed video
footage – without audio – of various
objects, from glasses of water
to empty crisp bags, while an
instrumental version of Mary
Had a Little Lamb played in the
background. The almost-
imperceptible vibrations of the
objects caused by the sound waves
were enough to enable them to
reconstruct the nursery rhyme.
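
The signal the group exploited is easier to grasp with a deliberately crude sketch. The real 2014 system used far more sophisticated phase-based motion analysis of the video; the toy version below simply treats the average brightness of each frame as a proxy for the object's vibration and plays it back at the camera's frame rate. All names here are illustrative.

```python
import numpy as np

def recover_audio(frames: np.ndarray, fps: float):
    """frames: (num_frames, height, width) greyscale high-speed video of a
    vibrating object. Returns (waveform, sample_rate)."""
    signal = frames.reshape(len(frames), -1).mean(axis=1)  # one number per frame
    signal = signal - signal.mean()                        # remove the steady level
    waveform = signal / (np.abs(signal).max() + 1e-9)      # normalise to [-1, 1]
    return waveform, fps                                   # sample rate = frame rate
```
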
Sound can see through walls, too.
In June this year, David Lindell of
Stanford University in California
and his colleagues hid an H-shaped
object behind a wall. They then
used speakers to bounce sound
off another wall so it would go
behind the first wall. Deploying
microphones to pick up the
returning sound waves, they could
build up an image of the hidden
object. They believe the set-up is a
faster and cheaper way to see round
corners than light-based systems.
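
One simple way to picture the acoustic reconstruction is backprojection: every echo sample is smeared back onto the hidden locations that could have produced it, and the places where many echoes agree light up. The sketch below is an assumption-laden illustration rather than the Stanford pipeline; it assumes co-located speaker and microphone positions on the relay wall and recordings already aligned to the emission time.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second in air

def backproject(recordings, sensor_pts, grid_pts, fs):
    """recordings: (num_sensors, num_samples) echo traces, one per wall spot.
    sensor_pts: (S, 3) speaker/microphone positions on the relay wall.
    grid_pts: (G, 3) candidate positions behind the occluding wall.
    Returns one score per grid point; peaks suggest a hidden reflector."""
    grid_pts = np.asarray(grid_pts, float)
    image = np.zeros(len(grid_pts))
    for trace, s in zip(recordings, sensor_pts):
        trace = np.asarray(trace, float)
        dist = np.linalg.norm(grid_pts - np.asarray(s, float), axis=1)   # wall spot to candidate
        idx = np.round(2.0 * dist / SPEED_OF_SOUND * fs).astype(int)     # round-trip delay in samples
        ok = idx < trace.shape[0]
        image[ok] += trace[idx[ok]]
    return image
```
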
If all this hidden imaging sounds
a little cloak-and-dagger, be
warned that people can be tracked
through the walls of a home or
office using ambient Wi-Fi and
a smartphone. You need an app
developed by Yanzi Zhu at the
University of California, Santa
Barbara, and his colleagues,
which can detect the faint swelling
of a Wi-Fi signal caused by human
movement, so long as you walk up
and down a few times first to map
the Wi-Fi environment. The
researchers, who created the app
to expose the privacy risk, are now
developing defensive systems for
Wi-Fi transmitters.
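
The underlying cue is simple enough to sketch, although the code below is only an illustration of the principle and not the UCSB app: once a quiet baseline of signal-strength readings has been mapped for a spot, a fresh window of readings that swings much more widely than that baseline hints that someone is moving near the transmitter.

```python
import statistics

def motion_detected(baseline_rssi, live_rssi, threshold=3.0):
    """baseline_rssi: signal-strength samples (dBm) recorded while mapping.
    live_rssi: a fresh window of samples from the same spot.
    Flags windows whose spread is far larger than the quiet baseline's."""
    quiet_spread = statistics.pstdev(baseline_rssi) or 1e-6
    live_spread = statistics.pstdev(live_rssi)
    return live_spread > threshold * quiet_spread
```
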

Walls have eyes:
clever tricks can
tease images
from shadows