Secondly, depth of field depends on the
size of the aperture you use – the larger the
opening, the shallower the in-focus area
behind and in front of the plane of focus will
be. Conversely, closing the aperture down
produces greater depth of field but
automatically increases the exposure time
required to capture the same image, which in
turn increases the risk of camera shake and blurred
details. The settings selected by the
photographer or the camera’s systems always
represent a compromise between these two
fundamental factors.
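To put rough numbers on this trade-off, here is a minimal Python sketch based on the standard thin-lens depth-of-field approximations. The 50 mm focal length, 3 m focus distance and 0.03 mm circle of confusion are illustrative values chosen for the example, not figures taken from the article.

def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Return the near and far limits (in mm) of acceptable sharpness."""
    # Hyperfocal distance: focusing here keeps everything from roughly
    # half this distance to infinity acceptably sharp.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (h - focal_mm) / (h + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= h:
        far = float("inf")  # everything beyond the focus distance stays sharp
    else:
        far = focus_dist_mm * (h - focal_mm) / (h - focus_dist_mm)
    return near, far

# A 50 mm lens focused at 3 m: opening up from f/8 to f/2
# shrinks the in-focus zone dramatically.
for n in (2, 8):
    near, far = depth_of_field(50, n, 3000)
    print(f"f/{n}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")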
Finally, all lenses produce optical errors
due to refraction and reflection, whether the
compound lens they are part of is correctly
focused or not. The larger the aperture, the
greater the difference in refraction between
rays passing through the inner and outer parts
of the lens, and the more pronounced the
optical inaccuracies in the captured image.
While it is no longer necessary to keep a
portrait subject’s head still using metal
supports, there are still plenty of situations –
indoors or at dusk for example – in which we
have to use either a tripod or flash to capture
enough light.
Until very recently, the desire to correct
focus settings after image capture posed
insurmountable technical problems.
Conventional cameras register the amount
and color of the light reflected by the subject
and save the corresponding data in the form
of a two-dimensional mosaic. The camera’s
image sensor adds all the photons that hit
each pixel together to calculate a definitive
value for the brightness of the light that
reaches it. The only way a point on a subject
can be reproduced in focus in a digital image
is if the camera’s lens focuses the light
emanating from each point on the subject
onto the sensor. Light from points on the subject
that cannot be brought into focus on the sensor
plane at the same time as other points spreads
across multiple pixels, producing out-of-focus
detail in the captured image.
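As a toy illustration of that summing step, the following Python sketch (with made-up photon counts and a made-up sensor size) accumulates photon energies into a two-dimensional pixel grid. The arrival direction of each photon never enters the result, which is exactly why focus cannot be corrected after capture with a conventional sensor.

import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 6                                  # a tiny toy 'sensor'
n_photons = 10_000

x = rng.integers(0, W, n_photons)            # pixel column each photon hits
y = rng.integers(0, H, n_photons)            # pixel row each photon hits
angle = rng.uniform(-0.5, 0.5, n_photons)    # arrival direction (discarded below)
energy = rng.uniform(0.5, 1.0, n_photons)    # contribution of each photon

image = np.zeros((H, W))
np.add.at(image, (y, x), energy)             # sum all photons per pixel
# 'angle' is never used: the summed image is a flat, two-dimensional mosaic
print(image.round(1))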


Light Field Photography


Light field photography is based on the idea
that if we know which point on the subject
produces each point of light on the sensor,
we can capture them all and ‘re-sort’ them
later. As well as registering the color and
intensity of each point of light, a light field
camera also registers the direction and
distance it comes from. The term ‘light field’
was coined by the Russian scientist
Alexander Gershun way back in 1936 and
describes the distribution of light rays in
space in mathematical terms. In order to
capture light rays in a way that can be
interpreted as a light field, the camera has to
establish a relationship between the
direction a ray comes from and the place
where it hits the image sensor. The
technology used in the Lytro camera
combines contemporary digital photo
technology with ideas for a ‘plenoptic
camera’ first published by physicist and
Nobel Prize winner Gabriel Lippmann in
1908.
A digital plenoptic camera has an array of
microlenses positioned between the lens and
the image sensor. Each microlens captures
only some of the light emanating from each
point on the subject, and the pixel on the
sensor that the microlens steers the light
toward depends on where the light source is
positioned within the three-dimensional
space being photographed. This means that
for every light ray within the captured space,
the camera stores a two-dimensional model
of its color, intensity and the beginning and
end of its path through that space (see the
illustration on page 36). The Lytro software
then uses this data to create images with
varying planes of focus.
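One way to picture the stored data is as a list of ray records, each carrying colour, intensity and the two points that define the beginning and end of its path. The Python sketch below uses such a two-plane description purely for illustration; the class and field names are assumptions, not Lytro’s actual data format.

from dataclasses import dataclass

@dataclass
class Ray:
    u: float           # where the ray crosses the main lens (start of its path)
    v: float
    s: float           # which microlens it reaches on the sensor plane (end of its path)
    t: float
    rgb: tuple         # colour
    intensity: float   # brightness

# A captured light field is then simply the set of all such rays from one
# exposure; instead of the sensor summing them irrevocably, software can
# re-sort and average them later to build images with different focus planes.
light_field = [Ray(0.1, -0.2, 120.0, 80.0, (0.8, 0.6, 0.4), 1.0)]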
The technical demands when capturing so
much light distribution detail are huge. The

sensor resolution and memory capacity
requirements are far greater than for
conventional digital image capture systems
and you need a lot of computing power to
turn all that data into usable images.
The basic principles used to transform
plenoptic image data into a two-dimensional
image have their origins in computer
graphics. In 1996, Marc Levoy and Pat
Hanrahan at Stanford University used the light
field principle to develop algorithms capable
of determining where and how light within
space will be reflected while rendering a
scene (see [2] below).
The resulting principle is called ‘ray tracing’
and can be used to separate the data relating
to a single plane of focus from the mass of
data captured by a light field camera. The
resulting photo is produced by averaging
all the light rays that make up the
(subsequently) selected plane of focus. This
way, you can change the plane of focus of the
image after it has been captured.
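In code, this averaging can be expressed as a simple shift-and-add over the directional samples of a four-dimensional light field. The Python sketch below uses a common textbook formulation with a focus parameter alpha; it illustrates the principle and is not a description of Lytro’s own software.

import numpy as np

def refocus(light_field, alpha):
    """light_field[s, t, u, v] -> 2-D image focused on the plane selected by alpha."""
    S, T, U, V = light_field.shape
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each direction (u, v) gives a slightly offset view of the scene;
            # shift it in proportion to its distance from the optical axis...
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            image += np.roll(light_field[:, :, u, v], shift=(du, dv), axis=(0, 1))
    # ...and average all contributing rays for the chosen plane of focus.
    return image / (U * V)

# Different alpha values refocus the same captured data on different planes.
lf = np.random.rand(64, 64, 9, 9)            # dummy light field for demonstration
near_plane = refocus(lf, alpha=1.0)
far_plane = refocus(lf, alpha=-1.0)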
The first attempts to capture light fields
were made at Stanford in the 1990s using a
veritable wall of cameras and mainframe
computers to calculate each refocus. The first
prototype of a handheld light field camera
was exhibited in 2005. This was a
conventional digital camera with an array of
296 x 296 microlenses that bundled the
incoming light and redirected it toward a
4000 x 4000-pixel image sensor (see [3]
below).
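A little arithmetic based on these published figures shows what the directional sampling costs in spatial resolution; the even split of sensor pixels among the microlenses assumed here is an approximation, since real layouts lose some pixels at the lens borders.

sensor_px = 4000                                  # pixels per sensor axis
lenses = 296                                      # microlenses per axis
per_lens = sensor_px // lenses                    # ~13 directional samples per axis
print(per_lens ** 2, "directions per microlens")  # about 169
print(lenses ** 2 / 1e6, "MP spatial resolution") # about 0.09 megapixels
print(sensor_px ** 2 / 1e6, "MP of raw ray data") # 16 megapixels captured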
The steps involved in processing the image
data produced by this Contax prototype
clearly show how light field techniques
extend conventional digital image processing.


Researchers at the Stanford Computer Graphics Laboratory used this multi-camera array to take the first experimental light field photos in the early 2000s (image: Eric Cheng)