Smart Photography - September 2019

High Definition (HD): This is currently
an all-digital system. Unfortunately,
there is no single fixed standard for what HD
means, either in terms of resolution or
frame rate. This is also why, when
you purchase a TV, you will hear terms
like Full HD, HD Ready and so on. Table
1 summarises the various resolutions
that qualify under the definition of
HDTV. In terms of pixels, each 1080
frame has 2.1 MP and each 720 frame has
0.9 MP. One common feature of the HDTV
formats is that they all use a 16:9 aspect ratio.

Ultra-HD (UHD): This is the so-called 4K
standard. An 8K standard is also
covered under this umbrella, but it does not
currently exist in our country. Each 4K
frame holds, in terms of pixels, as many
as 8 MP. Thus, a humongous
amount of data is generated as well
as displayed. The resolution of 4K UHD
TVs is 3840 x 2160 (Table 1 has more
details), though theatres use 4096 x 2160.
Right now, TVs capable of UHD are
still expensive in our country,
but this will change in due course.
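As a quick check of the pixel counts quoted above, here is a minimal Python sketch. The resolution values are the standard ones; the helper function and table are our own, not from the article:

```python
# Pixel counts for the common video resolutions discussed above.
RESOLUTIONS = {
    "720p (HD)": (1280, 720),
    "1080p (Full HD)": (1920, 1080),
    "4K UHD (TV)": (3840, 2160),
    "4K DCI (cinema)": (4096, 2160),
}

def megapixels(width, height):
    """Return the frame size in megapixels (millions of pixels)."""
    return width * height / 1_000_000

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {megapixels(w, h):.1f} MP")
```

Running this confirms the figures in the text: a 1080 frame works out to about 2.1 MP, a 720 frame to about 0.9 MP, and a 4K UHD frame to about 8.3 MP.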

Why should you know all this?
Remember that this issue is very similar
to the pixel-count question in still photography.
Just as you choose the number of pixels
depending on your application
(fewer pixels for the web,
more for prints, etc.), you will have to
make a similar choice based on your end use.
For example, there is no point capturing
video in 4K when your TV cannot display
it. Most cameras, in their video menus,
will show at least some of the options
given in Table 1 so that you can select
what you want. However, please note
that not all the options listed in Table 1
may be available in your camera.
Please check your camera manual for
exact details.

Codec
Video capture, transmission and,
finally, display involve handling huge
amounts of data. To reduce the amount
of data, and thus the costs, many video
compression/decompression schemes
are used. This job is done by a codec,
which is short for coder/decoder.

Let us look at some numbers. As you
have just read, each 1080 frame is about
2 MP. Uncompressed, this works out to
around 5.4 megabytes per frame. If you
were to record and/or display at 25 fps
uncompressed, you would have to deal with
almost a gigabit of data per second
(even ignoring the overheads). If you
record in 4K, the data will be four times
as much, as each frame will now be 8 MP.
By any standard this is a humongous
amount of data, even if you stay
at the HDTV level. Thus, compression is a
must unless you need very high levels
of quality. Modern codecs can achieve a
compression of 10:1 or even more.
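The arithmetic above can be sketched in a few lines of Python. This is only a back-of-the-envelope calculation using the article's figure of roughly 5.4 MB per uncompressed 1080 frame; a real codec's output varies with content:

```python
def uncompressed_mbps(mb_per_frame, fps):
    """Megabits per second for uncompressed video:
    megabytes per frame x 8 bits per byte x frames per second."""
    return mb_per_frame * 8 * fps

rate_1080 = uncompressed_mbps(5.4, 25)      # ~1080 Mbps, almost 1 Gbps
rate_4k = uncompressed_mbps(5.4 * 4, 25)    # 4K has roughly 4x the pixels
print(f"1080p/25: {rate_1080:.0f} Mbps, 4K/25: {rate_4k:.0f} Mbps")
print(f"With 10:1 compression: {rate_1080 / 10:.0f} Mbps for 1080p/25")
```

This reproduces the figure in the text: 5.4 MB x 8 x 25 is about 1080 megabits, i.e. roughly a gigabit per second before compression.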

When D-SLR video first started
and the video resolution was not high,
compression was achieved by what is
called Motion JPEG. This, as the name
implies, is a set of individual JPEG images
played one after another, with each
image forming a single frame. However,
this scheme becomes inefficient and impractical
as the resolution increases. So newer
codecs have been introduced, the
most common being the H.264 / MPEG-4
Part 10 standard. Others are DivX, HEVC
(H.265), etc.

The following is a highly simplified way
to understand how a codec works.
If you watch any scene carefully on
TV or in a movie, you will observe that
mostly only a part of the scene
actually moves or changes with time; the rest
remains the same as in the previous frame. So
engineers have come up with a clever
trick. Instead of recording each frame
fully, one frame is recorded in full, and thereafter
only the changes relative
to the previous frame are transmitted.
There is one more trick too. You cannot
see great detail when something is
moving. So when everything in a scene
is changing in each frame, as when
the camera is panning, the resolution
itself can be reduced. These are a
couple of the ways compression is
done, but remember that this is a gross
simplification.
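The "record one full frame, then only the changes" idea can be illustrated with a toy Python sketch. Frames here are flat lists of pixel values; a real codec works on blocks and motion vectors, so treat this purely as an illustration of the principle:

```python
# Toy interframe compression: store the first frame in full (a "key"
# frame), then for each later frame store only the pixels that changed.
def encode_delta(frames):
    encoded = [("key", list(frames[0]))]
    prev = frames[0]
    for frame in frames[1:]:
        changes = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        encoded.append(("delta", changes))
        prev = frame
    return encoded

def decode_delta(encoded):
    frames = [list(encoded[0][1])]
    for _, changes in encoded[1:]:
        frame = list(frames[-1])
        for i, v in changes:
            frame[i] = v
        frames.append(frame)
    return frames

# A mostly static "scene": only one or two pixels change per frame,
# so each delta is tiny compared with a full frame.
frames = [[10, 10, 10, 10], [10, 10, 99, 10], [10, 10, 99, 11]]
encoded = encode_delta(frames)
assert decode_delta(encoded) == frames  # round-trips losslessly
```

When most of the scene is static, each delta is far smaller than a full frame, which is exactly why the trick saves so much data.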

Bit rate
In the previous section you read
about the codec process. In brief, this
is a video compression scheme where,
instead of recording every frame in full,
mostly only the changes are recorded. However,
if these changes are large, they have to
be compressed further to reduce the
demand on resources. The bit rate is a
measure of this compression, and hence a
measure of quality.

This is a bit tricky to understand, and it's
natural to have some doubts. If you have
fixed the resolution and the frame rate,
isn't the number of bits to be transmitted
per second also fixed? This is only partly
true. As you would expect, a 4K / 30
fps recording will use a higher bit rate
than 1080p / 30 fps, but that's not the
whole story. Look at this example to get
a clearer idea.

Consider a still image of 2000 x 1000
pixels. This is a 2-megapixel image,
commonly written as 2 MP. While most
would say the size is 2000 x 1000 pixels,
more accurately that is the dimension of
the image in pixels. The size, as we will
call it here, is the space this image
occupies on the computer's storage (disk).
How much is that? Well, that depends
very much on the quality you want. Let us
look at an indicative example. Starting
from a typical 2 MP image, if you want
to keep the image at the highest quality,
without any compression, in TIFF
format, the size would be 5.4 megabytes
(MB). Stored as a JPEG file with
mild compression to get a good-quality
image, the size would be around 1.75 MB,
and with heavy compression (poorest
quality) it can be as low as 0.215 MB
(215 kilobytes). The ratio is thus as
much as 25 times! Now, what if you want
to transmit each of these images over the net in
one second? Taking each byte as 8 bits,
the transmission speed needed (to send a
single image) is:

5.4 x 8 = 43.2, 1.75 x 8 = 14 and
0.215 x 8 = 1.72 megabits per second
(Mbps), respectively.
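This conversion can be checked with a short Python sketch. The file sizes are the article's indicative figures; the helper function is our own:

```python
def mb_to_mbps(megabytes):
    """Megabits needed per second to send one file of this size
    (in megabytes) every second: 8 bits per byte."""
    return megabytes * 8

# Indicative sizes for the same 2 MP image at three quality levels.
sizes_mb = {"TIFF (uncompressed)": 5.4, "JPEG (mild)": 1.75, "JPEG (heavy)": 0.215}
for label, mb in sizes_mb.items():
    print(f"{label}: {mb} MB -> {mb_to_mbps(mb):.2f} Mbps")

ratio = sizes_mb["TIFF (uncompressed)"] / sizes_mb["JPEG (heavy)"]
print(f"Largest-to-smallest ratio: about {ratio:.0f}x")
```

The three results, 43.2, 14 and 1.72 Mbps, match the figures above, and the largest-to-smallest ratio comes out at roughly 25 times.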

From this it is clear that the bit rate
drops with higher compression, and
hence it is a measure of quality for
a given resolution and frame rate.
ILCs offer different bit rate options
(compression levels) for the same resolution
and frame rate.

One recently released ILC, for example,
offers 1080p at 25 fps with two different
bit rates, 28 Mbps and 14 Mbps,
corresponding to two different levels of
image quality, the former being better.

As you would expect, higher bit rates are
supported by the more expensive cameras,
for superior image quality. Also, it is
generally not correct to directly compare
the bit rates of two different cameras, as the
result also depends on the codec used.



