VIDEO STREAMING

high bandwidth requirements of videos that have not been
compressed.
Scaling and compressing video do affect the quality of
the video. The quality of a video is determined by frame
rate, color depth, and resolution. Frame rate is the number of
still images that make up one second of a moving video
image. Images move fluidly and naturally at 30 frames
per second, which is the National Television Standards
Committee (NTSC) standard for full motion video. How-
ever, film is usually 24 frames per second (Compaq, 1998).
Videos with a frame rate of less than 15 frames per second
become noticeably jumpy. It should be noted that most
phone and modem technology limits the frame rate to 10
frames per second (Videomaker Magazine, 2001).
The second quality variable, color depth, is the num-
ber of bits of data the computer assigns to each pixel of
the frame. The more bits of color data assigned to each
pixel, the more colors can be displayed on the screen. Most
videos are either 8-bit 256-color, 16-bit 64,000-color, or
24-bit 16.8-million-color. The 8-bit color is very grainy
and not suitable for video. The 24-bit color is the best, but
it greatly increases the size of the streaming file, so the 16-
bit color is normally used (Videomaker Magazine, 2001).
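The color counts above follow directly from the bit depth: an n-bit pixel can take 2^n distinct values (the 16-bit count is usually rounded down to "64,000" from 65,536). A quick check:

```python
# Number of distinct colors an n-bit pixel can represent: 2 ** n
for bits in (8, 16, 24):
    colors = 2 ** bits
    print(f"{bits}-bit color: {colors:,} colors")
# 8-bit color: 256 colors
# 16-bit color: 65,536 colors
# 24-bit color: 16,777,216 colors
```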
The third quality variable, resolution, is measured by
the number of pixels contained in the frame. Each pixel
displays the brightness and color information that it re-
ceives from the video signal. The more pixels in the frame,
the higher the resolution. For example, if the video is
640×480, there are 640 pixels across each of the 480 vertical
lines of pixels. Streamed video ranges from postage
stamp size, which is 49×49 pixels, to full PC monitor
screen, which is 640×480 pixels, and beyond (Videomaker
Magazine, 2001).

SCALING
As mentioned previously, scaling involves reducing video
to smaller windows. For example, this can be accom-
plished by reducing the frame resolution from a full screen
(640×480) to a quarter screen (320×240). In addi-
tion, frame rate and color depth can also be scaled. For
example, the frame rate can be reduced from 30 to 15
frames per second. The color depth can be scaled from
24-bit to 16-bit. According to Compaq (1998), the changes
in this example would reduce the video data rate from
216 Mbps to 18 Mbps, and the quality of the video would be
reduced. However, as can be seen from the available band-
widths shown in Table 1, many delivery methods would
not support a data rate of 18 Mbps. Therefore, to further
reduce the data rate, video compression is necessary.

COMPRESSING AND ENCODING
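As a sanity check on the scaling example above, the raw and scaled data rates can be computed directly. A sketch (using a plain 10^6 bits-per-Mb convention; Compaq's 216 Mbps figure evidently rounds slightly differently, but the 12× reduction factor is exact):

```python
# Uncompressed video data rate = width * height * bits_per_pixel * frames_per_second
def data_rate_mbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1_000_000

full = data_rate_mbps(640, 480, 24, 30)    # full screen, 24-bit color, 30 fps
scaled = data_rate_mbps(320, 240, 16, 15)  # quarter screen, 16-bit color, 15 fps

print(f"full:   {full:.1f} Mbps")                 # about 221 Mbps by this convention
print(f"scaled: {scaled:.1f} Mbps")               # about 18 Mbps
print(f"reduction factor: {full / scaled:.0f}x")  # 12x, matching 216 -> 18
```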
The goal of compression is to represent video with as few
bits as possible. Compression of video and audio involves
the use of compression algorithms known as codecs. The
term codec comes from the combination of the terms
encoder and decoder: cod from encoder and dec from
decoder (RealNetworks, 2000). An encoder converts a
file into a format that can be streamed. This includes
breaking a file down into data packets that can be sent
and read as they are transmitted through the network. A
decoder sorts, decodes, and reads the data packets as they
are received at the destination. Files are compressed by
encoder/decoder pairs for streaming over a network.
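The packetizing role described above can be illustrated with a minimal sketch (a hypothetical scheme, not any real codec's wire format): the sender numbers fixed-size chunks, and the receiver sorts packets back into order before reassembling them.

```python
import random

def packetize(data: bytes, size: int):
    """Split data into (sequence_number, chunk) packets of at most `size` bytes."""
    return [(i // size, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the original byte stream."""
    return b"".join(chunk for _, chunk in sorted(packets))

stream = b"streaming video payload"
packets = packetize(stream, 4)

random.shuffle(packets)  # simulate out-of-order network arrival
assert reassemble(packets) == stream
```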
Encoders generally accept specific input file formats
used in the capture and digitizing process. The encoders
then convert the input formats into proprietary streaming
formats for storage or transmission to the decoder. Some
codecs may be processing-intensive on the encode side,
because a program is encoded once but played many
times by users. Other codecs divide the work more equally
between encoding and decoding; these are typically used
for live broadcasts (Compaq, 1998).
As mentioned above, each of the three major streaming
technologies has its preferred encoding and compressing
formats. Many users opt to work with one of these three
technologies because they are relatively easy to use, and
technical support is provided by each of the technologies.
These technologies provide options to users for selecting
video quality and data transmission rates during the com-
pression and encoding process. Depending on the appli-
cation and technology used, multiple streaming files may
have to be produced to match the different bandwidths of
the networks over which the video is streamed. Two of the
three major technologies have advanced options where a
streaming file can be produced that has a data transmis-
sion rate that will adapt to the varying bandwidths of the
networks. The specifics of these technologies will be dis-
cussed in a later section.
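The adaptive behavior described above can be sketched as a simple selection rule (hypothetical encoding rates, not any vendor's actual algorithm): the server keeps several encodings of the same video and serves the highest rate that fits the measured bandwidth.

```python
# Hypothetical encoding rates in kilobits per second: one streaming
# file produced per target bandwidth.
ENCODINGS_KBPS = [28, 56, 128, 300, 768]

def pick_stream(bandwidth_kbps: float) -> int:
    """Return the highest encoding rate that fits the available bandwidth,
    falling back to the lowest encoding if even that does not fit."""
    fitting = [rate for rate in ENCODINGS_KBPS if rate <= bandwidth_kbps]
    return max(fitting) if fitting else min(ENCODINGS_KBPS)

print(pick_stream(400))  # -> 300
print(pick_stream(45))   # -> 28
print(pick_stream(10))   # -> 28 (best effort on a very slow link)
```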
Even with the dominance of the three major technolo-
gies, there are some open standards for compression al-
gorithms. It is important to be aware of these standards
and understand how the compression algorithms work.
With this knowledge, the user can make better decisions
when creating, delivering, and viewing streaming video.
The compression algorithms will be discussed in more
detail later. However, they all utilize the same basic com-
pression techniques to one degree or another. Therefore,
it is essential to review the compression techniques before
discussing the algorithms.
First, compression techniques are either lossless or
lossy. In lossless compression, data are compressed
without any alteration: decompression recovers the
original exactly. There are situations where messages
must be transmitted without any changes. In these cases,
lossless compression can be used. For example, lossless
compression is typically used on computers to compress
large files before emailing them (Vantum Corporation,
2001). A number of lossless techniques are available. How-
ever, for video files in particular, more compression is
needed than the lossless techniques can provide.
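Lossless behavior is easy to demonstrate with Python's standard-library zlib (a DEFLATE implementation, standing in here for lossless compressors generally): decompression recovers the input exactly, bit for bit.

```python
import zlib

original = b"the same background throughout the video " * 100
compressed = zlib.compress(original)

# Repetitive data compresses well...
print(len(original), "->", len(compressed), "bytes")

# ...and decompression is exact: no data are altered.
assert zlib.decompress(compressed) == original
```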
Lossy techniques involve altering or removing the data
for efficient transmission. With these techniques, the orig-
inal video can only be approximately reconstructed from
its compressed representation. This is acceptable for video
and audio applications as long as the data alteration or
removal is not too great. The amount of alteration or
removal that is acceptable depends on the application
(Vantum Corporation, 2001).
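A minimal illustration of a lossy step (a toy quantizer, not a real video codec): dropping the low-order bits of each 8-bit sample shrinks the data, and the reconstruction is close to, but not identical to, the original.

```python
def quantize(samples, dropped_bits=4):
    """Discard the low-order bits of each 8-bit sample (lossy)."""
    return [s >> dropped_bits for s in samples]

def reconstruct(quantized, dropped_bits=4):
    """Approximate the original by scaling back up."""
    return [q << dropped_bits for q in quantized]

samples = [12, 200, 37, 255, 128]
approx = reconstruct(quantize(samples))

print(approx)  # [0, 192, 32, 240, 128] -- close, but not exact
assert approx != samples
assert all(abs(a - b) < 16 for a, b in zip(samples, approx))
```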
A number of video compression techniques take
advantage of the fact that the information from frame to
frame is essentially the same. For example, a video that
shows a person’s head while that person is talking will
have the same background throughout the video. The only
changes will be in the person’s facial expressions and other