
For some video content, this will play back (look) the same as the 30 FPS content. The only way to test this
is to try different frame rate settings and observe the results during your video optimization (encoding)
process.
The next most important setting for obtaining a smaller data footprint is the bit rate that you set for a codec to try to achieve. This is shown on the left side of Figure 2-6, encircled in red. Bit rate equates to the amount of compression applied and thus sets the quality level for the digital video data. It is important to note that you could simply use 30 FPS, 1920-pixel HD video and specify a low bit-rate ceiling. If you did this, the results would not look as professional as they would if you first experimented with compression using a lower frame rate and/or a lower resolution, in conjunction with the higher (quality) bit-rate setting. There is no set rule of thumb for this, as every digital video asset contains entirely different and unique data (from a codec algorithm's point of view, that is).
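To get a feel for what a given bit-rate ceiling means in practical terms, it helps to run the arithmetic: a bit rate is simply bits per second of playback, so multiplying it by the duration, and dividing by eight, approximates the encoded file size. Here is a minimal, standalone Java sketch (the class and method names are my own, not part of any encoder API) comparing two hypothetical bit-rate ceilings for a 90-second clip:

// FootprintEstimate.java -- a hypothetical helper, not part of any encoder API.
// Approximates the encoded file size that a target bit rate implies, so you
// can compare candidate bit-rate ceilings before you run the actual encode.
public class FootprintEstimate {

    /** Returns the approximate encoded size in megabytes. */
    static double estimateMegabytes(double bitRateKbps, double durationSeconds) {
        double bits = bitRateKbps * 1000.0 * durationSeconds; // total bits in the stream
        return bits / 8.0 / 1_000_000.0;                      // bits -> bytes -> megabytes
    }

    public static void main(String[] args) {
        // A 90-second clip at a 2,000 Kbps ceiling versus an 800 Kbps ceiling:
        System.out.printf("2000 Kbps: %.1f MB%n", estimateMegabytes(2000, 90)); // ~22.5 MB
        System.out.printf(" 800 Kbps: %.1f MB%n", estimateMegabytes(800, 90));  // ~9.0 MB
    }
}

Because the footprint scales linearly with the bit rate, this kind of estimate makes it easy to see how much room a lower frame rate or resolution buys you before the bit-rate ceiling has to do all of the work.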
The next most effective setting for obtaining a smaller data footprint is the number of keyframes, which
the codec uses to sample your digital video asset. This setting is seen encircled in red on the right side of
Figure 2-6. Video codecs apply compression by looking at each frame and then encoding only the pixel changes over the next several frames, so the codec algorithm doesn't have to encode every single frame in the video data stream. This is why a talking-head video will encode better than a video where every pixel moves on every frame (such as video with camera panning).
A keyframe is a setting that forces the codec to take a fresh sampling of the video data asset at regular intervals. There is usually an auto setting for keyframes, which lets the codec decide how many keyframes to sample, as well as a manual setting, which lets you specify the keyframe sampling interval yourself, usually as a certain number of keyframes per second or a certain number of keyframes over the duration of the entire video (the total frames).
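If you want to see how a manual keyframe setting maps onto the total frame count, the relationship is simple multiplication and division. The short Java sketch below (a hypothetical illustration, not any encoder's API) works this out for a 90-second, 30 FPS clip with a keyframe forced every 60 frames:

// KeyframeMath.java -- a hypothetical illustration, not an encoder API.
// Shows how a manual keyframe interval maps onto the total frame count.
public class KeyframeMath {

    public static void main(String[] args) {
        double fps = 30.0;               // frame rate of the source video
        double durationSeconds = 90.0;   // length of the clip
        int keyframeEveryNFrames = 60;   // manual setting: a fresh sample every 60 frames

        long totalFrames = Math.round(fps * durationSeconds);
        long keyframes = totalFrames / keyframeEveryNFrames;

        System.out.println("Total frames: " + totalFrames);               // 2700
        System.out.println("Keyframes:    " + keyframes);                 // 45 (one every two seconds)
        System.out.println("Delta frames: " + (totalFrames - keyframes)); // stored as pixel changes only
    }
}

Fewer keyframes means more frames are stored only as pixel changes, which is smaller, but it also gives the codec fewer chances to take a fresh sample of content that is changing on every frame.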
Some codec setting dialogs have either a quality or a sharpness setting (a slider) that controls the amount of blur applied to the video frame before compression. In case you don't know this trick: applying a slight blur to your image or video, which is usually not desirable, can allow for better compression, because sharp transitions (sharp edges) in an image take more data to reproduce than softer transitions do. That said, I'd keep the quality (or sharpness) slider between an 85 percent and 100 percent quality level and then try to get your data footprint reduction using the other variables we have discussed here, such as decreasing the resolution, frame rate, or bit rate.
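If your encoder's dialog does not offer a sharpness or quality slider, you can approximate this pre-blur trick yourself before encoding. The following Java sketch uses the standard java.awt.image.ConvolveOp class; the frame.png file name is just a placeholder for one of your own frame grabs, and the kernel weights were chosen only to keep the blur very slight:

// SoftenFrame.java -- a minimal sketch for pre-softening a single frame grab.
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.io.File;
import javax.imageio.ImageIO;

public class SoftenFrame {
    public static void main(String[] args) throws Exception {
        // Load a frame grab and normalize it to an RGB buffer that ConvolveOp can filter.
        BufferedImage raw = ImageIO.read(new File("frame.png"));   // placeholder file name
        BufferedImage source = new BufferedImage(
                raw.getWidth(), raw.getHeight(), BufferedImage.TYPE_INT_RGB);
        source.getGraphics().drawImage(raw, 0, 0, null);

        // A 3x3 kernel weighted heavily toward the center pixel keeps the blur
        // subtle, much like keeping a sharpness slider near the top of its range.
        float edge = 0.05f;
        float center = 1.0f - (8 * edge);
        float[] weights = {
                edge, edge,   edge,
                edge, center, edge,
                edge, edge,   edge
        };
        ConvolveOp slightBlur = new ConvolveOp(
                new Kernel(3, 3, weights), ConvolveOp.EDGE_NO_OP, null);

        // Soften the frame and write it back out so you can compare compressed sizes.
        BufferedImage softened = slightBlur.filter(source, null);
        ImageIO.write(softened, "PNG", new File("frame_softened.png"));
    }
}

Compressing the original and the softened frame with the same settings will show you how much data those sharp edges were really costing.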
Ultimately, there will be a number of variables that you'll need to fine-tune to achieve the best data footprint optimization for any given digital video asset. It is important to remember that each digital video asset will "look" different (mathematically) to a digital video codec. For this reason, no standard settings can be developed that achieve a given compression result for every asset. That said, experience tweaking these settings will, over time, give you a feel for which ones you need to change to get the desired end result.


Digital Audio Concepts: Amplitude, Frequency, Samples, Waves

Those of you who are audiophiles already know that sound is created by sending sound waves pulsing through the air. Digital audio is complex; part of that complexity comes from the need to bridge the "analog" audio technology created with speaker cones and the digital audio codecs. Analog speakers generate sound waves by pulsing them into existence. Our ears receive analog audio in exactly the opposite fashion, catching those pulses of air, or vibrations with different wavelengths, and turning them back into "data" that our brain can process. This is how we "hear" the sound waves; our brain then interprets different audio sound wave frequencies as different notes or tones.
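To make the terms in this section's heading concrete, it helps to see how one of those waves becomes digital audio data: the wave is measured (sampled) many thousands of times per second, and each sample records the wave's amplitude at that instant. The Java sketch below is a hypothetical illustration (the class and method names are my own); it generates one second of 16-bit samples for a low-frequency and a high-frequency sine wave:

// ToneSamples.java -- a hypothetical illustration, not from any audio library.
// Builds one second of 16-bit PCM samples for a bass tone and a treble tone,
// tying the terms amplitude, frequency, and samples to actual numbers.
public class ToneSamples {

    /** Generates one second of a sine tone at the given frequency and amplitude. */
    static short[] sineWave(double frequencyHz, double amplitude, int sampleRate) {
        short[] samples = new short[sampleRate];   // one sample every 1/sampleRate of a second
        for (int i = 0; i < samples.length; i++) {
            double time = (double) i / sampleRate;
            double wave = Math.sin(2.0 * Math.PI * frequencyHz * time);  // -1.0 .. 1.0
            samples[i] = (short) (wave * amplitude * Short.MAX_VALUE);   // scale to 16-bit range
        }
        return samples;
    }

    public static void main(String[] args) {
        int sampleRate = 44100;                                // CD-quality samples per second
        short[] bass   = sineWave(110.0, 0.8, sampleRate);     // long waves, low (bass) tone
        short[] treble = sineWave(880.0, 0.8, sampleRate);     // short waves, high (treble) tone
        System.out.println("Samples per tone: " + bass.length + " / " + treble.length);
    }
}

The amplitude argument scales how strong (loud) each sample is, while the frequency determines how quickly the wave repeats within those 44,100 samples.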
Sound waves generate various tones depending on the frequency of each sound wave. A wide or infrequent (long) wave produces a low (bass) tone, whereas a more frequent (short) wavelength produces a higher (treble) tone. It's interesting to note that different frequencies of light produce different colors, so there is a close correlation between analog sound (audio) and analog light (color). There are many other
