CHAPTER 12: Digital Audio: Providing Aural Feedback for UI Designs Using SoundPool 461

for long-form audio (and video) data, such as albums, songs, audio books, or movies. SoundPool is best used for short-form audio snippets, especially when they need to be played in rapid succession and/or combined together, such as in a game, eBook, or gamified application.


You can load your SoundPool collection of samples into memory from one of two places. The first, and most common, would be from inside the APK file, which I call captive new media assets; in this case, they would live in your /res/raw project folder, as they will for your HelloUniverse app. The second place you can load samples from is an SD card or similar storage location, which is what one would term the Android OS file system.
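A minimal sketch of loading from both of these locations might look like the following. The resource name (R.raw.blast) and the file path are hypothetical placeholders standing in for your own assets; the second load() parameter is the sample priority, which the current Android API reserves for future use (the documentation recommends passing 1):

```java
import android.content.Context;
import android.media.SoundPool;
import android.os.Environment;

public class SampleLoader {

    public static void loadSamples(Context context, SoundPool soundPool) {
        // Captive new media asset: decoded from /res/raw inside the APK.
        // R.raw.blast is a hypothetical resource name for this example.
        int blastSampleId = soundPool.load(context, R.raw.blast, 1);

        // Android OS file system asset: loaded via an absolute path.
        // The folder and file name here are placeholders.
        String path = Environment.getExternalStorageDirectory()
                + "/HelloUniverse/transport.mp4";
        int transportSampleId = soundPool.load(path, 1);
    }
}
```

Both load() variants return an integer sample ID, which you will hand to the .play() method later on.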


The SoundPool uses the Android MediaPlayer Service to decode an audio asset into memory, as uncompressed 16-bit PCM mono or stereo audio. We'll be covering Android Service classes in the next chapter of this book (are you starting to see the logical progression here?). This decoding to 16-bit PCM is the main reason that I've been teaching you a work process that optimizes audio using a 16-bit sampling resolution: if you use 8-bit audio, Android up-samples it to 16-bit, and you end up with wasted data that could have been "spent" on better quality.


This means that you should optimize your sample frequency, but not your sample resolution (always use 16-bit), and you shouldn't use stereo audio unless you absolutely need to. It is very important to conform your optimization work process to how SoundPool works, in order to get optimal results across the largest number of consumer electronics devices. 48 kHz is the best sample frequency to use if you can, with 44.1 kHz coming in second, and 32 kHz coming in third. To optimize, keep a sample short and mono, and use a modern codec, such as MPEG-4 AAC or FLAC, to retain the most quality and still get a reasonable amount of data compression for your APK file. Just remember to calculate memory use based upon the raw (uncompressed) audio size, not the compressed file size!
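To make that raw-size calculation concrete, here is a small arithmetic sketch: the in-memory footprint of a decoded sample is sample rate times bytes per sample times channels times duration, no matter how small the compressed AAC or FLAC file inside your APK happens to be:

```java
public class PcmMemoryCalc {

    // Returns the in-memory size, in bytes, of an uncompressed PCM sample.
    static long pcmBytes(int sampleRateHz, int bitsPerSample,
                         int channels, double seconds) {
        return (long) (sampleRateHz * (bitsPerSample / 8) * channels * seconds);
    }

    public static void main(String[] args) {
        // A two-second, 48 kHz, 16-bit mono sample:
        long bytes = pcmBytes(48000, 16, 1, 2.0);
        System.out.println(bytes + " bytes");  // 192000 bytes (about 187.5 KB)
    }
}
```

So a two-second mono sample at the recommended 48 kHz and 16-bit settings consumes roughly 187.5 KB of memory once decoded, regardless of its compressed size in the APK.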


When a SoundPool object is constructed in Java, as you will be doing later on in this chapter, a developer will set a maxStreams parameter using an integer value. This parameter predetermines how many audio streams can be composited, or rendered, at the same time. Be sure to set this parameter precisely, as it sets aside memory.
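On Android 5.0 (API Level 21) and later, this construction is done with the SoundPool.Builder class, which is where the maxStreams value is set. The specific usage and content-type attributes chosen below are one reasonable configuration for game-style audio, not the only one:

```java
import android.media.AudioAttributes;
import android.media.SoundPool;

// Describe how the audio will be used, so Android can route it correctly.
AudioAttributes attributes = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_GAME)
        .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
        .build();

// maxStreams of 4 allows at most four samples to be composited at once.
SoundPool soundPool = new SoundPool.Builder()
        .setMaxStreams(4)
        .setAudioAttributes(attributes)
        .build();
```

On older API levels, the now-deprecated three-argument SoundPool constructor took maxStreams as its first parameter instead.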


Setting the maximum number of streams parameter to as small a number as possible is a good
standard practice. This is because doing so will help to minimize CPU cycles used for processing
audio samples, and will reduce any likelihood that your SoundPool audio mixing will impact other
areas of your application performance.


The SoundPool engine will track the number of active audio streams (samples) to make sure that it
does not exceed the maxStreams setting. If this maximum number of audio streams is ever exceeded,
SoundPool will abort a previously playing stream. It will do this based upon a sample priority value
which you can specify.


If SoundPool finds two or more audio samples playing that have an equal sample priority value, it will
make a decision regarding which sample to stop playing based upon sample age, which means the
sample that has been playing the longest is the one that’s terminated (playback stopped). I like to
call this the Logan’s Run principle!


Priority level values are evaluated from low to high numeric values, which means that higher (larger) numbers represent higher priority levels. Priority is evaluated when a call to the SoundPool .play() method causes the number of active streams to exceed the maxStreams value set when the SoundPool object was instantiated.
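The priority value is the fourth parameter of the .play() method. In this sketch, soundId is assumed to have come from an earlier soundPool.load() call, and the priority of 10 is an arbitrary example value:

```java
// Play a previously loaded sample; returns a stream ID (0 on failure).
int streamId = soundPool.play(
        soundId,  // sample ID returned by a soundPool.load() call
        1.0f,     // left volume (0.0 to 1.0)
        1.0f,     // right volume (0.0 to 1.0)
        10,       // priority: larger numbers survive stream eviction longer
        0,        // loop: 0 = play once, -1 = loop forever
        1.0f);    // playback rate: 1.0 = normal speed
```

If this call would push the number of active streams past maxStreams, SoundPool compares this priority value against those of the currently playing streams to decide which one to stop.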
