
To accomplish seamless image compositing using layers, the image used in each layer needs to have an alpha channel (transparency) value associated with each pixel in the image. We can utilize this alpha value for each pixel to precisely control how that pixel blends with the pixels at the same image coordinate (location) on the layers above and below that particular image layer. It is because of this stacked layer paradigm that I refer to this compositing as 3D, as the layers are stacked into place along a Z-axis and can be said to have a particular Z-order. Don't confuse this with 3D modeling software such as Blender, as the end result of a digital image compositing (layer) stack is still a 2D digital image asset.
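
To make the per-pixel blending idea concrete, here is a minimal sketch, not taken from this chapter, of the standard "source over" blending math applied to two 32-bit ARGB pixel values in Java; the class and method names are my own, purely for illustration.

// A minimal sketch of per-pixel "source over" alpha blending between two
// layer pixels packed as 32-bit ARGB integers (non-premultiplied).
public final class AlphaBlend {

    // Blends a source (upper layer) pixel over a destination (lower layer) pixel.
    public static int blendOver(int src, int dst) {
        float srcA = ((src >>> 24) & 0xFF) / 255f;   // source alpha, 0.0..1.0
        float dstA = ((dst >>> 24) & 0xFF) / 255f;   // destination alpha
        float outA = srcA + dstA * (1f - srcA);      // resulting alpha
        if (outA == 0f) return 0;                    // fully transparent result

        int outR = blendChannel((src >>> 16) & 0xFF, (dst >>> 16) & 0xFF, srcA, dstA, outA);
        int outG = blendChannel((src >>> 8)  & 0xFF, (dst >>> 8)  & 0xFF, srcA, dstA, outA);
        int outB = blendChannel( src         & 0xFF,  dst         & 0xFF, srcA, dstA, outA);

        return (Math.round(outA * 255f) << 24) | (outR << 16) | (outG << 8) | outB;
    }

    private static int blendChannel(int srcC, int dstC, float srcA, float dstA, float outA) {
        float c = (srcC * srcA + dstC * dstA * (1f - srcA)) / outA;
        return Math.min(255, Math.round(c));
    }
}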


Like each of the RGB channels, the alpha channel has 256 levels of transparency, which are represented by the first two slots in the hexadecimal notation for an ARGB data value. That notation has 8 slots (32 bits) of data, rather than the 6 slots used to represent a 24-bit image. A 24-bit image can thus be thought of as a 32-bit image with no alpha channel data. Indeed, if there is no transparency to store, why waste another 8 bits of data per pixel just to hold FF (fully opaque) values? A 32-bit image with an alpha channel filled with FF values carries one-third more data than a 24-bit image with no alpha channel, and yet it would yield exactly the same visual result, so don't use a 32-bit image unless you need transparency values for the pixels in that image (for compositing purposes).
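
As a quick illustration, here are a few ARGB constants of my own invention (only Color.argb() comes from the Android API) showing how the 8-slot hexadecimal notation looks in Java, with the alpha value occupying the first two slots:

import android.graphics.Color;

// Illustrative ARGB constants in the 0xAARRGGBB hexadecimal layout.
public final class ArgbExamples {
    public static final int OPAQUE_RED      = 0xFFFF0000; // alpha FF = fully opaque
    public static final int HALF_ALPHA_BLUE = 0x800000FF; // alpha 80 = roughly 50% opacity
    public static final int INVISIBLE_GREEN = 0x0000FF00; // alpha 00 = fully transparent

    // The same opaque red built with the Android Color helper method.
    public static final int OPAQUE_RED_VIA_API = Color.argb(255, 255, 0, 0);
}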


So, to summarize, a 24-bit image has no alpha channel and is not used for image compositing, unless it is the bottom plate in the compositing layer stack. A 32-bit image, on the other hand, is used as a compositing layer on top of something else that needs to show through (via the transparency values defined in the image's alpha channel) at some of the pixel locations, in order to create the final, seamlessly composited digital image.


You might be wondering how having an alpha channel and using digital image compositing factor into your Android graphics design pipeline. The primary advantage is the ability to split what looks like a single image into a number of component layers. The reason for doing this is to be able to apply Java code logic to individual layer elements, in order to control parts of a 2D image that you could not otherwise control individually. A single 24-bit image can only be transformed as a whole; if each image element is on its own layer, and is therefore a separate element in memory, each part can later be controlled on a per-subject (per pixel area) basis using program logic.
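
The following is a rough sketch of that approach; the class, field, and parameter names are hypothetical, but the Canvas and Paint calls are standard Android. It composites three separately loaded layer Bitmaps onto one Canvas, so that each layer can be moved or faded independently by your Java code:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;

// Composites three layer Bitmaps bottom-to-top onto a Canvas.
public class LayeredScene {
    private final Bitmap background;   // bottom plate, no alpha channel needed
    private final Bitmap character;    // 32-bit PNG with alpha channel
    private final Bitmap foreground;   // 32-bit PNG with alpha channel
    private final Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);

    public LayeredScene(Bitmap background, Bitmap character, Bitmap foreground) {
        this.background = background;
        this.character = character;
        this.foreground = foreground;
    }

    // characterX and characterAlpha are values your own logic would animate at runtime.
    public void draw(Canvas canvas, float characterX, int characterAlpha) {
        canvas.drawBitmap(background, 0f, 0f, null);
        paint.setAlpha(characterAlpha);              // 0..255 layer opacity
        canvas.drawBitmap(character, characterX, 0f, paint);
        paint.setAlpha(255);
        canvas.drawBitmap(foreground, 0f, 0f, paint);
    }
}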


Algorithmic Image Compositing: Blending Modes


There is another, more powerful aspect of image compositing called blending modes. Anyone familiar with Photoshop or GIMP has seen that you can set each layer in an image composite to use a different blending mode. Blending modes are algorithms that specify how the pixels contained within that layer are blended (mathematically) with the layers underneath it. This pixel blending algorithm also takes your transparency level into account. By combining blending modes and transparency, you can achieve virtually any digital image compositing result, or special effect, that you are after. Since there are entire books written on using blending modes, and the effects you can create with them, I won't get into that too much here!
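
On Android, the closest built-in counterpart to these layer blending modes is the transfer mode (Xfermode) set on a Paint object. The sketch below is my own illustration, not code from this chapter; it draws an upper layer over a lower one using Android's MULTIPLY mode, which roughly corresponds to the Multiply mode found in Photoshop and GIMP, combined with a 50% layer opacity:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;

// The Paint's transfer mode controls how the layer being drawn blends with
// the pixels already on the Canvas. This assumes the Canvas is backed by a
// Bitmap (for example, new Canvas(resultBitmap)), so the MULTIPLY mode
// blends against the lower layer that was just drawn.
public final class BlendModeDemo {

    public static void drawMultiplied(Canvas canvas, Bitmap lowerLayer, Bitmap upperLayer) {
        Paint paint = new Paint();
        canvas.drawBitmap(lowerLayer, 0f, 0f, null);   // bottom plate, drawn normally
        paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.MULTIPLY));
        paint.setAlpha(128);                           // blending mode combined with ~50% opacity
        canvas.drawBitmap(upperLayer, 0f, 0f, paint);  // blended layer
        paint.setXfermode(null);                       // restore normal (source over) drawing
    }
}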
