CHAPTER 10: OpenGL ES 2, Shaders, and...
replaced with the new iOS 5 GLKit math libraries, so the learning curve is just a little less
steep now.
As is to be expected, version 2 is way too large to cover in a single chapter, so what
follows is a general overview that should give you a good feel for the topic and whether
it is something you’d want to tackle at some point.
Shaded Pipelines
If you have had a passing familiarity with OpenGL or Direct3D, the mere mention of the
term shaders might give you the shivers. They seem like a mysterious talisman
belonging to the innermost circles of graphics priesthood.
Not so.
The ‘‘fixed function’’ pipeline of version 1 refers to the lighting and coloring of the
vertices and fragments. For example, you are permitted to have up to eight lights, and
each light has various properties. The lights illuminate surfaces, each of which has its own
set of properties, called materials. Combining the two, we get a fairly nice, but constrained,
general-purpose lighting model. But what if you wanted to do something different? What
if you wanted to have a surface fade to transparency depending on its relative
illumination? The darker, the more transparent? What if you wanted to accurately model
shadows of, say, the rings of Saturn, thrown upon its cloud decks, or the pale
luminescent light you get right before sunrise? All of those would be next to impossible
given the limitations of the fixed-function model, especially the final one, because the
lighting equations are quite complicated once you start taking into consideration the
effect of moisture in the atmosphere, backscattering, and so on. Well, a programmable
pipeline that lets you model those precise equations without using any tricks such as
texture combiners is exactly what version 2 gives us.
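To make that concrete, here is a sketch of the kind of fragment shader the programmable pipeline makes possible for the fade-to-transparency effect described above. This is illustrative only, not code from this book's examples; the names (v_diffuse, u_texture, v_texCoord) are assumptions, and the diffuse intensity is presumed to be computed in the vertex shader and passed along as a varying:

```glsl
// Hypothetical GLSL ES fragment shader: the darker the surface,
// the more transparent it becomes. All identifiers are illustrative.
precision mediump float;

uniform sampler2D u_texture;   // the surface's texture
varying vec2 v_texCoord;       // interpolated texture coordinate
varying float v_diffuse;       // diffuse intensity from the vertex shader, in 0..1

void main()
{
    vec4 color = texture2D(u_texture, v_texCoord);
    // Tie opacity directly to illumination: a fully lit fragment is
    // opaque, while an unlit one fades out completely.
    gl_FragColor = vec4(color.rgb, color.a * v_diffuse);
}
```

Under the fixed-function model there is simply no slot to express "alpha equals brightness"; here it is a single line.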
Back to Where We Started
Let’s go back to the very first example given in Chapter 1, the two cubes. You have
already tweaked one of the shaders and lived to tell about it, but now we can go a little
deeper.
The pipeline architecture of ES 2 permits you to have different access points in the
geometric processing, as shown in Figure 10-1. The first hands you each vertex along
with the various attributes (for example, xyz coordinates, colors, and opacity). This is
called the vertex shader, and you have already played with one in the first chapter. At
this point, it is up to you to determine what this vertex should look like and how it should
be rendered with the supplied attributes and the state of your 3D world. When done, the
vertex is handed back to the hardware, rasterized with the data you calculated, and
passed on as 2D fragments to your fragment shader. It is here that you can combine
the textures as needed, do any final processing, and hand the result back to eventually
be rendered in the frame buffer.
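A minimal shader pair makes the two access points concrete. This is a generic sketch of the sort of code Figure 10-1 describes, not the book's two-cubes example; the attribute and uniform names are assumptions:

```glsl
// --- Vertex shader: runs once per vertex ---
attribute vec4 a_position;          // xyz coordinates of the vertex
attribute vec4 a_color;             // per-vertex color and opacity
uniform mat4 u_modelViewProjection; // combined transform for your 3D world
varying vec4 v_color;               // handed on to the rasterizer

void main()
{
    v_color = a_color;
    gl_Position = u_modelViewProjection * a_position;
}

// --- Fragment shader: runs once per rasterized 2D fragment ---
precision mediump float;
varying vec4 v_color;               // interpolated across the primitive

void main()
{
    gl_FragColor = v_color;         // final color sent toward the frame buffer
}
```

Everything between gl_Position and gl_FragColor, the rasterization and interpolation of the varyings, is still handled for you by the hardware; the shaders are the two points where your own code takes over.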