Putting Sprites in Action

In the file Sprite.m, you will see the following arrays:

static const GLfloat vertices[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

static const GLfloat texCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};

static const GLushort cubeIndices[] = {
    0, 2, 1,
    1, 2, 3,
};

The first array defines the vertices of the quad. The second array defines the texture mapping coordinates.

A texture is always mapped using a quad with the range of 0-1 in height and width (Figure 4.4); this is called the texture coordinate system. Texture coordinates are usually referred to as (s,t) coordinates, while geometry coordinates are referred to as (u,v) coordinates. So when mapping a texture onto a quad, you are said to be mapping (s,t) coordinates onto (u,v) coordinates, which is often called UV mapping in 3D applications.

Figure 4.4

Figure 4.4 Coordinates for a texture to be mapped

Using Figure 4.2 as the reference for the (u,v) coordinates and Figure 4.4 for the (s,t) coordinates, vertex 1 maps to coordinates (0,0), vertex 2 to (0,1), and so on.

You don’t need to use the full 0-1 range to map the texture. For example, you could map half the texture by using the (s,t) coordinates (0,0), (1,0), (0, 0.5), (1, 0.5).

Interleaved Vertex Data

In most beginning examples of OpenGL ES, it is usual to have separate arrays for vertex positions and texture coordinates. In fact, Raiders does just that, as you can see in Figure 4.6.

This is a reasonable approach and is relatively easy to read and maintain, which is all that is necessary for simple quads with textures.

For the sake of full understanding, however, you should know that for more complicated models, Apple recommends using interleaved vertex data, which uses an array of structs rather than a series of separate arrays. All of the data for each vertex is stored together in one struct, so the information for vertex 1 is read together, then the information for vertex 2 is read together, and so on (Figure 4.5). This layout gives each vertex memory locality, which makes it superior to separate arrays (Figure 4.6).

Figure 4.5

Figure 4.5 Interleaved vertex data

Figure 4.6

Figure 4.6 Separate vertex arrays

Sprite Class

With a background in place, it’s time to look at Raiders’ Sprite class. The Sprite class is instantiated with the method initWithImageNamed:, which takes an NSString parameter: the name of the image file to be used as a texture.


Look at the code in initWithImageNamed (code first, then description):

textureInfo = [GLKTextureLoader textureWithCGImage:[UIImage imageNamed:imageName].CGImage
                                           options:nil
                                             error:nil];

name =;
self.width = textureInfo.width;
self.height = textureInfo.height;

Prior to the introduction of GLKit, it took many, many lines of code to load images into a texture buffer. As you can see above, you can now do this with only one line of iOS 5 code.

The next line assigns the texture’s OpenGL-friendly name to a property for later use. The width and height of the image are also stored for later use.

[self initVertexInfo];

[self initEffect];

The next two lines allocate and assign the vertex information into memory, and initialize a GLKit effect that will create the appropriate shaders.


This method sets up the buffers for holding the quad and texture vertices.



glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

These two lines of code enable alpha blending, which is needed to allow the textures to be mapped with a transparent background.

glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, texCoords);


The next four lines are the meat of this method. They set up and assign the quad vertices and the texture coordinates into memory buffers. The GLKit constants GLKVertexAttribPosition and GLKVertexAttribTexCoord0 provide the vertex positions and texture coordinates to the shader, whether a hand-coded shader or one that GLKit creates automatically.


As you’ve learned, in “vanilla” OpenGL ES 2.0 code, you must write shaders that calculate rendering effects. Shaders are written in their own language, and while they can be complicated to write and understand, they aren’t really necessary for a simple 2D game. Luckily, Apple has abstracted shaders away in GLKit by introducing a set of effect classes that handle shading, lighting, and texturing.

effect = [[GLKBaseEffect alloc] init]; = name;
effect.texture2d0.enabled = GL_TRUE; = GLKTextureTarget2D;
effect.light0.enabled = GL_FALSE;

The effect usage in Raiders is minimal. Because Raiders has no lighting or shading and only 2D textures, the code is fairly simple. The texture name that was assigned in initWithImageNamed: is assigned to the effect, and the lighting is switched off. You can apply different lighting effects to the world, such as diffuse or spot lighting. This isn’t needed for our 2D game, but in a 3D game, lighting can give a sense of depth and extra realism.


updateTransforms creates the projection matrix, which transforms an object’s coordinates from model space to screen space. Raiders uses an orthographic projection, which has no vanishing point or perspective: parallel lines in an object stay parallel, at a constant distance from each other. Because no perspective is applied in this projection, objects are not distorted by distance.

GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0.0f, 320, 0.0f, 480, 0.0f, 1.0f);
effect.transform.projectionMatrix = projectionMatrix;

GLKMatrix4 modelViewMatrix = GLKMatrix4MakeScale(self.width, self.height, -1.0f);
modelViewMatrix = GLKMatrix4Multiply(transformation, modelViewMatrix);
effect.transform.modelviewMatrix = modelViewMatrix;

The first line sets up the projection matrix to match the width and height of the screen. The matrix is then assigned to the effect.

Another matrix is created to act as the transformation matrix for the sprite. It is first scaled to the size of the image, then moved using the transformation matrix that is created when the sprite needs to be positioned at specific screen coordinates (explained in the next section). This new matrix is then assigned to the effect.


drawAtPosition allows the sprite to be rendered at a specific point. The position parameter passed in is stored in a property for later use, and a boolean variable called dirtyBit is set to YES. Finally, the draw method is called.


The draw method does the actual sprite rendering.

if (dirtyBit) {
    transformation = GLKMatrix4MakeTranslation(position.x, 480.0 - position.y - self.height, 0.0f);
}

First, the method checks whether the dirtyBit is set. If so, a transformation matrix is created based on the position at which the sprite should be drawn. Because OpenGL ES’s y axis runs opposite to screen coordinates, the y coordinate (plus the sprite’s height) must be subtracted from the height of the viewport. The transformation matrix is then used by the update method of the GLKView.

[effect prepareToDraw];

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, cubeIndices);
dirtyBit = NO;

The call to prepareToDraw binds the effect’s state, and glDrawElements then renders the quad, with its texture, to the screen using the cubeIndices array of vertex indices defined earlier.

The last line resets the dirtyBit so the transformation matrix is not rebuilt until the sprite moves again.


Sprites can now be drawn onto the screen, so all you need is a scene in which to do so. Most games start with a menu screen, and Raiders will be no different. The last class to explore is MenuSceneController.

MenuSceneController is the first scene that the player will see after opening the game. At this stage, we will include just a background sprite and a sprite to act as a play button.

At the moment the play button doesn’t do anything. We’ll save that for the next chapter.
