Texture Atlases – Part 4 – Compressing Them

OK, so we have loved them, made them, and used them; now let's compress them.

There are four options for PVRTC compression in Apple's tool, texturetool: linear or perceptual channel weighting, at either 2 or 4 bits per pixel.

For realistic-looking images, Apple recommends perceptual weighting, but there's still a lot of artifacting with that encoder.
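For reference, a typical invocation looks something like this (flags as documented for Apple's texturetool; the file names are just examples):

```shell
# Compress atlas.png to 4-bpp PVRTC with perceptual channel weighting.
# -f PVR wraps the raw data in a PVR header so other tools and engines can read it.
texturetool -e PVRTC -f PVR --channel-weighting-perceptual --bits-per-pixel-4 \
    -o atlas.pvr atlas.png
```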

You might try compressing them with the PVRTexTool by Imagination Technologies.


They have a command-line version that runs on Mac, Windows, etc. There's a GUI version that is nice too.

There are some nice things about PVRTexTool, my favorites being that it lets you resize the image during compression and that it works on non-square images.

The resize feature of PVRTexTool came in handy on my last project. The only way I could get decent-looking compressed images was to double the size of the texture atlases in Photoshop (using bicubic sampling) and then shrink them back down during compression. Thankfully, I was able to automate this as part of the build process.

Texture Atlases – Part 3 – Using Them

OK, so we have seen how cool they are, we’ve discussed how to make them, now we are going to look at how to read them into your OpenGL application.

First you need to understand how texture mapping works in OpenGL. Jeff LaMarche does an amazing job explaining it on his blog: http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html.

I love that tutorial, thanks Jeff.

So, a texture is bound to a set of vertices via texture coordinates, usually passed as parallel arrays of vertices and texture coordinates. The texture is mapped onto the vertices using values between 0 and 1, with (0,0) being the top left and (1,1) being the bottom right of the image.

For example, if you wanted only the top-left quarter of the image to be displayed, you would give the coordinates (top, left, width, height) = (0, 0, .5, .5).

Here's another example. Suppose we have a 200×200 image at (20, 300) in a 1024×1024 texture atlas. We would give its coordinates as (top, left, width, height) = (20/1024, 300/1024, 200/1024, 200/1024) = (0.01953…, 0.29297…, 0.19531…, 0.19531…).
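To make the arithmetic concrete, here's a small C sketch (the struct and function names are mine, not from any framework) that converts a sub-image's pixel rectangle into normalized atlas coordinates and expands it into the four (u, v) pairs you'd put in a texture-coordinate array:

```c
/* Normalized texture rectangle: (top, left, width, height), each in 0..1. */
typedef struct { float top, left, width, height; } UVRect;

/* Convert a sub-image's pixel rectangle into normalized atlas coordinates. */
static UVRect uv_rect(float top, float left, float width, float height,
                      float atlasW, float atlasH)
{
    UVRect r = { top / atlasH, left / atlasW, width / atlasW, height / atlasH };
    return r;
}

/* Expand the rect into four (u, v) pairs for a quad, ordered
 * top-left, top-right, bottom-left, bottom-right. */
static void uv_quad(UVRect r, float out[8])
{
    out[0] = r.left;           out[1] = r.top;            /* top left     */
    out[2] = r.left + r.width; out[3] = r.top;            /* top right    */
    out[4] = r.left;           out[5] = r.top + r.height; /* bottom left  */
    out[6] = r.left + r.width; out[7] = r.top + r.height; /* bottom right */
}
```

For the 200×200 sub-image above this yields (0.01953…, 0.29297…, 0.19531…, 0.19531…); the eight floats from uv_quad are what you'd hand to the texture-coordinate array for that quad.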

Non-rectangular buttons for the iPhone

I haven't checked the iPhone 4.0 SDK yet, but last time I looked you could not do non-rectangular buttons or UIViews with UIKit.

This becomes a problem when you build UIs where non-rectangular clickable areas overlap. You see this a lot in games.

In order to get this working, I had to add the following code to my UIImageView-derived class:

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
	if (![super pointInside:point withEvent:event])
		return NO;

	uint components;
	uint imgWide, imgHigh;        // Real image size
	uint rowBytes, rowPixels;     // Image size padded by CGImage
	CGBitmapInfo info;            // CGImage component layout info
	CGColorSpaceModel colormodel; // CGImage color model (RGB, CMYK, paletted, etc.)

	CGImageRef image = self.image.CGImage;

	// Parse CGImage info
	info       = CGImageGetBitmapInfo(image);        // CGImage may return pixels in RGBA, BGRA, or ARGB order
	colormodel = CGColorSpaceGetModel(CGImageGetColorSpace(image));
	size_t bpp = CGImageGetBitsPerPixel(image);
	if (bpp < 8 || bpp > 32 || (colormodel != kCGColorSpaceModelMonochrome && colormodel != kCGColorSpaceModelRGB)) {
		// This loader does not support all possible CGImage types, such as paletted images
		return NO;
	}
	components = (uint)(bpp >> 3);
	rowBytes   = (uint)CGImageGetBytesPerRow(image); // CGImage may pad rows
	rowPixels  = rowBytes / components;
	imgWide    = (uint)CGImageGetWidth(image);
	imgHigh    = (uint)CGImageGetHeight(image);

	// Get a pointer to the uncompressed image data.
	// This allows access to the original (possibly unpremultiplied) data, but any manipulation
	// (such as scaling) has to be done manually. Contrast this with drawing the image
	// into a CGBitmapContext, which allows scaling, but always forces premultiplication.
	CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
	uint32_t *p = (uint32_t *)CFDataGetBytePtr(data);
	uint i, num = imgWide * imgHigh;

	if ((info & kCGBitmapByteOrderMask) != kCGBitmapByteOrder32Host) {
		// Convert from ARGB to BGRA
		for (i = 0; i < num; i++)
			p[i] = (p[i] << 24) | ((p[i] & 0xFF00) << 8) | ((p[i] >> 8) & 0xFF00) | (p[i] >> 24);
	}

	// Check the touched pixel's alpha. After the swap above, alpha sits in the
	// high byte of the word, so shift it down before testing; returning the raw
	// pixel would be truncated to its low 8 bits when cast to BOOL.
	// Note: this assumes the view is the same size as the image (no scaling).
	uint32_t pixel = p[((uint)point.y * rowPixels) + (uint)point.x];
	CFRelease(data); // CGDataProviderCopyData returns an owned copy; release it to avoid a leak

	return ((pixel >> 24) & 0xFF) != 0;
}

Texture Atlases – Part 2 – Making Them

Making texture atlases is the responsibility of either the artists or the developers. In my last project it was the developers.

So I created a tool that produced square PNG files with power-of-two widths and heights (128×128, 256×256, 512×512, 1024×1024). These texture atlases contained as many smaller images as possible.

I then stored the texture atlas filename along with the top, left, width, and height of each original image within the atlas.

There are a lot of “rect packing” algorithms out there, but my favorite came from Nate's blog. He has a great example that is simple and elegant, and I was able to easily customize it for my scenario.
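Nate's example is worth reading in full; as a flavor of the problem, here's a minimal shelf-style packer in C. It is deliberately cruder than his algorithm (it just fills horizontal rows left to right), and all the names are mine:

```c
/* One packed image: its position and size (in pixels) within the atlas. */
typedef struct { int top, left, width, height; } AtlasEntry;

/* Pack rectangles left-to-right into horizontal shelves inside a square
 * atlas. Returns how many rects fit; entries[i] receives each position. */
static int shelf_pack(const int widths[], const int heights[], int count,
                      int atlasSize, AtlasEntry entries[])
{
    int x = 0, y = 0, shelfH = 0, placed = 0;
    for (int i = 0; i < count; i++) {
        int w = widths[i], h = heights[i];
        if (x + w > atlasSize) {    /* current shelf is full; start a new one */
            x = 0;
            y += shelfH;
            shelfH = 0;
        }
        if (w > atlasSize || y + h > atlasSize)
            break;                  /* out of room */
        entries[placed].left   = x;
        entries[placed].top    = y;
        entries[placed].width  = w;
        entries[placed].height = h;
        x += w;
        if (h > shelfH) shelfH = h;
        placed++;
    }
    return placed;
}
```

It packs much tighter if you sort the images by height first; the returned (top, left, width, height) entries are exactly the metadata stored alongside the atlas filename.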

Texture Atlases – Part 1 – Loving Them

Texture atlases are a great way to minimize your memory footprint when you have an OpenGL application with lots of images. I ran into this when porting a game from Mac/Windows to the iPhone.

Texture atlases let you pack many small images into a single large image, so they can all be loaded as one texture. This matters especially with OpenGL ES on the iPhone, since texture dimensions must be powers of two (and PVRTC textures must also be square). Smaller images therefore get padded up at run time, and all that extra space is wasted memory.
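To see what that padding costs, here's a tiny C sketch (the function names are mine) that computes the bytes wasted when a single image is padded up to power-of-two dimensions, assuming 4 bytes per RGBA pixel:

```c
/* Round up to the next power of two, as the GL padding would. */
static unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Bytes wasted when a w x h RGBA image is padded to power-of-two sizes. */
static unsigned padding_waste(unsigned w, unsigned h)
{
    unsigned pw = next_pow2(w), ph = next_pow2(h);
    return (pw * ph - w * h) * 4;   /* 4 bytes per RGBA pixel */
}
```

A lone 200×200 sprite pads up to 256×256, wasting about 100 KB; pack dozens of sprites into one atlas and that padding is paid only once, for the atlas itself.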

The final texture atlases can be PVR-compressed as well, and they stay compressed in memory at run time, which is the ultimate memory saver.
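PVRTC sizes are easy to budget for. Here's a small C sketch using the data-size formula from Apple's OpenGL ES documentation (the function name is mine); note that a 1024×1024 atlas at 4 bits per pixel is exactly 512 KB:

```c
/* Size in bytes of PVRTC data for a square power-of-two texture.
 * Per Apple's docs, compression blocks impose minimum effective
 * dimensions: 8x8 at 4 bpp, 16x8 at 2 bpp. */
static unsigned pvrtc_size(unsigned width, unsigned height, unsigned bpp)
{
    unsigned w = width, h = height;
    if (bpp == 4) {
        if (w < 8) w = 8;
        if (h < 8) h = 8;
    } else {            /* 2 bpp */
        if (w < 16) w = 16;
        if (h < 8)  h = 8;
    }
    return (w * h * bpp) / 8;
}
```

Compare that to 4 MB for the same 1024×1024 atlas as uncompressed RGBA8888, and the savings are obvious.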

Here's an example of a texture atlas that shrank more than 5 MB of image data down to about 500K once packed and compressed.
