One of the most problematic constraints when developing applications for resource-constrained devices, such as mobiles or Set Top Boxes, is ‘video memory’.
You often will not have control over how much video memory is allocated to your application, or what the fallback behaviour is when your application uses too much.
Video memory (sometimes called texture memory, or VRAM) is usually at a premium on small devices, especially embedded systems. But low-resolution graphics with 4 colours and no animation just don’t cut it in today’s design-hungry world. This becomes an issue when you need to create or load off-screen bitmaps for caching or compositing in order to improve UI performance.
If your application runs too slowly, that’s a PROBLEM. If your application crashes due to excessive memory usage, that’s a REAL PROBLEM!
A good example of tackling this problem was a feature I needed to build into an application (YouView TV), which required a lot of external images (in this case, TV channel logos) to be loaded onto a Set Top Box device for in-process caching. All Bitmap surfaces on the platform were allocated as ARGB but, luckily, the images were only monochrome. So I could store the images efficiently AND make them available for hardware-accelerated compositing by storing just a single channel of each image in a separate channel of an in-process cache Bitmap surface.
You can see in this image what a mish-mash of logos this creates. For debugging purposes, you can also see that very same cache surface repeated in 3 columns to its right, with only the R, G or B channel visible in each column. As such, we have effectively ‘folded’ 3 times as many images into the same space by overlaying the 3 channels. When a logo is requested, a dictionary look-up finds the Bitmap slot it lives in (given by a Rectangle and a BitmapDataChannel number). When a new image is loaded, its single channel is copied into the next available slot in the cache, in FIFO fashion.
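To make the mechanics concrete, here’s a minimal ActionScript 3 sketch of that kind of channel-packed cache. The class name, slot dimensions and key type are my own inventions for illustration (the real implementation differs), but it shows the core moves: a Dictionary mapping a logo key to a Rectangle plus a BitmapDataChannel, slots recycled in FIFO order, and a single copyChannel() call to pack each monochrome logo into the R, G or B channel of the shared surface.

```actionscript
package {
    import flash.display.BitmapData;
    import flash.display.BitmapDataChannel;
    import flash.geom.Point;
    import flash.geom.Rectangle;
    import flash.utils.Dictionary;

    // Hypothetical sketch of the channel-packed cache described above.
    // Slot size, class and method names are assumptions, not the real code.
    public class ChannelPackedCache {
        private static const SLOT_W:int = 128;  // assumed logo width
        private static const SLOT_H:int = 72;   // assumed logo height
        private static const CHANNELS:Array = [
            BitmapDataChannel.RED, BitmapDataChannel.GREEN, BitmapDataChannel.BLUE];

        private var _surface:BitmapData;                    // shared ARGB cache surface
        private var _slots:Dictionary = new Dictionary();   // key -> { rect, channel }
        private var _keysBySlot:Vector.<String>;            // slot index -> key (for eviction)
        private var _next:int = 0;                          // next slot to (re)use, FIFO
        private var _cols:int;
        private var _rows:int;

        public function ChannelPackedCache(cols:int = 4, rows:int = 4) {
            _cols = cols;
            _rows = rows;
            _keysBySlot = new Vector.<String>(cols * rows * CHANNELS.length, true);
            // Opaque surface: the alpha channel is deliberately left unused.
            _surface = new BitmapData(cols * SLOT_W, rows * SLOT_H, false, 0x000000);
        }

        // Packs one channel of a freshly loaded (monochrome) logo into the next
        // slot, evicting whatever previously occupied that slot (FIFO).
        public function store(key:String, logo:BitmapData):void {
            var index:int = _next % _keysBySlot.length;
            var evicted:String = _keysBySlot[index];
            if (evicted != null) {
                delete _slots[evicted];
            }

            // Slot index -> which cell of the grid, and which colour channel.
            var channel:uint = CHANNELS[index % CHANNELS.length];
            var cell:int = int(index / CHANNELS.length);
            var rect:Rectangle = new Rectangle(
                (cell % _cols) * SLOT_W, int(cell / _cols) * SLOT_H, SLOT_W, SLOT_H);

            // The logo is monochrome, so any single source channel carries it all.
            _surface.copyChannel(logo, new Rectangle(0, 0, SLOT_W, SLOT_H),
                                 new Point(rect.x, rect.y),
                                 BitmapDataChannel.RED, channel);

            _slots[key] = { rect: rect, channel: channel };
            _keysBySlot[index] = key;
            _next++;
        }

        // Returns { rect, channel } for a cached logo, or null if not cached.
        public function lookup(key:String):Object {
            return _slots[key];
        }

        public function get surface():BitmapData {
            return _surface;
        }
    }
}
```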
You might be wondering: if all Bitmaps are ARGB anyway, why didn’t you use the alpha channel too? Good question! The alpha channel of the cache surface wasn’t used because of the ‘pre-multiplication’ problems you would run into, though these can be worked around if you can ensure there are no zero-alpha pixels. The result is an in-process cache that requires no image decoding to composite images: the cached logos are not byte arrays or otherwise compressed, so we can just do a super-quick ‘copy channel’ operation to blit a logo wherever it is needed, which will be hardware accelerated, assuming both the source and target surfaces live in VRAM at the time.
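For completeness, here’s a hypothetical companion to the sketch above showing that blit: getting a cached logo back out is just a look-up plus channel copies, with the stored channel written into the R, G and B channels of the destination to reconstruct the grey-scale image. The blitLogo() function and its parameters are assumptions for illustration only.

```actionscript
import flash.display.BitmapData;
import flash.display.BitmapDataChannel;
import flash.geom.Point;

// Hypothetical helper: composites a cached monochrome logo onto 'dest'
// at 'where', using the ChannelPackedCache sketched above.
function blitLogo(cache:ChannelPackedCache, key:String,
                  dest:BitmapData, where:Point):void {
    var slot:Object = cache.lookup(key);
    if (slot == null) {
        return; // not cached; the caller would load and store() it instead
    }

    // No decoding and no per-pixel loop: the single stored channel is copied
    // into R, G and B of the destination, reproducing the grey-scale logo.
    var targets:Array = [BitmapDataChannel.RED,
                         BitmapDataChannel.GREEN,
                         BitmapDataChannel.BLUE];
    for each (var destChannel:uint in targets) {
        dest.copyChannel(cache.surface, slot.rect, where,
                         slot.channel, destChannel);
    }
}
```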
Like magic, we’ve stored triple the number of images in the same space as a regular Bitmap FIFO would hold. #FTW 🙂