YouView is a Smart TV service in the UK, which runs on Set Top Boxes and Smart TVs. I worked on the original UI for years and thought I’d share some insights into best practices for building applications on such resource constrained devices.
The YouView UI team was quite large and the code base humongous. During my time there, much of my work focused on developing the Electronic Programme Guide and optimising the performance / resource usage of the UI – which involved everything from developing best practices for hardware acceleration and performance of the UI, to minimising its memory footprint in various ways (e.g. optimising code and design assets).
I eventually self-defined a role as what I’d call a ‘UX Engineer’: being responsible for bridging the gap between design and development, so that only the right things get built, and ensuring that design assumptions and delivery are tested and continuously improved.
To assist in this, I created several innovations, including:
- A blit scrolling technique to improve scrolling performance in components. Other titles for this technique include “viewport chunk scrolling” or “the moveable pixel feast”. The idea was to do graphics work at as low a level as possible, with no deeply nested components (we also could not use Starling or Stage3D)
- A faster, lighter (though much less feature-rich) animation library than TweenMax / TweenLite, specifically tailored for our brand of UI animation. We could not use open source and needed to shave every byte off heap usage
- A version of green-threading which prepared graphics in the background while the UI was being used (and then disposed of the source assets to save memory)
- A fully unit-tested object pooling utility to optimise object reuse (a minimal sketch of the idea follows this list)
- A CI/CD pipeline using Jenkins to automate UI builds
- A method of quickly blitting text for off-screen compositing which was faster than the native textfield component
- A fast mechanism for automatically truncating all textfields across the UI with the ellipsis, to fit with design specs (even if someone had not built this into their component)
- A skinnable 9-slice scaling UI component which loaded small graphics to build larger components (we did not want to bundle any Flex framework into the UI)
- An efficient bitmap FIFO buffer which saved memory by storing monochromatic graphics in separate colour channels of a single bitmap (a technique I call bitmap folding)
- A technique for making code more testable with dependency injection that creates no API ‘leakage’, which I call dependency injection by extension
- I also created many graphical test cases for Adobe, to isolate rendering bugs in the Stagecraft (AIR for TV) graphics engine
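To give a flavour of the simplest of these, here is a minimal object pooling sketch in AS3. It illustrates the general idea only, not the actual YouView utility: instances are recycled rather than re-allocated, which keeps garbage collection pauses down on a slow CPU.

```actionscript
package
{
    // Minimal pool sketch: hand out recycled instances where possible and only
    // construct new ones when the pool is empty. Callers reset state themselves.
    public class ObjectPool
    {
        private var _factory:Function;
        private var _available:Vector.<Object> = new Vector.<Object>();

        public function ObjectPool(factory:Function)
        {
            _factory = factory;
        }

        public function acquire():Object
        {
            return _available.length > 0 ? _available.pop() : _factory();
        }

        public function release(instance:Object):void
        {
            _available.push(instance);
        }
    }
}
```

Usage would look something like `var pool:ObjectPool = new ObjectPool(function():Object { return new Sprite(); });`, with `pool.release(sprite)` called instead of letting the instance fall out of scope.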
Finally, I documented some of these techniques to present them as best practices for YouView’s content partners, who were developing their own portal applications for the service.
Techie breakdown
The YouView UI which launched the service in 2012 was AIR (Adobe Integrated Runtime) based, written in AS3 and ran on Stagecraft 2, AKA Adobe AIR for TV. As the name suggests, AIR for TV is a special version of the AIR runtime for embedded systems, such as Set Top Boxes and Smart TVs. The prototype of the YouView UI (back when it was code-named Project Canvas) was for Stagecraft 1, which meant coding in AS2 and suffering the abysmal performance that comes with running on AVM1 (ActionScript Virtual Machine 1).
You can read more about YouView here.
Despite the delays and the need to code the UI from scratch in AS3, I think it was ultimately the right decision. Stagecraft 2 is a much better platform (Stagecraft 2.5.1 to be precise). It was a great opportunity to learn how to write highly optimised code and use hardware acceleration effectively on such a resource constrained device – the YouView Set Top Box is based on a System On Chip, with a pretty fast GPU (at least for 2D compositing), but a rather slow CPU and limited memory.
Regardless of which technology you’re using, here are some key things to be aware of when developing for such platforms:
Limit your pre-composite calculations
In AIR/Stagecraft we’re talking about limiting display list hierarchy complexity; in HTML5 we’re talking about reducing DOM complexity. Stagecraft (or whatever display engine you’re using) needs to traverse the display list (or DOM), working out which areas of the screen to redraw. This is somewhat similar to how the desktop Flash Player handles redraws, but with some key differences in how it decides what needs redrawing, how it tackles moving display objects and how it delegates the work of updating the frame buffer – a subject for another time. Most importantly, if you’re developing for a resource constrained device (such as a mobile or Set Top Box), you’ll have very limited CPU power, even if the device’s GPU (Graphics Processing Unit) affords you great hardware acceleration capabilities. So, before you can delegate any graphics compositing work to hardware, you must enumerate changes in the display list in software, right? Complex display lists are a headache for some of the low-powered CPUs found in mobiles and Set Top Boxes, and this will show up as rocketing CPU usage, low framerates and few spare work cycles – AKA ‘DoPlays’ in Stagecraft. By keeping your display list shallow, with only the bare minimum of display objects on stage at any one time, you’ll be making life easier for the CPU – whether the graphics are thereafter drawn in software or hardware.
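As a rough illustration (not YouView code): if a subtree of the display list is static, you can flatten it into a single Bitmap so the engine only has one display object to track instead of dozens. A minimal AS3 sketch, assuming the subtree has non-zero dimensions and no longer needs to be interactive:

```actionscript
package
{
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.DisplayObjectContainer;

    public class DisplayListFlattener
    {
        // Replaces a container's children with a single pre-rendered Bitmap,
        // trading a little memory for a much shallower display list.
        public static function flatten(container:DisplayObjectContainer):Bitmap
        {
            // Snapshot the whole subtree into one off-screen bitmap.
            var snapshot:BitmapData = new BitmapData(
                int(Math.ceil(container.width)), int(Math.ceil(container.height)),
                true, 0x00000000);
            snapshot.draw(container);

            // Discard the original children so the renderer has less to traverse.
            while (container.numChildren > 0)
            {
                container.removeChildAt(0);
            }

            var flattened:Bitmap = new Bitmap(snapshot);
            container.addChild(flattened);
            return flattened;
        }
    }
}
```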
Benchmark everything
When building an application for a resource constrained device, you should be able to run each component in isolation, to assess its drain on CPU and system/video memory. There’s no point optimising the hell out of one component, when it’s actually another one that is the source of your performance bottleneck.
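A minimal AS3 harness for this kind of isolated benchmarking might look like the sketch below (illustrative only: it reports frame rate and AS3 heap usage; video memory needs the platform-level tooling mentioned later).

```actionscript
package
{
    import flash.display.DisplayObject;
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.system.System;
    import flash.utils.getTimer;

    // Illustrative harness: run a single component on an otherwise empty stage
    // and trace frames-per-second and AS3 heap usage once a second.
    public class ComponentBenchmark extends Sprite
    {
        private var _frames:int = 0;
        private var _lastSample:int;

        public function ComponentBenchmark(componentUnderTest:DisplayObject)
        {
            addChild(componentUnderTest);
            _lastSample = getTimer();
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(event:Event):void
        {
            _frames++;
            var now:int = getTimer();
            if (now - _lastSample >= 1000)
            {
                // System.totalMemory covers the runtime heap only; video memory
                // has to be checked with platform tools.
                trace("fps:", _frames, "heap bytes:", System.totalMemory);
                _frames = 0;
                _lastSample = now;
            }
        }
    }
}
```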
Know thine hardware acceleration capabilities
There’s no point blindly using cacheAsBitmap and cacheAsBitmapMatrix everywhere, if it’s not going to speed things up on the target device. Worse still, too many cacheAsBitmaps and you may be just wasting valuable video memory, or causing unnecessary redraws (again, the subject of a future article). A lot of platforms will accelerate bitmaps, even if stretched, but not necessarily if flipped or rotated. Alpha on bitmaps (or anything cached as bitmap) will usually be accelerated too, but this is not necessarily the case with all colour transforms. Benchmarking any component you’re building will quickly tell you where you might have pushed it too far, but you should also have a way of verifying that a particular set of transforms is indeed hardware accelerated. Stagecraft provides this when using its –showblit command line parameter. I’ll be going into more detail about this in another post.
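For example (an illustrative frame-script sketch, not a recommendation for any particular platform), you might cache a vector-heavy panel once and then stick to the transforms you have verified are accelerated:

```actionscript
import flash.display.Sprite;
import flash.geom.Matrix;

// Draw a vector-heavy panel once, cache it as a bitmap, then only apply
// transforms the target hardware is known to accelerate.
var panel:Sprite = new Sprite();
panel.graphics.beginFill(0x224466);
panel.graphics.drawRoundRect(0, 0, 400, 300, 16, 16);
panel.graphics.endFill();
addChild(panel);

panel.cacheAsBitmap = true;
panel.cacheAsBitmapMatrix = new Matrix(); // AIR-only; lets the cached surface be transformed

// Usually cheap on cached bitmaps: translation and alpha.
panel.x += 10;
panel.alpha = 0.8;

// Verify before relying on these: flips, rotation and some colour transforms
// may fall back to software or force the cache to be redrawn.
// panel.rotation = 45;
// panel.scaleX = -1;
```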
Remember memory
When using various hardware acceleration tricks, especially on resource constrained devices, video memory is at a premium and usually in limited supply. You will need to know your limits and have a way of seeing how much video memory your application is using at any one time – ensuring you reclaim memory from bitmaps you’ve finished with, too. Under Stagecraft, this isn’t as simple as dereferencing the bitmap, and it may not be that simple on your platform either – find out! If your platform uses DirectFB for its rendering, as YouView does, the ‘dfdump’ tool can show you just where your video memory is going. This is something else I’ll get into in another article.
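At minimum, on the ActionScript side that means explicitly disposing of BitmapData you no longer need, rather than waiting for the garbage collector. A sketch of the general pattern (whether this alone is enough depends on your platform):

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;

// General pattern: take the bitmap off the display list, then free its pixel
// buffer explicitly instead of waiting for garbage collection.
function releaseBitmap(bitmap:Bitmap):void
{
    if (bitmap.parent)
    {
        bitmap.parent.removeChild(bitmap);
    }
    if (bitmap.bitmapData)
    {
        bitmap.bitmapData.dispose(); // frees the pixel memory immediately
        bitmap.bitmapData = null;
    }
}
```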
Blit blit blit
This refers to blitting, where blocks of pixels are copied from one bitmap to another. The technique is used a lot in games, where graphics performance is critical; you should arm yourself with the basics of how old video games blitted multiple graphics onto a single bitmap for performance and video memory efficiency.
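In AS3 the core of it is BitmapData.copyPixels(). A bare-bones sketch (illustrative only; the 64×64 sprite sheet layout is an assumption): one canvas Bitmap sits on the stage, and animation frames are copied into it rather than existing as separate display objects.

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

// One canvas bitmap on stage; frames are copied into it from a pre-rendered
// sprite sheet, so the display list never grows.
var canvas:BitmapData = new BitmapData(1280, 720, false, 0x000000);
addChild(new Bitmap(canvas));

// Placeholder sheet: a horizontal strip of ten 64x64 frames.
var spriteSheet:BitmapData = new BitmapData(640, 64, true, 0x00000000);
var frameRect:Rectangle = new Rectangle(0, 0, 64, 64);
var destination:Point = new Point(100, 100);

function drawFrame(frameIndex:int):void
{
    frameRect.x = frameIndex * 64;   // select a frame from the strip
    canvas.lock();                   // defer screen updates while we write pixels
    canvas.copyPixels(spriteSheet, frameRect, destination);
    canvas.unlock();
}
```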