All posts by Liam O'Donnell

Give C# a chance

Since I started playing around with the Unity game engine, I much prefer C# to JS for scripting. This is probably because I’m coming from an ActionScript background, but I’ve always cringed at the loose typing in JS. If I’m coerced into doing any JS, I usually prefer cooking it up with TypeScript, too. Call me stoical, but that just feels more like a real programming language – yeah I said it!

JS does have a ‘low centre of gravity’, since everyone and his dog seems to be rocking JS now. But I’d encourage anyone starting out with Unity to give C# a chance.

David Beckham Academy games

I was asked by Tribal DDB to create this multi-award winning games site for a joint campaign between The David Beckham Academy and Volkswagen.

Play the game here

I used filmed action of Beckham himself and the video alpha channel support of Flash 8, which was rather new at the time. I was consulted on all aspects of filming and production. After we agreed game concepts, I met with the film crew at ‘Off The Radar’; I drew up the shot-list and we decided to shoot on HD at 50p, to get the cleanest possible key.

Tech used

  • Panasonic VariCam
  • Green screen at the Flash Studio Norte, Madrid
  • After Effects
  • Photoshop
  • Flash
  • A football

FAQ

  • How long did the project take? Nearly 12 weeks
  • Did you meet Beckham? Yes, he’s very easy to work with

I went to Madrid for the green screen shoot with Beckham as visual effects supervisor and was responsible for treating and editing the footage for game production and related media.

I stitched some of the sequences together with morphs to create almost seamless blends between shots and added filters to the keyed out footage to match lighting and improve the compositing.
I coded a 3-D projection system in Flash and perspective-matched each scene, so that objects move around the screen convincingly. I worked with the designers at DDB, who created the backgrounds and UI elements. I included ‘Express Install’ capability for those users without Flash Player 8, so 95% of users can upgrade painlessly from Flash Player 6 or 7. All the games are mouse-controlled and were tested by kids for usability and game balancing.
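The projection itself boils down to a perspective divide. Here’s a minimal sketch (in Java for illustration – the class and values are mine, not the original Flash code): a point further from the screen plane gets scaled down towards the vanishing point.

```java
// Sketch: basic perspective projection with focal length f. Points at z = 0
// sit on the screen plane; larger z shrinks them towards the origin.
class Projector {
    static double[] project(double x, double y, double z, double f) {
        double scale = f / (f + z); // perspective divide
        return new double[] { x * scale, y * scale };
    }
}
```

Matching each filmed scene is then a matter of picking a focal length and origin that agree with the camera used on the shoot.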

A ‘Site of the Day’ winner at Adobe and Favourite Website Awards, also a runner-up at Creative Showcase.

How to verify that something implements an interface with Mockito

If you want to write a unit test which verifies that something implements a particular interface, or extends a particular class, here’s how…

I recently wanted to use Mockito to verify that a particular object passed to some method implements a given interface. I noticed that the anyOf Hamcrest matcher inspects the exact type of the given object and therefore fails if it doesn’t find an exact match.

This hampers refactoring to interfaces and polymorphism somewhat. The simple solution was to use a custom matcher with argThatMatches.
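The original snippet isn’t shown here, so as a sketch of the idea (class and method names below are illustrative, not from the post): instead of comparing exact classes, the matcher accepts anything assignable to the target interface.

```java
// Sketch: a matcher that accepts any argument implementing the given type,
// rather than requiring an exact class match.
class ImplementsMatcher {
    private final Class<?> type;

    ImplementsMatcher(Class<?> type) { this.type = type; }

    // True if the argument is an instance of the interface (or any subtype)
    boolean matches(Object argument) {
        return type.isInstance(argument);
    }
}
// In a Mockito test you'd wrap this predicate with argThat, roughly:
//   verify(mock).someMethod(argThat(new ImplementsMatcher(SomeInterface.class)::matches));
```

Because `isInstance` walks the whole type hierarchy, refactoring a concrete class behind an interface no longer breaks the verification.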

#WIN

Sneaky tricks for developing on small devices – Bitmap ‘folding’

One of the most problematic constraints when developing applications for mobile or Set Top Box is video memory (AKA VRAM). You often will not have control over how much video memory is allocated to your application, or what the fallback behaviour is when your application uses too much.

This can be a pain, especially when you wish to create some off-screen surfaces for caching or compositing to improve performance.

If your application runs too slowly, that’s an ISSUE; if your application crashes due to excessive memory usage, that’s a PROBLEM.

I recently built a feature into an application which required a bunch of external images to be loaded into a Set Top Box device for in-process caching. All Bitmap surfaces are allocated on the platform as ARGB, but the images were monochrome, so I could store the images efficiently AND make them available for hardware-accelerated compositing by storing just the single monochrome channel of each image in a separate channel of an in-process cache Bitmap surface.

You can see in the attached image what a mish-mash of logos is created. For debugging purposes, you can also see the same surface viewed with each other channel turned off. When a logo is requested, a dictionary finds the relevant Bitmap slot it exists in (given by a Rectangle and BitmapDataChannel number). When a new image is loaded, its single channel is copied into the next available slot in the cache, in FIFO fashion.

The alpha channel of the cache surface wasn’t used, due to the pre-multiplication problems you’ll get – though this can be worked around if you can ensure there are no zero-alpha pixels. The result is an in-process cache requiring no image decoding to composite images, storing 3 times as many images as a regular Bitmap FIFO. #WIN
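The packing scheme can be sketched in a few lines (shown here in Java for illustration – the production code was ActionScript, and the names are mine): three monochrome images share the R, G and B channels of one ARGB buffer, with the alpha byte left opaque.

```java
// Sketch: pack three monochrome images (one byte per pixel, 0-255) into the
// R, G and B channels of a single ARGB cache surface, and pull one back out.
class ChannelCache {
    // Combine three equal-sized single-channel images into one ARGB buffer
    static int[] pack(int[] red, int[] green, int[] blue) {
        int[] cache = new int[red.length];
        for (int i = 0; i < cache.length; i++) {
            cache[i] = 0xFF000000                 // alpha left opaque (unused)
                     | ((red[i]   & 0xFF) << 16)
                     | ((green[i] & 0xFF) << 8)
                     |  (blue[i]  & 0xFF);
        }
        return cache;
    }

    // Recover one stored image by shifting its channel back down (16 = R, 8 = G, 0 = B)
    static int[] extract(int[] cache, int shift) {
        int[] out = new int[cache.length];
        for (int i = 0; i < cache.length; i++) {
            out[i] = (cache[i] >> shift) & 0xFF;
        }
        return out;
    }
}
```

Each cached image costs a third of a full ARGB surface, which is where the 3x capacity comes from.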

Super Kickups returns… as an Android app

Addictive as crack – apparently

I finally decided to relaunch one of my classic old Flash games as a mobile app. I picked Super Kickups since I think the game mechanic works nicely on a touchscreen – that, and someone once told me that it’s addictive as crack – which I take as a compliment. I’ve recently added a leaderboard and I’ll be adding various other features, such as pickups and achievements, in the near future.

I used Starling for the rendering and it runs smoothly at 60fps on both my shiny new HTC and my somewhat slower old Sony Xperia. It’d be good to know how it runs on a range of other devices – especially if anyone finds the performance slow.

I’m not planning to target iOS just yet though, being a bigger fan of Android (perhaps Apple could make iOS less of a pain in the proverbial). You can download it from the Google Play Store here.

O’Donnell’s 3 Laws of User Dynamics

Remember kids: You don’t have to please ALL your customers, just the ones you want to keep.

The first law: conservation of users

Users are not created or destroyed, only converted to or from using a competitor’s product.

All other things being equal, you should remember that brand loyalty counts for less and less these days. If you don’t want to do what your users are asking for, maybe your competitors will.

The second law: the progress of disorder

Evolve your product, in order to fight ‘design entropy’.

As users’ needs change, so should your product. If you don’t have the right metrics in place, you won’t realise that your product is obsolete until you become the next Woolworths.

The third law: chasing perfection

Invent something idiot-proof and someone will invent a better idiot.

Humans are complex and often unpredictable. Therefore, human-computer interfaces are, at best, imperfect systems. Test your design assumptions and always have documented justification for design decisions that can be re-tested against new iterations of your product.

Wait for it…

Improving the usability of an interface, by making it do more or less what the user actually expects of it, is a pretty good route to an overall sound user experience. Yet, there’s one key mistake almost every interface I’ve looked at makes in this regard – what I call the Spurious Stimulus Response. That is…

responding to user input in the context of stimuli they haven’t been given the time to acknowledge.

For example, consider a dialog box, suddenly appearing centre-stage in an interface, as the result of an incoming message, error condition, or some such situation. If the user were to click on it, or press a key, within 250 milliseconds of it appearing, then they are not reacting to its appearance – instead they were probably intending to action something else.

In an interface without a pointing device, such as on TV, using a traditional remote control, the problem is exacerbated, since the user need only press OK to commit whatever action happens to come into focus. When using an infra-red remote control, this problem is compounded further, because it usually takes a fraction of a second for the receiving device to recognise the incoming IR pulses as something it needs to deal with and push that signal up through the software stack to the UI layer.

I suggest employing a simple fix, which draws its inspiration from the behaviour of nerve cells – the refractory period. That is…

to render a control inactionable, after a change in UI state, long enough for the user to assimilate said state change.

The few interfaces which loosely employ such a technique usually do so just as a side-effect of having some animated transition when a dialog appears, for example. Sometimes, a button or other control is disabled until the transition completes. This is actually a very good way of ‘easing the user into’ the change in UI state – but a refractory period should still be implemented where animation is absent.

The refractory period could be some function of the amount/importance of information provided. A good example of this is the Firefox Add-on confirmation dialog, which forces you to wait a few seconds, rather than letting you mash RETURN and install some random plugin.

But, in its most simple incarnation, a refractory period may simply be used to filter out double-click mania. Please start designing this kind of behaviour into your UIs – your users will thank you.
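In its simplest form, the refractory period is just a timestamp check. A minimal sketch (in Java, with illustrative names): record when the UI state last changed, and refuse to action input until the refractory window has elapsed.

```java
// Sketch: a refractory gate - input arriving too soon after a UI state
// change is treated as a response to the *previous* state and ignored.
class RefractoryGate {
    private final long refractoryMillis;
    private long lastStateChange;

    RefractoryGate(long refractoryMillis) {
        this.refractoryMillis = refractoryMillis;
    }

    // Call whenever a dialog appears, focus moves, etc.
    void onStateChange(long nowMillis) {
        lastStateChange = nowMillis;
    }

    // Only action input once the user has had time to see the new state
    boolean accept(long nowMillis) {
        return nowMillis - lastStateChange >= refractoryMillis;
    }
}
```

A window on the order of 250 ms filters out both the spurious-stimulus case and most double-click mania, without feeling sluggish.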

IPTV development with AIR for TV

Having just finished building the UI for the YouView set top box, I thought I’d share some of my insights into best practices when building applications for such resource constrained devices. The YouView UI is AIR based, written in AS3 and runs in Stagecraft 2, also known as ‘AIR for TV’. As the name suggests, AIR for TV is a special version of the Flash player for embedded systems, such as set top boxes. The first incarnation of the YouView UI (back when it was just codenamed ‘canvas’) was for Stagecraft version 1, which means coding in AS2 and suffering the abysmal performance that comes with running on AVM1 (ActionScript Virtual Machine 1).

Despite the delays and the need to code the UI from scratch in AS3, I think it was ultimately the right decision. Stagecraft 2 is a much better platform – Stagecraft 2.5.1 to be precise. It was a great opportunity to learn how to write optimal code and use hardware acceleration effectively on resource constrained devices. I’ll be doing some tutorials on this in the near future, but here are the key points to observe when developing for such platforms:

  • Limit the complexity of your display list hierarchy
    This may sound obvious, but ensure you nest as few things as possible, keeping the display list as shallow as possible. Stagecraft needs to traverse through the display list, working out which areas of the screen to redraw. This is similar to how the desktop Flash Player handles redraws, but with some key differences to how it decides what needs redrawing, how it tackles moving display objects and how it delegates the work of updating the frame buffer – a subject for another time. Most importantly, if you’re developing for a resource constrained device (such as mobile or set top box), you’ll have very limited CPU power, even if the device’s GPU (graphics processing unit) affords you great hardware acceleration capabilities. So, before Stagecraft can delegate any work to hardware, it enumerates changes in the display list in software. Complex display list hierarchies are a headache for some of the low-powered CPUs found in mobiles and set top boxes and this’ll show up as rocketing CPU usage, low framerates and few spare ‘DoPlays’ in Stagecraft (spare work cycles). By keeping your display list shallow, with only the bare minimum of display objects on stage at any one time, you’ll be making life easier for Stagecraft by doing less work on the CPU – whether or not graphics are drawn in software or hardware.
  • Benchmark everything
    When building an application for a resource constrained device, you should be able to run each component in isolation, to assess its drain on CPU and system/video memory. There’s no point optimising the hell out of one component, when it’s actually another one that is the source of your performance bottleneck.
  • Know thine hardware acceleration capabilities
    There’s no point blindly using cacheAsBitmap and cacheAsBitmapMatrix everywhere, if it’s not going to speed things up on the target device. Worse still, too many cacheAsBitmaps and you may be just wasting valuable video memory, or causing unnecessary redraws (again, the subject of a future article). A lot of platforms will accelerate bitmaps, even if stretched, but not necessarily if flipped or rotated. Alpha on bitmaps (or anything cached as bitmap) will usually be accelerated too, but this is not necessarily the case with all colour transforms. Benchmarking any component you’re building will quickly tell you where you might have pushed it too far, but you should also have a way of verifying that a particular set of transforms is indeed hardware accelerated. Stagecraft provides this when using its –showblit command line parameter. I’ll be going into more detail about this in another post.
  • Mind your memory
    When using various hardware acceleration tricks, especially on resource constrained devices, video memory is at a premium and usually in limited supply. You will need to know the limits and have a way of seeing how much video memory your application is using at any one time – ensuring you dispose and dereference any bitmaps you’re finished with too. If your platform uses DirectFB for its rendering, as YouView does, the executable ‘dfdump’ can show you just where your video memory is going. This is something else I’ll get into in another article.
  • Blit blit blit
    This refers to blitting, where blocks of pixels are copied from one bitmap to another. The technique is used a lot in games, where graphics performance is critical; you should arm yourself with the basics of how old video games used blitting of multiple things to a single bitmap for performance and video memory efficiency.
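At its core, a blit is just a scanline-by-scanline copy between flat pixel buffers. A minimal sketch (in Java for illustration – in ActionScript this is what BitmapData.copyPixels does for you):

```java
// Sketch: classic blit - copy a w-by-h block of pixels from one bitmap
// buffer to another. Buffers are flat int arrays with a known row width.
class Blitter {
    static void blit(int[] src, int srcW, int sx, int sy,
                     int[] dst, int dstW, int dx, int dy,
                     int w, int h) {
        for (int row = 0; row < h; row++) {
            // Copy one scanline of the block at a time
            System.arraycopy(src, (sy + row) * srcW + sx,
                             dst, (dy + row) * dstW + dx, w);
        }
    }
}
```

Keeping many sprites packed into one source bitmap means a single surface allocation and cheap, cache-friendly copies.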

I’ll probably go into more depth on each of these things in forthcoming posts. Stay tuned.