Unity project structure – reminds me of Flash Pro

The more I play with Unity, the more it feels like the workflow of Flash Pro, whereby you attach scripts to instances of actors on the stage.

I’m not talking about the ‘pure code’ approach that all ActionScripters have become used to by now, but the decentralised collection of independent scripts attached to timeline movieclip instances (behaviours, if you will).

So far, though, I’m liking it. Unity (and its scripting IDE, MonoDevelop) feels like all the best bits of Virtools, Flash Pro, Flash Builder, Blender, Poser and 3D Studio MAX. I’m hoping that in future versions the Unity Editor and MonoDevelop will be more tightly integrated, or even combined. And, just maybe, that JavaScript will be replaced with a TypeScript option – although I like C# anyway.

For those who have played with any of the above and want a good tutorial to get stuck right into games development with Unity, this (intermediate-level) tutorial is great.

Unity – what Virtools should have become

I’ve been playing around with the Unity game engine and keep having flashbacks to a little-known 3D game dev tool I used over 10 years ago – called Virtools.


Most people will not have heard of Virtools, which was called NemoCreation in a previous life, until legal problems forced a rebrand. It was way ahead of its time, supporting real-time ray-tracing, hardware acceleration, full Havok physics and an easy-to-integrate multiplayer solution, long before the more popular Shockwave 3D and WildTangent had anything close.

The workflow was very similar to Unity’s, and I had originally pinned a lot of hope on it. But the platform was too restrictive, provided no sensible scripting alternatives and was prohibitively expensive to license. Getting hold of a trial license was notoriously difficult, too. So there were simply not enough people creating worthwhile content for it.

The licensing fubar, and possibly the fact that it was way ahead of its time, were probably its death knell. But I tip my hat to what could have been.

How to verify that something implements an interface with Mockito

If you want to write a unit test which verifies that something implements a particular interface, or extends a particular class, here’s how…

I recently wanted to use Mockito to verify that a particular object passed to some method implements a given interface. I noticed that the anyOf Hamcrest matcher inspects the exact type of the given object and therefore fails if it doesn’t find an exact match:
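Something along these lines illustrates the failure (a reconstruction rather than the original code: Renderer, Drawable and the sprite classes are made-up names, and argThat here is the Mockito 1.x variant that accepts Hamcrest matchers):

```java
import static org.hamcrest.CoreMatchers.anyOf;
import static org.hamcrest.CoreMatchers.instanceOf;
import static org.mockito.Matchers.argThat;
import static org.mockito.Mockito.verify;

// Verifies that render() received one of a fixed list of concrete types.
// Any other Drawable implementation fails the match, so a refactor that
// introduces a new implementing class silently breaks the test.
verify(renderer).render(argThat(anyOf(
        instanceOf(CircleSprite.class),
        instanceOf(SquareSprite.class))));
```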

This hampers refactoring to interfaces and polymorphism somewhat. The simple solution was to use a custom matcher with argThatMatches:
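A minimal sketch of such a matcher, built on Mockito’s plain argThat (the argThatMatches helper mentioned above presumably wraps something similar; implementsInterface is an illustrative name):

```java
import org.mockito.ArgumentMatcher;
import static org.mockito.Matchers.argThat;
import static org.mockito.Mockito.verify;

// Matches any argument whose runtime type implements (or extends) the
// given type, instead of demanding one exact concrete class.
static ArgumentMatcher<Object> implementsInterface(final Class<?> type) {
    return new ArgumentMatcher<Object>() {
        @Override
        public boolean matches(Object argument) {
            return type.isInstance(argument);
        }
    };
}

// Usage: passes for any Drawable, present or future. The cast only
// satisfies the compiler; the matcher performs the real check.
verify(renderer).render((Drawable) argThat(implementsInterface(Drawable.class)));
```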

#WIN

Sneaky tricks for developing on small devices – Bitmap ‘folding’

One of the most problematic constraints when developing applications for mobile or Set Top Box is video memory (AKA VRAM). You often will not have control over how much video memory is allocated to your application, or what the fallback behaviour is when your application uses too much.

This can be a pain, especially when you wish to create some off-screen surfaces for caching or compositing to improve performance.

If your application runs too slowly, that’s an ISSUE; if your application crashes due to excessive memory usage, that’s a PROBLEM.

I recently built a feature into an application which required a bunch of external images to be loaded onto a Set Top Box device for in-process caching. All Bitmap surfaces on the platform are allocated as ARGB, but the images were monochrome, so I could store the images efficiently AND make them available for hardware-accelerated compositing, by storing just the single monochrome channel of each image in a separate channel of an in-process cache Bitmap surface.

You can see in the attached image what a mish-mash of logos is created. For debugging purposes, you can also see the same surface viewed with all but one channel turned off. When a logo is requested, a dictionary finds the Bitmap slot it lives in (given by a Rectangle and a BitmapDataChannel number). When a new image is loaded, its single channel is copied into the next available slot in the cache, in FIFO fashion.
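A rough ActionScript sketch of the packing and retrieval, under some illustrative assumptions (64×64 slots, a 4×4 grid per channel, incoming images already scaled to slot size, and no eviction bookkeeping when a FIFO slot is recycled):

```actionscript
import flash.display.BitmapData;
import flash.display.BitmapDataChannel;
import flash.geom.Point;
import flash.geom.Rectangle;
import flash.utils.Dictionary;

private static const SLOT:int = 64;        // illustrative slot size
private static const PER_PLANE:int = 16;   // 4x4 slots per channel
private static const CHANNELS:Array =
    [BitmapDataChannel.RED, BitmapDataChannel.GREEN, BitmapDataChannel.BLUE];

// One ARGB surface holds three planes of monochrome logos (alpha unused).
private var cache:BitmapData = new BitmapData(SLOT * 4, SLOT * 4, false, 0);
private var slots:Dictionary = new Dictionary(); // url -> { rect, channel }
private var nextSlot:int = 0;                    // FIFO write position

// Copy one channel of a freshly decoded image into the next cache slot.
public function store(url:String, mono:BitmapData):void {
    var index:int = nextSlot++ % (PER_PLANE * CHANNELS.length);
    var channel:uint = CHANNELS[int(index / PER_PLANE)];
    var i:int = index % PER_PLANE;
    var rect:Rectangle =
        new Rectangle((i % 4) * SLOT, int(i / 4) * SLOT, SLOT, SLOT);
    cache.copyChannel(mono, mono.rect, rect.topLeft,
                      BitmapDataChannel.RED, channel);
    slots[url] = { rect: rect, channel: channel };
}

// Fan the stored plane back out to RGB; no image decoding required.
public function fetch(url:String):BitmapData {
    var slot:Object = slots[url];
    if (!slot) return null;
    var out:BitmapData = new BitmapData(SLOT, SLOT, false, 0);
    out.copyChannel(cache, slot.rect, new Point(), slot.channel, BitmapDataChannel.RED);
    out.copyChannel(cache, slot.rect, new Point(), slot.channel, BitmapDataChannel.GREEN);
    out.copyChannel(cache, slot.rect, new Point(), slot.channel, BitmapDataChannel.BLUE);
    return out;
}
```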

The alpha channel of the cache surface wasn’t used, due to the pre-multiplication problems you’ll get – though this can be worked around if you can ensure there are no zero-alpha pixels. The result is an in-process cache requiring no image decoding to composite images, storing 3 times as many images as a regular Bitmap FIFO. #WIN

O’Donnell’s 3 Laws of User Dynamics

Remember kids: You don’t have to please ALL your customers, just the ones you want to keep.

The first law: conservation of users

Users are not created or destroyed, only converted to or from using a competitor’s product.

All other things being equal, you should remember that brand loyalty counts for less and less these days. If you don’t want to do what your users are asking for, maybe your competitors will.

The second law: the progress of disorder

Evolve your product in order to fight ‘design entropy’.

As users’ needs change, so should your product. If you don’t have the right metrics in place, you won’t realise that your product is obsolete until you become the next Woolworths.

The third law: chasing perfection

Invent something idiot-proof and someone will invent a better idiot.

Humans are complex and often unpredictable. Therefore, human-computer interfaces are, at best, imperfect systems. Test your design assumptions and always have documented justification for design decisions that can be re-tested against new iterations of your product.

Wait for it…

Improving the usability of an interface, by making it do more or less what the user actually expects of it, is a pretty good route to an overall sound user experience. Yet, there’s one key mistake almost every interface I’ve looked at makes in this regard – what I call the Spurious Stimulus Response. That is…

responding to user input in the context of stimuli they haven’t been given the time to acknowledge.

For example, consider a dialog box suddenly appearing centre-stage in an interface as the result of an incoming message, an error condition, or some such situation. If the user clicks on it, or presses a key, within 250 milliseconds of it appearing, then they are not reacting to its appearance – they were probably intending to action something else.

In an interface without a pointing device, such as on TV, using a traditional remote control, the problem is exacerbated, since the user only need press OK to commit whatever action happens to come into focus. When using an infra-red remote control, this problem is compounded further, because it usually takes a fraction of a second for the receiving device to recognise the incoming IR pulses as something it needs to deal with and push that signal up through the software stack to the UI layer.

I suggest employing a simple fix, which draws its inspiration from the behaviour of nerve stimulation – called the refractory period. That is…

to render a control inactionable, after a change in UI state, long enough for the user to assimilate said state change.

The few interfaces which loosely employ such a technique usually do so as a side-effect of having an animated transition when, for example, a dialog appears. Sometimes a button or other control is disabled until the transition completes. This is actually a very good way of ‘easing the user into’ the change in UI state – but a refractory period should still be implemented where animation is absent.
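In code, the mechanism can be as small as a timestamp check. A minimal ActionScript sketch (commitAction is an illustrative handler, and the 250 ms figure echoes the reaction-time point above):

```actionscript
import flash.events.MouseEvent;
import flash.utils.getTimer;

private static const REFRACTORY_MS:int = 250; // tune per state change
private var lastStateChange:int = 0;

// Call this whenever the UI state changes underneath the user:
// a dialog appears, focus jumps, new content pops into view.
public function noteStateChange():void {
    lastStateChange = getTimer();
}

// Gate every OK / click / RETURN handler through the same check.
public function onConfirm(e:MouseEvent):void {
    if (getTimer() - lastStateChange < REFRACTORY_MS) {
        return; // the user cannot have reacted to the new state yet
    }
    commitAction();
}
```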

The refractory period could be some function of the amount/importance of information provided. A good example of this is the Firefox Add-on confirmation dialog, which forces you to wait a few seconds, rather than letting you mash RETURN and install some random plugin.

But in its simplest incarnation, a refractory period may be used just to filter out double-click mania. Please start designing this kind of behaviour into your UIs – your users will thank you.

Chinese Handwriting Recognition App

Unlike the iPad, the BlackBerry PlayBook has rather poor international keyboard support, with no method for entering Chinese characters. I like the way iOS achieves this, so I went about building my own version in ActionScript.

This was mainly an academic exercise, and a way to help me learn to write Chinese. My approach was to sample the strokes drawn into the app as a series of up to 8 directions, including the relative position of a given stroke to the previous one, again as one of 8 directions. This pattern, represented as a string of numbers, is then put through a smoothing algorithm to remove unnecessary noise, and compared with a dictionary of pattern keys, each of which may map to one or more suggested characters. If there are no hits, an advanced search occurs, mutating the given pattern in specific ways to find alternative suggestions. It can also suggest characters based on the next most likely character to follow the one you’ve just entered, using frequency analysis of given sample text.
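For a flavour of the sampling step, here is a simplified ActionScript sketch of quantising one stroke into direction digits (the real app also encodes each stroke’s position relative to the previous stroke, and the smoothing shown here is just collapsing repeated directions):

```actionscript
import flash.geom.Point;

// Quantise the heading between two sample points into one of 8
// directions, in 45-degree steps starting at east (screen y points down).
private function direction(a:Point, b:Point):int {
    var angle:Number = Math.atan2(b.y - a.y, b.x - a.x);
    return int(Math.round(angle / (Math.PI / 4)) + 8) % 8;
}

// Encode a stroke as a string of direction digits, collapsing
// consecutive repeats as a crude smoothing pass. The resulting
// string is used as a key into the pattern dictionary.
private function encodeStroke(points:Vector.<Point>):String {
    var pattern:String = "";
    var last:int = -1;
    for (var i:int = 1; i < points.length; i++) {
        var d:int = direction(points[i - 1], points[i]);
        if (d != last) {
            pattern += d;
            last = d;
        }
    }
    return pattern;
}
```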

The app will eventually be a PlayBook app, but is still unfinished and currently in ‘training mode’, so that character patterns can be trained into the database. It’s currently primed with some simplified sample data, from which it picks the most popular few characters to learn. If you write Chinese, give it a go.