12 April 2017

Using a messenger to communicate between objects in HoloLens apps

Intro

I know I am past the ‘this has to work one way or the other’ stage of a new development environment when I start spinning off reusable pieces of software. I know that I am really getting comfortable when I start thinking about architecture and building architectural components.

Yes my friends, I am going to talk architecture today.

Unity3D-spaghetti – I need a messenger

Coming from the clean and well-fleshed-out world of UWP XAML development using MVVM (more specifically MVVMLight), Unity3D can be a bit overwhelming. Apart from the obvious – the 3D stuff itself – there is no such thing as data binding, and there is no templating (not sure how that would translate to a 3D environment anyway). In samples (including some of my own), components communicate by getting references to each other from parent or child objects and calling methods on those components. That approach breaks as soon as 3D object hierarchies change, and it’s very easy to make spaghetti code of epic proportions. Plus, it hard-links classes. Speech commands especially come in just ‘somewhere’ and need to go ‘somewhere else’. How lovely it would be to have a kind of messenger, like the one in MVVMLight. There is a kind of messaging in Unity, but it involves sending messages up or down the 3D object hierarchy – there is no way to reach other branches in that big tree of objects without a lot of hoopla. And to make things worse, you need to call methods by (string) name. A very brittle arrangement.

Good artists steal…

I will be honest up front – most of the code in the Messenger class that I show here is stolen. From here, to be precise. But although it solves one problem – it creates a generic messenger – it still uses strings for event names. So I adapted it quite heavily to use typed parameters, and now – in usage – it very much feels like the MVVMLight messenger. I also made it a HoloToolkit Singleton. I am not going to type out all the details – have a look at the code if you feel so inclined. This article concentrates on using it.

So basically, you simply drag this thing anywhere in your object hierarchy – I tend to have a special empty 3D object “Managers” for that in the scene – and then you have the following simple interface:

  • To subscribe to a message of MyMessageType, simply write code like this
Messenger.Instance.AddListener<MyMessage>(ProcessMyMessage);

private void ProcessMyMessage(MyMessage msg)
{
    //Do something
}
  • To broadcast a message of MyMessageType to all listeners, simply call
 Messenger.Instance.Broadcast(new MyMessage());
  • To stop being notified of MyMessageType, call
Messenger.Instance.RemoveListener<MyMessage>(ProcessMyMessage);
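For the curious, the core mechanism of such a typed messenger can be sketched as follows. Note this is a simplified illustration of the idea, not the actual class – the real one derives from the HoloToolkit Singleton and has more error handling:

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of a typed messenger: the message *type* is the
// dictionary key, so no brittle string event names are needed anywhere.
public class SimpleMessenger
{
    private readonly Dictionary<Type, Delegate> _listeners =
        new Dictionary<Type, Delegate>();

    public void AddListener<T>(Action<T> listener)
    {
        Delegate existing;
        _listeners.TryGetValue(typeof(T), out existing);
        // Combine with any listeners already registered for this type
        _listeners[typeof(T)] = (Action<T>)existing + listener;
    }

    public void RemoveListener<T>(Action<T> listener)
    {
        Delegate existing;
        if (_listeners.TryGetValue(typeof(T), out existing))
        {
            _listeners[typeof(T)] = (Action<T>)existing - listener;
        }
    }

    public void Broadcast<T>(T message)
    {
        Delegate d;
        if (_listeners.TryGetValue(typeof(T), out d) && d != null)
        {
            ((Action<T>)d)(message);
        }
    }
}
```

Because the compiler resolves the type parameter, a typo in a message name is now a compile error instead of a silently dropped message.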

Example setup and usage

I have revisited my good old CubeBouncer, the very first HoloLens app I ever made and wrote about (although I never published it as such), which basically uses everything a HoloLens can do: gaze, gestures, speech recognition, spatial awareness, interaction of Holograms with reality, occlusion, and spatial sound. Looking back at it now it looks a bit clumsy, partially because of my vastly increased experience with Unity3D and HoloLens, but also because of the progress of the HoloToolkit. But anyway, I rewrote it using the new HoloToolkit and the Messenger class, as a working demo of the Messenger.

In the Managers object that I use to group, well, manager-like scripts and objects, I have placed a number of components that basically control the whole app. You see the Messenger, a ‘Speech Command Handler’ and a standard HoloToolkit Keyword Manager. This is an enormous improvement over building a keyword-recognizing script manually, as I did in part 4 of the original CubeBouncer series. In case you need info on how the Keyword Manager works, see this post on moving objects by gestures, where it plays a supporting role.

Note, by the way, that I also assigned a keyboard key to all speech commands. This enables quick testing within the Unity3D editor without actually speaking, thus preventing distraction (or funny looks and/or remarks from your colleagues) ;).

 

 

The SpeechCommandHandler class is really simple:

using CubeBouncer.Messages;
using UnityEngine;
using HoloToolkitExtensions.Messaging;

namespace CubeBouncer
{
    public class SpeechCommandHandler : MonoBehaviour
    {
        public void CreateNewGrid()
        {
            Messenger.Instance.Broadcast(new CreateNewGridMessage());
        }

        public void Drop(bool all)
        {
            Messenger.Instance.Broadcast(new DropMessage { All = all });
        }

        public void Revert(bool all)
        {
            Messenger.Instance.Broadcast(new RevertMessage { All = all });
        }
    }
}

It basically forwards all speech commands as messages, for anyone who is interested. Notice, as well, that in the Keyword Manager both “drop” and “drop all” call the same method, but if you look at the image above you will see a checkbox that is only selected for ‘drop all’. This is pretty neat: the editor that goes with this component automatically generates UI elements for the target method’s parameters.

Indeed, very similar to how it's done in MVVMLight
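The message classes themselves are trivial data carriers. Their exact contents are not shown in this post, but given the code above they presumably look something like this:

```csharp
namespace CubeBouncer.Messages
{
    // The message type itself is what listeners subscribe to,
    // so a message without payload can be an empty class
    public class CreateNewGridMessage
    {
    }

    public class DropMessage
    {
        public bool All { get; set; }
    }

    public class RevertMessage
    {
        public bool All { get; set; }
    }
}
```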

Example of consuming messages

image

Now the CubeManager, the thing that creates and manages cubes (it was called “MainStarter” in the original CubeBouncer), is sitting in the HologramCollection object. This is for no other reason than to prove the point that the location of the consumer in the object hierarchy doesn’t matter. It is (now) the only consumer of messages. Its Start method goes like this:

void Start()
{
    _distanceMeasured = false;
    _lastInitTime = Time.time;
    _audioSource = GetComponent<AudioSource>();
    Messenger.Instance.AddListener<CreateNewGridMessage>(p=> CreateNewGrid());
    Messenger.Instance.AddListener<DropMessage>( ProcessDropMessage);
    Messenger.Instance.AddListener<RevertMessage>(ProcessRevertMessage);
}

It subscribes to three types of messages. To process those messages, you can use either a lambda expression or a regular method, as shown above.

The processing of the message is like this:

public void CreateNewGrid()
{
    foreach (var c in _cubes)
    {
        Destroy(c);
    }
    _cubes.Clear();

    _distanceMeasured = false;
    _lastInitTime = Time.time;
}
	
private void ProcessDropMessage(DropMessage msg)
{
    if(msg.All)
    {
        DropAll();
    }
    else
    {
        var lookedAt = GetLookedAtObject();
        if( lookedAt != null)
        {
            lookedAt.Drop();
        }
    }
}

private void ProcessRevertMessage(RevertMessage msg)
{
    if (msg.All)
    {
        RevertAll();
    }
    else
    {
        var lookedAt = GetLookedAtObject();
        if (lookedAt != null)
        {
            lookedAt.Revert(true);
        }
    }
}

For Drop and Revert, if the “All” property of the message is set, all cubes are dropped (or reverted), and that’s it – the rest works as before. Well, kind of – for the actual revert method I now used two LeanTween calls to move the Cube back to its original location. The actual code shrank from two methods of about 42 lines together to one 17-line method that actually has an extra check in it. So as an aside – please use iTween, LeanTween or whatever for animations. Don’t write them yourself. Laziness is a virtue ;).
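That 17-line method is not shown here, but the core of a LeanTween-based revert might look roughly like this – a sketch with assumed field names, not the real code:

```csharp
// Sketch only: _originalPosition and _originalRotation are assumed fields
// captured when the cube was created
public void Revert(float timeSpan)
{
    // Two LeanTween calls replace the hand-rolled per-frame animation:
    // one tween moves the cube back, the other rotates it back
    LeanTween.moveLocal(gameObject, _originalPosition, timeSpan);
    LeanTween.rotateLocal(gameObject, _originalRotation.eulerAngles, timeSpan);
}
```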

Conclusion

I will admit it’s a bit of a contrived example, but speech recognition is now a thing on its own and it’s up to any listener to act on it – or not. My newest application “Walk the World” uses the Messenger quite a bit more extensively: components all over the app communicate via the Messenger to receive voice commands, show help screens, and detect that the user has moved too far from the center so the map should be reloaded. These components do not need hard links to each other; they just put their observations on the Messenger, and other components can choose to act. This makes re-using components for application assembly a lot easier. Kind of like in the UWP world.

01 April 2017

A ‘roller blind’ animation component for HoloLens applications

Intro

This is a cool little tidbit that I wrote in the course of a project that required 2D images to be shown in a 3D context. Not everyone has 3D models of everything, and sometimes you just have a schematic drawing, a picture, or whatever 2D thing you want to see on a ‘screen’. That does not excuse you from making a good user experience, and I made this little tidbit to give just that little extra pizzazz to a boring ol’ 2D image in a HoloLens. So you click on a 3D device, out comes a schematic drawing. It’s 2D in a 3D context, not 2D per se.

Say what?

It basically pulls an image ‘down’, like a roller blind being expanded. Typically you ‘hang’ this below a ceiling or below the object the image is connected to or needs to clarify. Without much further ado, let’s just show what I mean:

Nice, eh? I have the feeling my study is becoming quite a household scene by now for the regular readers of this blog ;).

Setting the stage

Being the lazy b*st*rd that I am, I just made a branch of my previous post’s code, deleted the floating 3D objects, implemented the roller blind as a Togglable, and used the 3D ‘button’ already present in the project as a Toggler. So now I have something to click on to start the animation. I also reused my DynamicTextureDownloader from the post before that to show an image of daffodils in my front garden, because that was less work than actually making a texture. Did I mention already I can be pretty lazy at times?

Unity setup

What we have, initially, is just the button and a floating plane. There are some important settings for its rotation and its scaling. The rotation is there because we want to see the image head-on. This is important, as a Plane has only one side – if you look at it from behind you look right through it.

The default position of a plane is horizontal, like a flat area. So in order to see it head-on, we first need to rotate it 90⁰ over x (that will put it upright) and then 180⁰ over z to see the ‘front’ side (which used to be the top). Don’t try to be a clever geometrist and say “Hey, I can just rotate it 270⁰ and then I will look at the front side as well”. Although you are technically right, the picture will then appear upside down. So unless you are prepared to edit all your textures to compensate for that, follow the easy path, I’d say. The picture on the left shows the result, and the picture below it shows how it’s done.

 

image

So to the Plane, called RollerBlind, we add two components. First the DynamicTextureDownloader. Set its Image Url to http://www.schaikweb.net/dotnetbyexample/daffodils.jpg, which is a nice 1920x1080 picture of dwarf daffodils on the edge of my front garden

(yeah, I was a bit pessimistic about the ‘return rate’ and the buggers turned out to be multi-headed too – so I am aware it’s a bit overdone for the space). It is important not to check the “Resize Plane” checkbox here, as that will totally mess up the animation. You have to manually make sure the x and z(!!) sizes match up. As the image is 1920x1080, the horizontal size is 1.78 times the vertical size. The horizontal scale is 0.15, so the vertical scale should be 0.15 / 1.78 = 0.084375. Be aware that a standard plane’s size at scale 1 is 10x10m, so this makes the resulting picture appear as about 150 by 84 cm. I will never understand why standard shapes in Unity3D have different default sizes at scale 1 – for instance a cube is 1x1x1m, a capsule roughly 1x2x1m, and a plane 10x10m – but I am sure there’s logic in that. Although I still fail to see it. But I digress.
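This calculation is easy to get wrong by hand. A hypothetical little helper (not in the actual project) makes the intent clearer:

```csharp
// Given a texture's pixel dimensions and the plane's x scale,
// return the z scale that keeps the picture's aspect ratio intact
public static float AspectCorrectZScale(float textureWidth, float textureHeight,
    float xScale)
{
    return xScale * (textureHeight / textureWidth);
}

// AspectCorrectZScale(1920, 1080, 0.15f) gives the 0.084375 used above
```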

I stuck the RollerBlind into a “Holder” and used that to position the whole thing. I placed it 1.5 meters from the camera (the same distance as the rotating button) and 70cm below it. Go convert that to feet if you must ;)

image

The only thing missing is the RollerBlindAnimator itself

Code! Finally!

We start with the first part – basically all the state data and the code that collects the initial data:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions
{
    public class RollerBlindAnimator : Togglable
    {
        public float PlayTime = 1.5f;

        private AudioSource _selectSound;

        private float _foldedOutScale;
        private float _size;

        private bool _isBusy = false;

        public bool IsOpen { get; private set; }

        public void Start()
        {
            _selectSound = GetComponent<AudioSource>();
            _foldedOutScale = gameObject.transform.localScale.z;
            var startBounds = gameObject.GetComponent<Renderer>().bounds;
            _size = startBounds.size.y;
            AnimateObject(gameObject, 0, 0);
        }
    }
}

_selectSound is a placeholder for a sound to be played when this thing is toggled, but the button already takes care of that, so it’s not used here. Now the roller blind is going to be animated over what appears to be y, but since it’s rotated over x, that is now the z-axis. So we collect the initial z scale. We also collect the object’s apparent y-size. That we get from the bounds of the renderer, which apparently gives back its values in absolute terms, not taking rotation into account. And then Start quickly ‘closes’ the blind so it’s primed for use.

Why do we need to know this size and center stuff?

The issue, my friends, is that a Plane’s origin is at its center. So if you start shrinking the scale over the z-axis, the plane does not collapse to the top or the bottom, but to its center. So rather than a roller blind going up, we get the effect of an old tube CRT TV being turned off (who is old enough to remember that? I am) – the picture collapses to a line in the middle. To compensate for that, for every decrease of scale by n, we need to move the whole thing 0.5*n up.

And that is exactly what AnimateObject does:

private void AnimateObject(GameObject objectModel, float targetScale, float timeSpan)
{
    _isBusy = true;

    var moveDelta = (targetScale == 0.0f ? _size : -_size) / 2f;
    LeanTween.moveLocal(objectModel,
            new Vector3(objectModel.transform.localPosition.x,
                objectModel.transform.localPosition.y + moveDelta,
                objectModel.transform.localPosition.z), timeSpan);

    LeanTween.scale(objectModel,
               new Vector3(objectModel.transform.localScale.x,
                   objectModel.transform.localScale.y,
                   targetScale), timeSpan).setOnComplete(() => _isBusy = false);
}

As you can see, I have taken a liking to LeanTween over iTween, as I find it a bit easier to use – no hashes but method chaining, which supports IntelliSense, so I don’t have to remember that many names (did I mention I was lazy already?).

The last thing missing is the Toggle method that you can override from Togglable. That’s not very special and is only mentioned here for the sake of completeness:

public override void Toggle()
{
    if (_isBusy)
    {
        return;
    }

    AnimateObject(gameObject, !IsOpen ? _foldedOutScale : 0, PlayTime);

    if (_selectSound != null)
    {
        _selectSound.Play();
    }

    IsOpen = !IsOpen;
}

Two final things

We need to tell the toggle button that it needs to toggle the roller blind when it’s tapped. So we set the Toggler’s Size value to 1 and drag the RollerBlind object from the hierarchy to the Element 0 field.

image

And the very final thing: this app accesses the internet. It downloads the daffodil image, after all. Do not forget to set the ‘internet client’ capability. I did forget, and spent an interesting time cursing my computer before the penny dropped. Sigh.

Concluding words

I hope I have once again added a fun new tool to your toolbox to make HoloLens experiences just a bit better. I notice I am getting a bit of a feel for this – past the ‘OMG how am I ever going to make this work’ stage, and now into spinning off reusable components and pieces of architecture. As I said, I was too lazy to set up a proper repo, so I’ve put this in a branch of the code belonging to the previous blog post. Enjoy!