11 February 2017

A behaviour for dynamically loading and applying image textures in HoloLens apps


After two nearly code-less posts it’s time for something more code-heavy, although it’s still out of my ordinary mode of operation: it’s fairly short and not much code. So rest assured, not the code equivalent of “War and Peace”, as usual ;)

For both a customer app and one of my own projects I needed to be able to download images from an external source to use as a texture on a Plane (a flat object with essentially only width and height). Now that's not that hard – the Unity scripting reference has a clear example of how to do that. But for my own project I also needed to be able to change the image (that is, reload an image on a Plane that had already loaded a texture before), and I had to make sure the image was not distorted by width/height ratio differences between the Plane and the image. That required a radically different approach.

Enough talk: code!

The behaviour itself is rather small and simple, even if I say so myself. It starts as follows:

using UnityEngine;

public class DynamicTextureDownloader : MonoBehaviour
{
    public string ImageUrl;
    public bool ResizePlane;

    private WWW _imageLoader = null;
    private string _previousImageUrl = null;
    private bool _appliedToTexture = false;

    private Vector3 _originalScale;

    void Start()
    {
        _originalScale = transform.localScale;
    }

    void Update()
    {
        CheckLoadImage();
    }
ImageUrl is a property you can set either from code or from the editor, and it points to the location of the desired image on the web. ResizePlane (default false) determines whether or not you want the Plane to resize to fit the width/height ratio of the image. You may not always want that, as the center of the Plane stays in place. For instance, if the Plane's top is aligned with something else and the resizing makes the Plane's height decrease, that may ruin your experience.

The first three private fields are status variables; the last one holds the original scale of the Plane before we started messing with it. We need to retain that, as we can't trust the scale once we start changing it – I have seen the Plane become smaller and smaller when I alternated between portrait and landscape pictures.

The crux is the CheckLoadImage method:

private void CheckLoadImage()
{
    // No image requested
    if (string.IsNullOrEmpty(ImageUrl))
    {
        return;
    }

    // New image set - reset status vars and start loading new image
    if (_previousImageUrl != ImageUrl)
    {
        _previousImageUrl = ImageUrl;
        _appliedToTexture = false;
        _imageLoader = new WWW(ImageUrl);
    }

    if (_imageLoader.isDone && !_appliedToTexture)
    {
        // Apparently an image was loading and is now done. Get the texture and apply
        _appliedToTexture = true;
        var renderer = GetComponent<Renderer>();
        // First destroy the texture currently on the renderer...
        Destroy(renderer.material.mainTexture);
        // ...then apply the newly downloaded one
        renderer.material.mainTexture = _imageLoader.texture;

        if (ResizePlane)
        {
            DoResizePlane();
        }

        // Finally destroy the WWW's texture as well, or memory will
        // run out on a real device (see the note below)
        Destroy(_imageLoader.texture);
    }
}

This might seem mightily odd if you are a .NET developer, but that's because of the nature of Unity. Keep in mind this method is called from Update, so it's called 60 times per second. The flow is simple:

  • If ImageUrl is null or empty, just forget it
  • If an ImageUrl is set and it is a new one, reset the two status variables and make a new WWW object. You can see this as a kind of WebClient. Key to know: it's async, and it has an isDone property that only becomes true once the download has finished. So while it's still downloading, the next part is skipped
  • If, however, the WWW object is done, we need to apply its texture – but only if we did not do so before. So then we actually apply it.

So after the image is applied, the first if clause is false, because we have an ImageUrl. The second one is false, because the last loaded URL is equal to the current one. And finally, the last if clause is false because the texture is already applied. So although it's called 60 times a second, it essentially does nothing – until you change the ImageUrl.

An important note – you see that I first destroy the existing Renderer's texture, then load the WWW's texture into the renderer, and then destroy the WWW's texture again. If you use a lot of these objects in one project and have them change image regularly, Unity's garbage collection process cannot keep up, and on a real device (i.e. a HoloLens) you will soon run out of memory. The nasty thing is that this won't happen anytime soon in the editor or an emulator. This is why you always need to test on a real device. And this is also why I had to update this post later ;)

Resizing in correct width/height ratio

Finally the resizing; that's not very hard, it turns out. As long as you keep in mind that the Plane's 'natural posture' is 'flat on the ground': what you tend to think of as X is indeed X, but what you tend to think of as Y is in fact Z in the 3D world.

private void DoResizePlane()
{
    // Keep the longest edge at the same length
    if (_imageLoader.texture.width < _imageLoader.texture.height)
    {
        transform.localScale = new Vector3(
            _originalScale.z * _imageLoader.texture.width / _imageLoader.texture.height,
            _originalScale.y, _originalScale.z);
    }
    else
    {
        transform.localScale = new Vector3(
            _originalScale.x, _originalScale.y,
            _originalScale.x * _imageLoader.texture.height / _imageLoader.texture.width);
    }
}
It also turns out that a loaded texture handily comes with its own size attributes, which makes it pretty easy to do the resize.

Sample app

I made a really trivial HoloLens app that shows two (initially empty) Planes floating in the air. I have given them different width/height ratios on purpose (in fact they mirror each other):


I have dragged the behaviour onto both of them. One will show my blog's logo (a landscape picture) and one comes from an Azure Blob container and shows a portrait-oriented picture of… well, see for yourself. If you deploy this app – or just hit the play button in Unity – they will initially show this:


If you air tap on one of the pictures you get this:


In the second picture, the pictures are a lot larger as they fit 'better' into the Plane. If you tap a picture again, they will swap back. The only thing that actually changes is the value of ImageUrl.
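The tap handling itself is not part of the behaviour. As a minimal sketch – assuming the 2017-era UnityEngine.VR.WSA.Input gesture API, with made-up class names and placeholder URLs – swapping images on an air tap could look like this:

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input;

// Hypothetical companion script: swaps the ImageUrl of the
// DynamicTextureDownloader on the same game object on every air tap.
public class ImageSwapper : MonoBehaviour
{
    // Placeholder URLs - substitute your own images
    public string FirstUrl = "http://example.com/landscape.jpg";
    public string SecondUrl = "http://example.com/portrait.jpg";

    private GestureRecognizer _recognizer;

    void Start()
    {
        _recognizer = new GestureRecognizer();
        _recognizer.SetRecognizableGestures(GestureSettings.Tap);
        _recognizer.TappedEvent += (source, tapCount, headRay) => SwapImage();
        _recognizer.StartCapturingGestures();
    }

    private void SwapImage()
    {
        var downloader = GetComponent<DynamicTextureDownloader>();
        // Setting ImageUrl is all that is needed - CheckLoadImage
        // picks up the change on the next Update
        downloader.ImageUrl =
            downloader.ImageUrl == FirstUrl ? SecondUrl : FirstUrl;
    }
}
```

Note that this sketch reacts to any tap, wherever you gaze; a real app would first check that the gaze cursor is actually on this Plane.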

Bonus brownie points and eternal fame, by the way, for the first one who correctly tells me who the person in the picture is, and at what occasion this picture was taken :D.

Some concluding remarks

If you value your sanity, don't mess with the rotation of the Planes themselves. Just pack them into an empty game object and rotate that, so the local coordinate system is still 'flat' as far as the Planes are concerned. I have seen all kinds of weird effects when messing with the Planes' orientation and location. I am not sure why this is – probably things I don't quite understand yet – but you have been warned.
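In code, that advice amounts to something like this sketch (the class and field names are mine, not from the project):

```csharp
using UnityEngine;

// Hypothetical setup script: wraps this Plane in an empty parent at runtime
// and rotates the parent, leaving the Plane's own rotation untouched.
public class RotatedPlaneSetup : MonoBehaviour
{
    public float YRotation = 45f; // degrees to turn the holder, not the Plane

    void Start()
    {
        var holder = new GameObject("PlaneHolder");
        holder.transform.position = transform.position;
        // Keep the world position while re-parenting
        transform.SetParent(holder.transform, true);
        // Rotate the holder; the Plane's local coordinate system stays 'flat'
        holder.transform.Rotate(Vector3.up, YRotation);
    }
}
```

You can of course also just create the empty parent in the editor hierarchy and rotate that; the effect is the same.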

This is (or will be) part of something bigger, but I wanted to share the lessons learned separately, to prevent them from getting lost in some bigger picture. In the meantime, you can view the demo project with source here.

03 February 2017

Using a HoloLens scanned room inside your HoloLens app

HoloLens can interact with reality – that's why it's a mixed reality device, after all. The problem is, sometimes the reality you need is not available. For example, if the app needs to run in a room or facility at a (client) location you only have limited access to, and you have to position stuff relative to places in that room. Now you can of course use the simulation in the device portal and capture the room.


You can save the room into an XEF file and upload that to (another) HoloLens. That works fine at runtime, but in Unity it doesn't help you much with getting a feeling for the space, and moreover, it messes with your HoloLens' spatial mapping. I don't like to use it on a real live HoloLens.

There is another option though, in the 3D view tab:


If you click "Update", you will see a rendering of the space the HoloLens has recognized, belonging to the 'space' it finds itself in – basically, the whole mesh. In this case, the ground and first floor of my house (I never felt like taking the HoloLens to the 2nd floor). If you click "Save", it will offer to save a SpatialMapping.obj. That is the simple Wavefront Object format, and that is something you actually can use in Unity.

Only it looks rather crappy, even if you know what you are looking at. This is the side of my house, with at the bottom left the living room (the rectangular thing is the large cupboard), on top of that the master bedroom* with the slanted roof, and if you look carefully, you can see the stairs coming up from the hallway a little right of center, at the bottom of the house.


What is also pretty confusing is the fact that meshes have only one side. This has the peculiar effect that in a lot of places you can look into the house from the outside, but not out of the house from within. Anyway – this mesh is way too complex (the file is over 40 MB) and messy.

Fortunately, there's MeshLab. And it's free, too. Thank heavens, because after you have bought a HoloLens you are probably quite out of money ;)

MeshLab has quite a few tools to make your mesh a bit smoother. Usually, when you look at a piece of mesh, like for instance the master bedroom, it looks kinda spikey – see left. But after choosing Filters/Remeshing, Simplification and Reconstruction/Simplification: Quadric Edge Collapse Decimation…


My house starts to look a lot less like the lair of the Shrike – it's more like an undiscovered Antoni Gaudí building now. Hunt down the material used (in the materials subfolder), set it to transparent, and play with color and transparency. I thought this somewhat transparent brown worked pretty well. Although there's definitely still a lot of 'noise', it now definitely looks like my house – good enough for me to know where things are, or ought to be.

Using this Hologram of a space, you can position Kinect-scanned objects or 3D models relative to each other based upon their true relative positions, without actually being in the room. Then, when you go back to the real room, all you have to do is make sure the actual room coincides with the scanned room model – add world anchors to the models inside the room, and then get rid of the room Hologram. Thus, you can use this virtual room as a kind of 'staging area', which I successfully did for a client location to which physical access was very limited indeed.
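For the world-anchor step, a minimal sketch could look like this (Unity 5.5-era UnityEngine.VR.WSA API; the helper class and field names are made up for illustration):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA;

// Hypothetical helper: once a model has been positioned using the scanned-room
// Hologram, pin it to the physical space and hide the room model itself.
public class AnchorInRoom : MonoBehaviour
{
    public GameObject RoomModel; // the imported SpatialMapping.obj, set in the editor

    public void LockInPlace()
    {
        // The WorldAnchor keeps this object fixed relative to the real world
        gameObject.AddComponent<WorldAnchor>();

        // The 'staging area' has done its job
        if (RoomModel != null)
        {
            RoomModel.SetActive(false);
        }
    }
}
```

In later Unity versions the same type lives in UnityEngine.XR.WSA, but the idea is identical: anchor the content, discard the room.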

You might notice a few odd things – there are two holes in the floor of the living room; that is where the black leather couch and the black swivel chair are. As I've noticed before, black doesn't work well with HoloLens spatial mapping. I also find the rectangular area that seems to float about a meter from the left side of the house fascinating. That's actually a large mirror that hangs on the bedroom wall, but the HoloLens spatial mapping apparently sees it as a recessed area. Very interesting. So not only does this give you a view of my house, it also tells you a bit about HoloLens quirks.

The project shown above, with both models (the full and the simplified one) in it, can be found here.

* I love using the phrase "master bedroom" in relation to our house, as it conjures up images of a very large room like those found in a typical USA suburban family house. I can assure you neither our house nor our bedroom does justice to that image. This is a Dutch house. But it is made out of concrete, unlike most houses in the USA, and will probably last way beyond my lifespan.