13 August 2019

Migrating to MRTK2–interacting with the Spatial Map


One of the HoloLens’ great features is the ability to interact with real physical objects. This allows apps to place holograms on or adjacent to real objects, enables occlusion (the ability to let holograms appear to be hidden because they disappear behind physical objects), etc. This is all done using the Spatial Map, a graphical representation of whatever the HoloLens has observed to be present in the physical reality. Interacting with the Spatial Map used to be easy – and it actually still isn't that hard, it’s just that - as with most things in the MRTK2 - quite some cheese has been moved.

This blog post covers a common and a not-so-common scenario for interacting with the Spatial Map:

  1. Placing objects on the Spatial Map
  2. Programmatically enabling and disabling/clearing the Spatial Map

I have included a demo project that allows you to place cylinders on the Spatial Map by air tapping - and you can turn the Spatial Map on and off using a floating button.

Placing objects on the Spatial Map, MRTK2 style

I wrote about this already in November 2017 in my article about finding the floor using a HoloLens. In MRTK2, that process is a bit different. You create a raycast from the camera along the camera viewing angle and try to hit the Spatial Map. For this, you need the Spatial Map layer mask. In the HoloToolkit you could simply access a single property to get that layer mask. Finding it now is a wee bit more complicated. You see, first you need to extract the configuration from the Spatial Awareness System service like this:

var spatialMappingConfig =
    CoreServices.SpatialAwarenessSystem.ConfigurationProfile as
        MixedRealitySpatialAwarenessSystemProfile;
The spatial mapping config contains a property called ObserverConfigurations, containing a list of configurations (apparently taking provisions there might actually be more than one configuration). For each configuration you can take the profile from its ObserverProfile property - which you have to cast to MixedRealitySpatialAwarenessMeshObserverProfile. Then you find the layer used by this config in its MeshPhysicsLayer property.

I repeat - you can find the layer.

That is not the layer mask. It took me quite some time debugging to find out what was going on here - because if you feed that layer number into the raycast, it won't 'see' the Spatial Map. I have no idea why this was changed. Anyway, to get the layer mask, as required by raycast methods, you have to bit shift the actual layer number, like this

1 << observerProfile.MeshPhysicsLayer
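To make that concrete - a small sketch, under the assumption that the mesh observer sits on layer 31 (the layer MRTK2 typically assigns to "Spatial Awareness"; check your own profile):

```csharp
// the layer *number*, as found in MeshPhysicsLayer (31 is an assumption for this example)
int layer = 31;

// the layer *mask* that Physics.Raycast expects: an int with only bit 31 set
int mask = 1 << layer;
```

A layer mask is a bit field, so masks for multiple layers can be OR-ed together - which is exactly what the method below does with |=.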

So what used to be a single property now requires this method:

private static int GetSpatialMeshMask()
{
    if (_meshPhysicsLayer == 0)
    {
        var spatialMappingConfig = 
            CoreServices.SpatialAwarenessSystem.ConfigurationProfile as
                MixedRealitySpatialAwarenessSystemProfile;
        if (spatialMappingConfig != null)
        {
            foreach (var config in spatialMappingConfig.ObserverConfigurations)
            {
                var observerProfile = config.ObserverProfile
                    as MixedRealitySpatialAwarenessMeshObserverProfile;
                if (observerProfile != null)
                {
                    _meshPhysicsLayer |= (1 << observerProfile.MeshPhysicsLayer);
                }
            }
        }
    }

    return _meshPhysicsLayer;
}

private static int _meshPhysicsLayer = 0;

And I added a static backing variable to speed up this process, otherwise this whole loop will be run 60 times a second in my TapToPlaceController, as well as every time you air tap to place a cylinder.

The method to find a point on the Spatial Map is then simply this:

public static Vector3? GetPositionOnSpatialMap(float maxDistance = 2)
{
    RaycastHit hitInfo;
    var transform = CameraCache.Main.transform;
    var headRay = new Ray(transform.position, transform.forward);
    if (Physics.Raycast(headRay, out hitInfo, maxDistance, GetSpatialMeshMask()))
    {
        return hitInfo.point;
    }
    return null;
}

This sits in the updated LookingDirectionHelpers class. In the demo project you can see how it is actually used.

In the TapToPlaceController, the Update method will flip the text from “Please look at the spatial map max 2m ahead of you" to "Tap to select a location" when the gaze strikes the Spatial Map (and the Spatial Map ONLY, not another hologram).

protected override void Update()
{
    _instructionTextMesh.text =
         LookingDirectionHelpers.GetPositionOnSpatialMap(_maxDistance) != null ?
         "Tap to select a location" : _lookAtSurfaceText;
}

If you then air tap, it will place a squatted cylinder on the Spatial Map at the place you are looking at. This is done in the OnPointerDown method - using the same call to LookingDirectionHelpers.GetPositionOnSpatialMap to get a point to place the cylinder.
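In essence, that method looks something like this - a sketch with assumed details (the cylinder scale and the _maxDistance field are my guesses, not necessarily the demo's exact code):

```csharp
public void OnPointerDown(MixedRealityPointerEventData eventData)
{
    var position = LookingDirectionHelpers.GetPositionOnSpatialMap(_maxDistance);
    if (position != null)
    {
        // place a squatted cylinder where the gaze hits the Spatial Map
        var cylinder = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
        cylinder.transform.localScale = new Vector3(0.3f, 0.02f, 0.3f);
        cylinder.transform.position = position.Value;
    }
}
```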

You will notice a floating cube as well. You can't place a cylinder on the cube - the raycast only finds the Spatial Map. Demonstrating that is the cube's sole purpose ;). What might happen is that you place a cylinder on the Spatial Map behind the cube, if the opposite wall is closer than 2 meters. It requires additional logic to handle that situation, but that is beyond the scope of this blog post.

Starting, stopping and clearing the Spatial Map

For some apps, most notably my AMS HoloATC app, the Spatial Map is used to help get an initial place to put an object, but then it needs to go away, so as not to get the view blocked by occlusion. Making the Spatial Map transparent sometimes helps, but then the walls still get in the way of selecting objects, as they block the gaze and other cursors. Long story short – it is sometimes desirable to be able to turn the Spatial Map on and off. And this is actually pretty simple:

public void ToggleSpatialMap()
{
    if (CoreServices.SpatialAwarenessSystem != null)
    {
        if (IsObserverRunning)
        {
            CoreServices.SpatialAwarenessSystem.SuspendObservers();
            CoreServices.SpatialAwarenessSystem.ClearObservations();
        }
        else
        {
            CoreServices.SpatialAwarenessSystem.ResumeObservers();
        }
    }
}

Note that calling ClearObservations is necessary, as merely calling SuspendObservers only stops the updating of the Spatial Map – the graphical representation stays active. This was actually added after feedback from yours truly ;)

As for checking whether the observer is (or observers are) actually running, I have devised this little trick:

private bool IsObserverRunning
{
    get
    {
        var providers =
            ((IMixedRealityDataProviderAccess)CoreServices.SpatialAwarenessSystem)
                .GetDataProviders<IMixedRealitySpatialAwarenessObserver>();
        return providers.FirstOrDefault()?.IsRunning == true;
    }
}

I check if there’s an observer and assume that if the first one is running, the rest probably is too. Although in practice, on a HoloLens, there will be only one observer running anyway.

You can activate and de-activate the Spatial Map by pressing the floating button, to which the SpatialMapToggler behaviour is attached.


If you deploy and run the demo project you will find a button floating before you (in the direction you looked when the app started) that you can use to toggle the Spatial Map, and to the right a little cube. In addition, a text floating in your vision instructs you either to look at the spatial map, or to air tap when you actually do – and then a cylinder will appear. Like in this little video:

30 July 2019

Fixing error Failed to locate “CL.exe” or MSB8020 when deploying IL2CPP solution


You have created a Unity project to create an app using MRTK2, and you want to use the new IL2CPP backend. You open the solution in Visual Studio 2019, you try to deploy it using Build/Deploy, and all the way at the end the compiler complains about “CL.exe” missing.

Alternatively, you might get the slightly more verbose error:

error MSB8020: The build tools for Visual Studio 2017 (Platform Toolset = 'v141') cannot be found. To build using the v141 build tools, please install Visual Studio 2017 build tools.  Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution".


You have most likely used the recommended Unity version (2018.4.2f1) to create the project. This version – the name gives it away – was released before Visual Studio 2019, and therefore assumes the presence of Visual Studio 2017 and its accompanying C++ tool set, ‘v141’. So Unity generated a C++ solution referencing that tool set.

But now it’s 2019, and you have kissed Visual Studio 2017 goodbye and installed Visual Studio 2019. And that comes with tool set v142.


Either you install v141 using the Visual Studio Installer, or you tell the generated solution to use v142. I personally prefer the latter, because newer is always better, right? ;)

Simply right-click the project in the solution that has “(Universal Windows)” after its name, select Properties, tab General, and then the problem is already pretty evident:

Simply select Visual Studio 2019 (v142) for Platform Toolset and you are good to go. This setting will stay as long as you don’t delete the generated project – Unity will simply change what needs to be changed, and leave as much as it can (to speed up the generation process).


Simple fix, but it can be hard to find. Hence a simple blog about it.

29 July 2019

Minimal required software for MRTK2 development for HoloLens 2 and Immersive headsets


A short one this time – and codeless too. You see, next Saturday I will be giving a workshop for MixUG Netherlands about development with the Mixed Reality Toolkit 2 for Immersive headsets, together with my colleague, partner in crime and fellow MVP Alexander Meijers. One of the things that came up preparing for this workshop was what you would actually need to develop with the Mixed Reality Toolkit 2. Since ye olden days of the HoloToolkit, quite a few things have changed – Unity, the minimal OS version, and there’s even a new version of Visual Studio. So I set out to compile a minimal shopping list with a few optional items. Fortunately, our friends over at Microsoft Azure make it quite simple to spin up a totally pristine machine so you don’t run into the typical developer machine issues – multiple versions of Visual Studio with different workloads and a myriad of Unity versions – which make it hard to tell sometimes what is required for what app.

OS version

Easy one. Windows 10, 1809 or (recommended) 1903. Everything I tested, I tested on Windows 10 Pro.

Visual Studio

You will need Visual Studio 2019 Community edition. 2017 will work too, but is much slower. Download Visual Studio 2019 Community from this link and choose the following workloads:

  • UWP development with optional components USB connectivity and C++ (V142) UWP tools checked
  • Game development with Unity with the optional component 2018.3 64-bit editor unchecked

In images:

Make sure you install Visual Studio before Unity.

Offline installer

A fun trick – if you want to make an offline installer for the community edition for these particular workloads, open a command prompt after downloading the installer, and type (on one line):

vs_community.exe --layout c:\vsinstaller
--add Microsoft.VisualStudio.Workload.ManagedGame
--add Microsoft.VisualStudio.Workload.Universal
--add Microsoft.VisualStudio.Component.Windows10SDK.IpOverUsb
--add Microsoft.VisualStudio.ComponentGroup.UWP.VC --lang en-US

In c:\vsinstaller you will then find a complete install ‘layout’ for all the necessary components. Might be useful if you want to prepare multiple computers.


Unity

You will need version 2018.4.2f1, taken from ProjectSettings/ProjectVersion.txt in the mrtk_development branch. This particular version can be downloaded directly from this link.

Choose as minimal components

  • Unity 2018.4.2f1
  • UWP Build Support

Mind you – this sets you up for HoloLens 2 and Windows Mixed Reality Immersive headsets only.

Optional – HoloLens 2 emulator

I have already written extensively about it. You can get it here. Be aware that it requires Hyper-V to be installed. If you have installed Windows 10 1903, it will run right away. On 1809 you will need some trickery.


It’s not that hard to get up and running for MRTK2 development for HoloLens 2 and Windows Mixed Reality Immersive headsets. And now you have a nice complete ‘shopping list’ for when you want to prepare your PC.

14 July 2019

Migrating to MRTK2–manipulating holograms by grabbing


To be honest, the title of this blog post is a bit weird, because in Mixed Reality Toolkit 1 the concept of grabbing was unknown, as HoloLens 1 does not support this kind of gesture. But nevertheless, as I am on this quest of documenting all the gems I discover while migrating an existing app to Mixed Reality Toolkit 2, this is one of the things I came across, so I am shoehorning it into this blog post series – the 8th installment of it already. And the fun thing about this one is that although there is a demo project available, I am going to write no code at all. The whole concept of manipulation by grabbing can be done by simply dragging MRTK2 components on top of a game object.

'Far manipulation'

This is really extremely simple. If you want to make a cube draggable in the 'classic' sense - that is, point a cursor at it, pinch and move your hand, and then the cube follows - all you have to do is add a ManipulationHandler to the cube, with default settings:

And then you simply point the 'hand ray' to it, pinch and move:

But as you could see, I can only drag it. I can't move it anymore - or rotate it - as my hand comes closer, like at the end of the movie. In fact, I can't do anything anymore.

Allow grabbing and moving

For that, we will need to add another script: Near Interaction Grabbable.

And now, if the hand comes close to the cube, you can do all kinds of crazy stuff with it

Some settings to consider

  • If you don't want to allow 'far manipulation' (the first type) but only want to allow manipulation by grabbing, you can uncheck "Allow Far Manipulation" on the ManipulationHandler.
  • If you want to see where the actual grab connection point is, check the "Show Tether When Manipulating" checkbox on Near Interaction Grabbable. This will look like this:

I bet there are more settings to consider, but I haven't tried those yet (or felt the need to do so).


The code of this completely code-less sample can be found here. I can't wait to add code like this to real-world HoloLens 2 projects. But alas, we still need to wait for the device :)

09 July 2019

Migrating to MRTK2– handling tap, touch and focus ‘manually’ (in code)

Wait a minute – you did handle tap before, right?

Indeed, dear reader, I did. But I had also signed up for a MixUG session on Wednesday July 3. And while making demos for that, I learned some other ways to handle interaction. Once again it shows that the best way to learn things is to try to teach them – because the need to explain things induces the need to actually obtain deeper knowledge.

Ye olde way

In the MRTK 1, it was thus:

  • Handle tap – implement IInputClickHandler
  • Handle drag – implement IManipulationHandler
  • Handle focus – implement IFocusable
  • Handle touch – forget it. ;)

The new way

In the MRTK 2 it is now

  • Handle tap – implement IMixedRealityPointerHandler
  • Handle drag – see above
  • Handle focus – implement IMixedRealityFocusHandler
  • Handle touch – IMixedRealityTouchHandler

Now I am going to ignore drag for this tutorial, and concentrate on tap, focus and touch.


IMixedRealityPointerHandler

This requires you to implement four methods:

  • OnPointerDown
  • OnPointerDragged
  • OnPointerUp
  • OnPointerClicked

OnPointerClicked basically intercepts a tap or an air tap, and will work as such when you deploy the demo project to a HoloLens 1. After being bitten by people tapping just a tiny bit too slow and therefore not getting a response, I tend to implement OnPointerDown rather than OnPointerClicked to capture a 'tap' event, but that's a matter of preference.
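In skeleton form, such a handler looks like this (class name is mine; the demo's CodedInteractionResponder implements all three interfaces at once):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class TapResponder : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        // acting here captures the 'tap' as early as possible
    }

    public void OnPointerDragged(MixedRealityPointerEventData eventData)
    {
    }

    public void OnPointerUp(MixedRealityPointerEventData eventData)
    {
    }

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        // or act here for a completed (air) tap
    }
}
```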


IMixedRealityFocusHandler

You will need to implement:

  • OnFocusEnter
  • OnFocusExit

The method names are the same as in MRTK1, only the signatures are not - you now get a parameter of type FocusEventData which gives you some more information - by what the object was focused (we have multiple ways of doing that now!), what the previously focused object was, and what the new focused object is.
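A minimal sketch (class name is mine):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class FocusResponder : MonoBehaviour, IMixedRealityFocusHandler
{
    public void OnFocusEnter(FocusEventData eventData)
    {
        // eventData.Pointer tells you by what pointer the object was focused
    }

    public void OnFocusExit(FocusEventData eventData)
    {
    }
}
```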


IMixedRealityTouchHandler

This requires you to implement:

  • OnTouchStarted
  • OnTouchCompleted
  • OnTouchUpdated
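In skeleton form (class name is mine):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class TouchResponder : MonoBehaviour, IMixedRealityTouchHandler
{
    public void OnTouchStarted(HandTrackingInputEventData eventData)
    {
    }

    public void OnTouchUpdated(HandTrackingInputEventData eventData)
    {
    }

    public void OnTouchCompleted(HandTrackingInputEventData eventData)
    {
    }
}
```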

But there is a twist to that, as we will soon see.


To show off how it all works, I have created a little demo project. You can run it either in the emulator or the Unity editor (or a HoloLens 2, if you are in the HoloLens team or part of a select few parties - I am unfortunately neither).

I have created a little script CodedInteractionResponder that shows off how this works. This script implements all three interfaces I just wrote about. If you open the demo project in Unity, it shows itself like this. All three cubes have the script attached to them.

The text above the cubes will show how many times a cube has been focused, touched or clicked. If you press play and then the space bar, the right hand will appear (or use ctrl for the left hand). Moving the hand can be done by using the mouse - if you move the hand ray over the cubes it will trigger a focus event, if you tap the left mouse button you will trigger a tap, and if you move the hand towards the cube (using the WASD keys) it will trigger a touch event.

That is to say - you would expect that. But that is not always the case.

What happens is this:

  • You can click or focus the green cube, but you cannot touch it. Nothing happens if you try.
  • You can click, focus or touch the red cube, but if you touch it, the number of times it's clicked increases - not the number of touches.
  • Only the blue cube works as expected.

Yet they all have the CodedInteractionResponder. How does this compute?


The best way to explain this, is an image showing the bottom half of all the three cubes

The green cube misses the NearInteractionTouchable. This script is necessary to have touch events fired at all. So unlike IMixedRealityPointerHandler and IMixedRealityFocusHandler, where a mere implementation of the interface will trigger an event, a touch event - that is, methods in IMixedRealityTouchHandler being called - requires the addition of a NearInteractionTouchable script.

And NearInteractionTouchable has another trick up its sleeve. Suppose you have a button - whether it's (air) tapped or actually touched/pressed, you want to activate the same code. If you change "Events to Receive" from its default "Touch" to "Pointer" (as I did with the red cube), touching the cube will actually trigger a pointer event. This saves you a few lines of code. So basically NearInteractionTouchable can act as a kind of event router. And this is why the red cube never shows a touch event - but a click event instead.

Be aware that NearInteractionTouchable needs a collider to work on. This collider needs to be on the same object the script is on. So if you make an empty game object as a hat stand for a bunch of smaller game objects, make sure to manually add a collider that envelops all the game objects, otherwise the 'touch' won't seem to work.

What, no code?

Yes, there is code, but it's pretty straightforward, and if you want to have a look at CodedInteractionResponder, have a look at GitHub. It's actually so simple it felt a little bit overdone to repeat parts of it verbatim in this blog post itself.

19 June 2019

Migrating to MRTK2 - missing Singleton and 3DTextPrefab


If you are migrating from the HoloToolkit to Mixed Reality Toolkit 2 'cold turkey', as I am doing for my AMS HoloATC app, a lot of things break, as I already said in the first post of this series. For things that you can tap, you can simply change the implementing interface from IInputClickHandler or IManipulationHandler to a couple of other interfaces and change the signature a bit - that's not complex, only tedious, depending on how much you have used them.

What I found really hard was the removal of the Singleton class and the 3DTextPrefab. I used both quite extensively. The first one I needed for things like data access classes, as the concept of services that was introduced in the Mixed Reality Toolkit 2 was not yet available, and the other... well, basically all my texts were 3DTextPrefabs, so any kind of user feedback in text format was gone. Because so much breaks at the same time, it's very hard to rebuild your app step by step to a working condition. Basically you have to change everything before something starts to work again. Since I was still learning by doing, there was no way to test if I was doing things more or less right. I got stuck, and took a radical approach.

Introducing HoloToolkitCompatiblityPack

I have created a little Unity package that contains the things that made it hard for me to do a step-by-step migration to the MRTK2, and christened it the HoloToolkitCompatiblityPack. It contains a minimal amount of scripts and meta files to have Singleton and 3DTextPrefab working inside an MRTK2 built app. As I will be migrating more apps, I will probably update the package with other classes that I need. You can find the package file here and the project here. If you take your existing HoloToolkit based app, yank out the HoloToolkit, replace it by the MRTK2, and then import the HoloToolkitCompatiblityPack package, you have a few less things to fix to at least get your app to a minimal state of function again.

Caveat emptor

Yes, of course you can use the HoloToolkitCompatiblityPack in your production app, and ship a kind of Frankenbuild using both MRTK2 and this. Do not let yourself be tempted to do that. See this package as a kind of scaffolding, or a temporary beam to hold up the roof while you are replacing a bearing wall. For 3DTextPrefab I tend to turn a blind eye, but please don't use Singleton again. Convert those classes into services one by one. Then remove the Singleton from the HoloToolkitCompatiblityPack to make sure everything works without it. This is for migration purposes only.

Take the high road, not the low technical debt road.


Making this package helped me forward with the migration quite a lot. I hope it helps others too. I'd love to hear some feedback on this.

29 May 2019

Migrating to MRTK2 - looking a bit closer at tapping, and trapping 'duplicate' events


In my previous post I wrote about how game objects can be made clickable (or 'tappable') using the Mixed Reality Toolkit 2, and how things changed from MRTK1. And in fact, when you deploy the app to a HoloLens 1, my demo actually works as intended. But then I noticed something odd in the editor, and made a variant of the app that went with the previous blog post to see how things work - or might work - in HoloLens 2.

Debugging ClickyThingy ye olde way

Like I wrote before, it's possible to debug the C# code of a running IL2CPP C++ app on a HoloLens. Debugging using breakpoints is a bit tricky when you are dealing with rapidly firing events - stopping in the debugger might actually have some influence on the order in which events play out. So I resorted to the good old "Console.WriteLine style" of debugging, and added a floating text in the app that shows what's going on.

The ClickableThingy behaviour I made in the previous post then looks like this:

using Microsoft.MixedReality.Toolkit.Input;
using System;
using TMPro;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    private TextMeshPro _debugText;

    public void OnInputUp(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.white;
        AddDebugText("up", eventData);
    }

    public void OnInputDown(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.red;
        AddDebugText("down", eventData);
    }

    private void AddDebugText(string eventPrefix, InputEventData eventData)
    {
        if (_debugText == null)
        {
            // assumption: the floating debug text is the only TextMeshPro in the scene
            _debugText = FindObjectOfType<TextMeshPro>();
        }
        var description = eventData.MixedRealityInputAction.Description;
        _debugText.text += 
            $"{eventPrefix} {gameObject.name} : {description}{Environment.NewLine}";
    }
}

Now in the HoloLens 1, things are exactly like you expect. Air tapping the sphere activates the Up and Down events exactly once for every tap (and the Cube gets every tap, even when you don't gaze at it - see my previous post for an explanation).

When you run the same code in the editor, though, you get a different result:

Tap versus Grip - and CustomPropertyDrawers

The interesting thing is, when you 'air tap' in the editor (using the space bar and the left mouse button), the thumb and index finger of the simulated hand come together. This, apparently, is now recognized as a tap followed by a grip.

So we need to filter the events coming in through OnInputUp and OnInputDown to respond only to the actual events we want. This is where things get a little bit unusual - there is no enumeration of sorts that you can use to compare your actual event against. The available events are all in the configuration, so they are dynamically created.

The way to do some actual filtering is to add a property of type MixedRealityInputAction to your behaviour (I used _desiredInputAction). The MRTK2 then automatically creates a drop down with possible events to select from:
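The property itself is nothing more than a field in your behaviour (initializing it to None is my choice):

```csharp
[SerializeField]
private MixedRealityInputAction _desiredInputAction = MixedRealityInputAction.None;
```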

How does this magic work? Well, the MRTK2 contains a CustomPropertyDrawer called InputActionPropertyDrawer that automatically creates this drop down whenever you add a property of type MixedRealityInputAction to your behaviour. The values in this list are pulled from the configuration. This fits with the idea of the MRTK2 that everything must be configurable ad infinitum. Which is cool but sometimes it makes things confusing.

Anyway, you select the event you want to test for in the UI, in this case "Select":

And then, in the event methods, you have to check if the event matches your desired event:

if (eventData.MixedRealityInputAction != _desiredInputAction)
{
    return;
}

And then everything works as you expect: only the select event results in an action by the app.

How about HoloLens 2?

I could only test this in the emulator. The odd thing is, even without the check on the input action, only the select action was fired, even when I pinched the hand using the control panel:

So I have no idea if this is actually necessary on a real live HoloLens 2, but my friends and fellow MVPs Stephen Hodgson and Simon 'Darkside' Jackson have both mentioned this kind of event type check as being necessary in a few online conversations (although I did not understand why at the time). So I suppose it is :)


Common wisdom has it that the best thing about teaching is that you learn a lot yourself. This post is excellent proof of that wisdom. If you think this here old MVP is the end-all and know-all of this kind of stuff, think again. I knew of custom editors, but I literally just learned the concept of CustomPropertyDrawer while writing this post. I had no idea it existed, but I found it because I wanted to know how the heck the editor got all the possible MixedRealityInputActions from the configuration and showed them in such a neat list. Took me quite some searching, actually - which is logical, if you don't know exactly what you are looking for ;).

I hope this benefits you as well. Demo project here (branch TapCloseLook).

22 May 2019

Migrating to MRTK2 - IInputClickHandler and SetGlobalListener are gone. How do we tap now?


Making something 'clickable' (or actually more 'air tappable') was pretty easy in the Mixed Reality Toolkit 1. You just added the IInputClickHandler interface like this:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class ClickableThingy : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Do something
    }
}

You dragged this behaviour on top of any game object you wanted to act on being air tapped, and OnInputClicked was activated as soon as you air tapped. But IInputClickHandler no longer exists in MRTK2. How does that work now?

Tap – just another interface

To support the air tap in MRTK2, it's simply a matter of switching out one interface for another:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingy : MonoBehaviour, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        //Do something else
    }

    public void OnInputDown(InputEventData eventData)
    {
        //Do something
    }
}

I don't have a HoloLens 2, but if you put whatever was in OnInputClicked in OnInputDown, it's being executed on a HoloLens 1 when you do an air tap and the object is selected by the gaze cursor. So I guess that's a safe bet if you want to make something that runs on both HoloLens 1 and 2.

‘Global tap’ – add a base class

In the MRTK 1 days, when you wanted to do a ‘global tap’, you could simply add a SetGlobalListener behaviour to the game object that contained your IInputClickHandler implementing behaviour:

Adding this behaviour meant that any air tap would be routed to this IInputClickHandler implementing object - even without the gaze cursor touching the game object, or touching anything at all, for that matter. This could be very useful in situations where you, for instance, were placing objects on the spatial map and some gesture was needed to stop the movement. Or some general confirmation gesture in a situation where some kind of UI was not feasible because it would get in the way. But the SetGlobalListener behaviour is gone as well, so how do we get that behavior now?

Well, basically you make your ClickableThingy not only implement IMixedRealityInputHandler, but also be a child class of BaseInputHandler.

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        // Do something else
    }

    public void OnInputDown(InputEventData eventData)
    {
        // Do something
    }
}

This has a property IsFocusRequired that you can set to false in the editor:

And then your ClickableThingy will get every tap. Smart people will notice it makes sense to always make a child class of BaseInputHandler, as the IsFocusRequired property defaults to true – so by default ClickableThingyGlobal acts exactly the same as ClickableThingy, but you can configure its behavior in the editor, which makes your behaviour applicable to more situations. Whatever you can make configurable saves code. So I'd always go for a BaseInputHandler for anything that handles a tap.

Proof of the pudding

This is exactly what the demo project shows: a cube that responds to a tap regardless of whether there is a gaze or hand cursor on it, and a sphere that only responds to a tap when there is a hand or gaze cursor on it. Both use the ClickableThingyGlobal: the cube has the IsFocusRequired check box unselected, on the sphere it is selected. To this end I have adapted the ClickableThingyGlobal to actually do something usable:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.white;
    }

    public void OnInputDown(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.red;
    }
}

or at least something visible, which is to change the color of the elements from white to red on a tap (and back again).

On a HoloLens 1 it looks like this.

The cube will always flash red, the sphere only when there is some cursor pointing to it. In the HoloLens 2 emulator it looks like this:

The fun thing now is that you can act on both InputUp and InputDown, which I use to revert the color setting. To mimic the behavior of the old OnInputClicked, adding code in OnInputDown and leaving OnInputUp empty is sufficient, I feel.


Yet another part of moved cheese, although not dramatically so. Demo code is very limited, but can still be found here. I hope me documenting finding my way around Mixed Reality Toolkit 2 helps you. If you have questions about specific pieces of your HoloLens cheese having been moved and you can't find them, feel free to ask me. In any case I intend to write lots more of these posts.

15 May 2019

Migrating to MRTK2 - MS HRTF Spatializer missing (and how to get it back)


One of the many awesome (although sadly underutilized) capabilities of HoloLens is Spatial Audio. With just a few small speakers and some very nifty algorithms it allows you to attach audio to moving Holograms so that it sounds as if it is coming from the Hologram. Microsoft have applied this with such careful precision that you can actually hear Holograms moving above and behind you, which greatly enhances the immersive experience in a Mixed Reality environment. It also has some very practical uses - for instance, alerting the user that something interesting is happening outside of their field of vision - with the audio also providing a clue as to where the user is supposed to look.
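As a quick illustration (a sketch of mine, not from the original post): an AudioSource can be configured for spatialization from script using standard Unity properties. The spatializer plugin itself is selected in the project's audio settings, not here.

```csharp
using UnityEngine;

// Minimal sketch: configure an AudioSource for spatial audio.
// Assumes the MS HRTF Spatializer is selected under Edit/Project Settings/Audio.
public class SpatialSoundSetup : MonoBehaviour
{
    void Start()
    {
        var audioSource = GetComponent<AudioSource>();
        audioSource.spatialize = true;    // route through the selected spatializer plugin
        audioSource.spatialBlend = 1.0f;  // fully 3D, position-based audio
    }
}
```

Attach this to the Hologram that carries the AudioSource and the sound should appear to come from the object itself.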

Upgrade to MRTK2 ... where is my Spatializer?

In the process of upgrading AMS HoloATC to the Mixed Reality Toolkit 2 I noticed something odd. I tried - in the Unity editor - to click an airplane, which should then start to emit a pinging sound. Instead, I saw this error pop up in the editor:

"Audio source failed to initialize audio spatializer. An audio spatializer is specified in the audio project settings, but the associated plugin was not found or initialized properly. Please make sure that the selected spatializer is compatible with the target."

Then I looked into the project's audio settings (Edit/Project Settings/Audio) and saw that the Spatializer Plugin field was set to "None" - and that the MS HRTF Spatializer (that I normally expect to be in the drop down) was not even available!

Now what?

The smoking - or missing - gun

The solution is rather simple. If you look in the Mixed Reality Toolkit 2 sample project, you will notice the MS HRTF Spatializer is both available and selected. So what is missing?

Look at the Packages node in your Assets. It's all the way at the bottom. You will probably see this:

But what you are supposed to see is this:

See what's missing? Apparently the spatializer has been moved into a Unity Package. When you install the Mixed Reality Toolkit 2 and click "Mixed Reality Toolkit/Add to Scene and configure" it is supposed to add this package automatically (at least I think it is) - but for some reason, this does not always happen.

Use the Force Luke - that is, the Unity Package Manager

Fortunately, it's easy to fix. In the Unity Editor, click Window/Package Manager. This will open the Package Manager window. Initially it will only show a few entries, but then, near the bottom, "Windows Mixed Reality" will appear. Hit the "Install" button top right. When it's done, the Windows Mixed Reality entry will appear in the Packages node.

And now, if you go to Edit/Project Settings/Audio, you will see that the MS HRTF Spatializer has appeared again. If this is a migrated project and you have not messed with the audio settings, it will probably be selected automatically again.
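If you want to verify from code which spatializer is actually active (again a sketch of mine, not from the original post), Unity exposes this through the AudioSettings class:

```csharp
using UnityEngine;

// Logs the currently selected spatializer plugin. Once the Windows Mixed Reality
// package is installed and the plugin selected in the audio project settings,
// this should report the MS HRTF Spatializer; an empty string means none is set.
public class SpatializerCheck : MonoBehaviour
{
    void Start()
    {
        Debug.Log($"Active spatializer: {AudioSettings.GetSpatializerPluginName()}");
    }
}
```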


No code this time, as there is little to code. I do need to add a little word of warning here - apparently these packages are defined in YourProject/Packages/manifest.json. Make sure this file gets added to your repo and checked in as well.
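For reference, a hedged sketch of what the relevant entry in Packages/manifest.json looks like - the package id and version number below are what I would expect for a Unity 2018.x project, so verify them against your own file:

```json
{
  "dependencies": {
    "com.unity.xr.windowsmr.metro": "1.0.10"
  }
}
```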

10 May 2019

Migrating to MRTK2–NewtonSoft.JSON (aka JSON.Net) is gone


In ye olde days, if you set up a project using the Mixed Reality Toolkit 1, NewtonSoft.JSON (aka JSON.Net) was automatically included. This was because part of the MRTK1 had a dependency on it - something related to the glTF stuff used it. This is (apparently) no longer the case. So if you had a piece of code that previously used something like this

using System;
using Newtonsoft.Json;
using TMPro;
using UnityEngine;

public class DeserializeJson : MonoBehaviour
{
    private TextMeshPro _text;

    void Start()
    {
        _text = GetComponent<TextMeshPro>();
        // DemoJson is a simple class with string properties Property1 and Property2
        var jsonstring = @"{
            ""Property1"" : ""Hello"",
            ""Property2"" : ""Folks""
        }";
        var deserializedObject = JsonConvert.DeserializeObject<DemoJson>(jsonstring);

        _text.text = string.Concat(deserializedObject.Property1,
            Environment.NewLine, deserializedObject.Property2);
    }
}

It will no longer compile when you use the MRTK2. You will need to get it elsewhere. There are two ways to solve this: the right way and the wrong way.

The wrong way

The wrong way, which I was actually advised to take, is to get a copy of an MRTK1 and drag the JSON.Net module from there into your project. It's under HoloToolkit\Utilities\Scripts\GLTF\Plugins\JsonNet. And it will appear to work, too. In the editor. And as long as you use the .NET scripting backend. Unity has announced, though, that the .NET backend will disappear – you will need to use IL2CPP soon. And when you do so, you will notice your app will mysteriously fail to deserialize JSON. If you run the C++ app in debug mode from Visual Studio you will see something cryptic like this:

The reason why is not easy to find. If you dig deeper, you will see it complaining about trying to use Reflection.Emit, which apparently is not allowed in the C++ world. Or not in the way it's done. Whatever.

The right way

Fortunately there is another way - and a surprisingly easy one to boot. There is a free JSON.Net package in the Unity store, and it seems to do the trick for me – I can compile the C++ app, deploy it to the HoloLens 2 emulator, and it actually parses JSON.


But will this work on a HoloLens 2?

The fun thing is of course that the HoloLens 2 has an ARM processor, so the only way to test if this really works is to run it on a HoloLens 2. Unlike a few very lucky individuals, I don't have access to the device. But I do have something else - an ARM based PC that I was asked to evaluate in 2018. I compiled for ARM, made a deployment package, powered up the ARM PC and wouldn't you know it...

So. I think we can be reasonably sure this will work on a HoloLens 2 as well.

Update - this has been verified.


I don't know whether all the arcane and intricate things JSON.Net supports are supported by this package, but it seems to do the trick as far as my simple needs are concerned. I guess you should switch to this package to prepare for HoloLens 2.

Code as usual on GitHub:

And yes, the master is still empty but I intend to use that for demonstrating a different issue.

06 May 2019

Migrating to MRTK2 - Mixed Reality Toolkit Standard Shader 'breaks'


At this moment I am trying to learn as much as possible about the new Mixed Reality Toolkit 2, to be ready for HoloLens 2 when it comes. I opted for a rather brutal cold turkey learning approach: I took my existing AMS HoloATC app, ripped out ye goode olde HoloToolkit, and replaced it with the new MRTK2 - fresh from GitHub. Not surprisingly, this breaks a lot. I am not sure if this is the intended way of migrating - it's like renovating a house by starting with bulldozing a couple of walls away. But this is the way I chose to do it, as it forces me to adhere to the new style and learn how stuff works, without compromises. It also makes very clear to me where things are going to break when I do this to customer apps.

So I am starting a series of short blog posts that basically documents the bumps in the road as I encounter them, as well as how I swerved around or solved them. I hope other people will benefit from this, especially as I will be showing a lot of moved cheese. And speaking of...

Help! My Standard Shader is broken!

So you had this nice Mixed Reality app that showed these awesome holograms:

and then you decided to upgrade to the Mixed Reality Toolkit 2

and you did not expect to see this. This is typically the color Unity shows when a material is missing or something in the shader is thoroughly broken. And indeed, if you look at the materials:

something indeed is broken.

How to fix this

There is good news, bad news, and slightly better news.

  • The good news - it's easy to fix.
  • The bad news is - you have to do this for every material in your apps that used the 'old' HTK Standard shader
  • The slightly better news - you can do this for multiple materials in one go. Provided they are all in one folder, or you do something nifty with search

So, in your assets select your materials:

Then in the inspector select the Mixed Reality Toolkit Standard Shader (again) :

And boom. Everything looks like it should.

Or nearly so, because although it carries the same name, it's actually a different shader. Stuff might actually look a wee bit different. In my sample app, especially the blue seems to look a bit different.

So what happened?

If you look at what Git marks as changed, only the three materials themselves are marked changed:

and if you look at a diff, you will see the referenced file GUID for the shader has changed. So indeed, although it carries the same name (Mixed Reality Toolkit Standard), as far as Unity is concerned it's a different shader.

(you might want to click on the picture to be able to actually read this).

As you scroll down through the diff, you will see lots of additions too, so this is not only a different shader id, it's actually a different or new shader as well. Why they deliberately chose to break the shader ID beats me. Maybe to make upgrading from one shader to another possible, or to have both the old and the new one work simultaneously in one project, making upgrades easier. But since they have the same name, this might also cause confusion. Whatever - this is what causes the shader to 'break' at upgrade, and now you know how to fix it, too.


I hope to have eliminated one source of confusion today, and I wish you a lot of fun watching the //BUILD 2019 keynote in a few hours.

You can find a demo project here.

  • Branch "master" shows the original project with HoloToolkit
  • Branch "broken" shows the project upgraded to MRTK2 - with broken shaders
  • Branch "fixed" shows the fixed project

26 April 2019

HoloLens 2 Emulator - showing and manipulating hands in an MRTK2 app

Last week I wrote a first look at the new HoloLens 2 emulator and showed you something of the hand movement in the HoloLens shell using an Xbox One controller. This was pretty hard to do, as the hands were only intermittently displayed. It turns out that if you deploy an app made with the Mixed Reality Toolkit 2, you actually get a lot better graphics assisting you in manipulating. It takes some getting used to, but I was able to play the piano and press some buttons, just like Julia Schwarz was able to do in her now-famous MWC demo.

This then, looks like this:

As you can see, the mere act of moving the hand past or through the piano keys or the buttons above actually triggers the buttons (if you turn the sound on you can hear the piano and some audio feedback on the buttons too).

This is simply the HandInteractionExamples scene from the MRTK2 dev branch, generated into a C++ app and deployed into the emulator.

To show you how the hands can be moved, I made another little captioned movie:

Using the Xbox controller is a lot easier this way, although I am not quite sure how to do a two-hand manipulation yet, as the sticks can only control one hand at a time (the left or right bumpers determine which hand you control).

17 April 2019

First look at the HoloLens 2 emulator


Today, without much fanfare, the HoloLens 2 emulator became available. I first saw Mike Taulty tweeting about it and later more people chiming in. I immediately downloaded it and started to get it to work, to see what it does and how it can be used. The documentation is still a bit limited, so I just happily blundered around the emulator, trying some things, and showing you what and how.

Getting it is half the fun

Getting it is easy - from this official download page you can get all the emulator versions, including all versions of the HoloLens 1 emulator - but of course we are only interested in HoloLens 2 now:

Just like the previous instances, the emulator requires Hyper-V. This requires you to have hardware virtualization enabled in your BIOS. Consult the manual of your PC or motherboard on how to do that. If you don't know what I am talking about, for heaven's sake stop here and don't attempt this yourself. I myself found it pretty scary already. If you make mistakes in your BIOS settings, your whole PC may become unusable. You have been warned.

Starting the Emulator from Visual Studio

The easiest way to start is from Visual Studio. If you have installed the whole package, you will get this deployment target. You can choose whether you want debug or release - the latter is faster.

But mind you: use x86 as a deployment target, otherwise the emulator is not available. The HoloLens 2 may have an ARM processor, but your PC does not. For an app I just cloned the Mixed Reality Toolkit 2 dev branch, opened up the project with Unity 2018.3.x and built the app. Then I opened the resulting app with Visual Studio. See my previous post on how to do that using IL2CPP (that is, generating a C++ app).

If the emulator starts for the first time in your session, you might see this

Just click and the emulator starts up. Be aware this is a heavy beast. It might take some time to start, and it might also drag down the performance of your PC somewhat. Accept the elevation prompt, and then most likely Visual Studio will throw an error, as it tries to deploy as soon as the emulator has started - but at that point it's far from ready to accept deployment of apps; the HoloLens OS is still booting. After a while you will hear the (for HoloLens users familiar) "whooooooomp" sound indicating the OS shell is starting.

Starting the emulator directly

Assuming you have installed everything in the default folder, you should be able to start the emulator with the following command:

"%ProgramFiles(x86)%\Windows Kits\10\Microsoft XDE\10.0.18362.0\XDE.exe" /name "HoloLens 2 Emulator 10.0.18362.1005" /displayName "HoloLens 2 Emulator 10.0.18362.1005" /vhd "%ProgramFiles(x86)%\Windows Kits\10\Emulation\HoloLens\10.0.18362.1005\flash.vhdx" /video "1968x1280" /memsize 4096 /language 409 /creatediffdisk "%USERPROFILE%\AppData\Local\Microsoft\XDE\10.0.18362.1005\dd.1968x1280.4096.vhdx" /fastShutdown /sku HDE

This has been ascertained using the information in this blog post, which basically does the same trick for the HoloLens 1 emulators.

Either way, it will look like this:

If you have followed the Mixed Reality development in the 19H1 Insider's preview, you will clearly recognize that the Mixed Reality crew are aligning HoloLens 2 with the work that has been done for immersive WMR headsets.

Controlling the Emulator

The download page gives some basic information about how you can use keystrokes, mouse, or an Xbox Controller to move your viewpoint around and do stuff like air tap and bloom. This page gives some more information, but it indicates it is still for the HoloLens 1 emulator.

However, it looks like most of the keys are in there already. The most important one (initially) is the Escape key, which - just like in the HoloLens 1 emulator - will reset your viewpoint and your hand positions. And believe me, you are going to need them.

Basic control

This is more or less unchanged. You move around using the left stick, and you turn around using the right stick. Rotating sideways and moving up/down is done using the D-pad. Selecting still happens using the triggers.

Basic hand control

If you use an Xbox Controller, you will need to do the following:

  • To move the right hand, press the right bumper, and slightly move the left stick. If you move it forward, you will see the right hand moving forward
  • To move the left hand, press the left bumper, and still use the left stick.

Hands are visualized as shown on the right. The little circle visualizes the location of the index finger; the line is a projection from the hand forward, to a location you might activate from afar - like ye olde airtap, although I am not quite sure of the actual gesture in real life.

It's a bit hard to capture in a picture what's happening, so I made a little video of it:

With the right stick, you control the hand's rotation.

Additional hand control

If you click on the red marked icon on the floating menu to the right of the emulator, you will get the perception control window. If you press the right bumper, the right hand panel expands, where you can select a gesture. Having a touch screen comes in mightily handy here, I can tell you.

Some final thoughts (for now)

You can also see the button "Eyes". If you click that, I presume you can simulate eye tracking. But if I do that, the only thing I can see is that I can't move my position anymore. So I am probably missing something here.

I have done more things, like actually deploying an app (the demo shown by Julia Schwarz, the technical lead for the new input model, who so amazingly demoed the HoloLens 2 at MWC), but that's for another time. This really whets my appetite for the real device, but in the meantime, we have this, and need to be patient ;) No code this time, sorry, but there is nothing to code. Just download the emulator and share your thoughts.

11 March 2019

Debugging C# code with Unity IL2CPP projects running on HoloLens or immersive headsets


My relationship with Unity is a complex one. I adore the development environment. It allows me to weave magic and create awesome HoloLens and Windows Mixed Reality apps with an ease that defies imagination for someone who has never tried it. I have also cursed them to the seventh ring of hell for the way they move (too) fast and break things. Some time ago Unity announced they would do away with the .NET backend. This does not mean you can't develop in C# anymore - you still do - but debugging becomes quite a bit more complicated. You can find out how to do it in various articles, forum posts, etc. They all have part of the story, but not everything. I hope this fills the gap and shows the whole road to IL2CPP debugging in one easy to find article.


Typically, when you build an app for Mixed Reality, you have a solution with C# code that you use while working inside the Unity Editor. You use this for typing code and trying things out. I tend to call this "the Unity solution" or "the editor solution". It is not a runnable or deployable app, but you can attach the Visual Studio debugger to the Unity editor by pressing Start in Visual Studio, and then the play button in Unity. Breakpoints will be hit, you can set watches, all of it. Very neat.

When you are done or want to test on a device, you build the app. This generates another solution (I call that the deployment solution) that actually is a UWP app. You can deploy that to a HoloLens or to your PC with a Mixed Reality headset attached. This is essentially the same code, but in a different solution. The nice part is that if you compile it for debug, you can also put in breakpoints and analyze code on a running device. Bonus: if you change just some code you don't have to rebuild the deployment solution over and over again to do another test on the device.

Enter IL2CPP (and bit of a rant, feel free to skip)

Unity, in their wisdom, have decided the deployment solutions in C# are too slow, so they have deprecated the .NET 'backend', and instead of generating a C# UWP solution, they generate a C++ UWP solution. When you build, your C# code will be rewritten in C++, and you will need to compile that C++ and deploy the resulting app to your device. Compilation takes a whole lot longer, if you change as much as a comma you need to build the whole deployment solution again, and the actual running code (C++) no longer resembles any code you have written yourself. And when they released this, you could also forget about debugging your C# code in a running app. Unity did not only move the cheese, they actually blew up part of the whole cheese storehouse.

With Unity 2018.2.x they've basically sent over some carpenters to cover up the hole with plywood and plaster. And now you can sort-of debug your C# code again. But it's a complicated and rather cumbersome process.

Brave new world - requirements

I installed all of Desktop and UWP C++ development bits, probably a bit over the top.

At one point I got complaints about the "VC++ 2015 (140) toolset" missing while compiling, so I added that too. This is apparently something the Unity toolchain needs. Maybe this can be done more efficiently, needing less of this stuff, but this works on my machine. I really don't know anything about C++ development. I tried somewhere in the mid 90s and failed miserably.

Also crucial: install the Visual Studio tools for Unity, but chances are you already have, because we needed this with the .NET backend too:

I did uncheck the Unity Editor option, as I use Unity 2018.3.6f1 instead of the one Visual Studio tries to install. I tend to manage my Unity installs via the Unity Hub.

Build settings

In Unity, I use these settings for building the debuggable C++ app:

I am not entirely sure if "Copy References" is really necessary, but I have enabled it anyway. The warning about missing components is another nice Unity touch - apparently something is missing, but they don't tell you what. My app is building, so I suppose it's not that important for my setup.

App capability settings

Now this one is crucial. To enable debugging, Unity builds a specialized player with some kind of little server in it that enables debuggers to attach to it. This means it needs to have network access. The resulting app is still a UWP app, so its network capabilities need to be set. You can do that in either the resulting C++ solution's manifest or in the Unity editor, using the "Player Settings" button. Under "Publishing Settings" you will find this box where you can set capabilities:

I just added all network related stuff for good measure. The advantage of doing it here is that it will be added back even if you need to rebuild the deployment solution from scratch. The drawback is that you might forget to remove capabilities you don't need and you will end up with an app asking for a lot of capabilities it doesn't use. For you to decide what works best.
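For those who prefer editing the generated solution's Package.appxmanifest directly instead, the network related capabilities end up there as entries like the following - a sketch only, as the exact set you need may differ:

```xml
<Capabilities>
  <Capability Name="internetClient" />
  <Capability Name="internetClientServer" />
  <Capability Name="privateNetworkClientServer" />
</Capabilities>
```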

Selecting the IL2CPP backend

In case Unity or the MRTK2 does not do this for you automatically, you can find this setting by pressing the Player Settings button as well. Under "Other Settings" you will find the "Scripting Backend". Set this to IL2CPP.
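If you prefer to do this from an editor script (my own sketch, not from the post), Unity's PlayerSettings API can set the same option:

```csharp
using UnityEditor;

// Editor-only sketch: select the IL2CPP scripting backend for UWP (WSA) builds,
// equivalent to setting it by hand under Player Settings/Other Settings.
public static class BackendSelector
{
    [MenuItem("Tools/Set IL2CPP Backend for UWP")]
    public static void SetIl2Cpp()
    {
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.WSA,
            ScriptingImplementation.IL2CPP);
    }
}
```

This file must live in an Editor folder, as it references UnityEditor.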

Building and deploying the UWP C++ app.

A C++ UWP app generated by Unity looks like this:

Now maybe this is obvious for C++ developers, but make sure the app labeled "Universal Windows" is the startup project. I was initially thrown off kilter by the "Windows Store 10.0" link and assumed that was the startup project.

It is important to build and deploy the app for Debug; that has not changed since the .NET backend days. Choose target and processor architecture as required by your device or PC.

Make sure the app actually gets deployed to wherever you want to debug it. Use Deploy, not Build (from the Build menu in Visual Studio).

And now for the actual debugging

First, start the app on the machine where it needs to be debugged - be it a PC or a HoloLens, that does not matter.

Go back to your Unity C# ('editor') solution. Set breakpoints as desired. And now comes the part that really confused me for quite some time. I am used to debug targets showing up here:

But they never do. So don't go there - that is only useful when you are debugging inside the Unity Editor. Instead, what you need to do is go to the Debug menu of the main Visual Studio window and select "Attach Unity Debugger".

I've started the app on both my HoloLens and as a Mixed Reality app on my PC, and I can now choose from no fewer than three debug targets: the Unity editor on my PC, the app running on the HoloLens, and the app running on the PC.

"Fudge" is the name of the gaming quality rig kindly built by a colleague a bit over a year ago; "HoloJoost" is my HoloLens. I selected the "Fudge" player. If you select a player, you will get a UAC prompt for the "AppContainer Network Isolation Diagnostics Tool". Accept that, and then this pops open:

Leave this alone. Don't close it, don't press CTRL-C.

Now just go over to your Mixed Reality app, be it on your HoloLens or your Immersive Headset, and trigger an action that will touch code with a breakpoint in it. In my case, that happens when I tap the asteroid

And then finally, Hopper be praised:

The debugger is back in da house.


This is not something I get overly happy about, but at least we are at about three-quarters of where we were before. We can again debug C# code in a running app, but with a more convoluted build process, less development flexibility, and the need to install the whole C++ toolchain. But as usual in IT, the only way is forward. The Mixed Reality Toolkit 2, which is used to build this asteroid project, requires Unity 2018.2.x. HoloLens 2 apps will be built with the MRTK2, and thus we will have to deal with it, move forward, and say goodbye to the .NET backend. Unless we don't want to build for HoloLens 2 - which is no option at all for me ;)

No test project this time, as this is not about code but mere configuration. I will start blogging MRTK2 tidbits soon, though.


There is a host of people who gave me pieces of the puzzle that made it possible for me to piece the whole thing together. In order of appearance: