19 June 2019

Migrating to MRTK2 - missing Singleton and 3DTextPrefab

Intro

If you are migrating from the HoloToolkit to Mixed Reality Toolkit 2 'cold turkey', as I am doing for my AMS HoloATC app, a lot of things break, as I already said in the first post of this series. For things that you can tap, you can simply change the implementing interface from IInputClickHandler or IManipulationHandler to a couple of other interfaces and change the signature a bit - that's not complex, only tedious, depending on how much you have used it.

What I found really hard was the removal of the Singleton class and the 3DTextPrefab. I used both quite extensively. The first one I needed for things like data access classes, as the concept of services that was introduced in the Mixed Reality Toolkit 2 was not yet available, and the other... well, basically all my texts were 3DTextPrefabs, so any kind of user feedback in text format was gone. Because so much breaks at the same time, it's very hard to rebuild your app step by step to a working condition. Basically you have to change everything before anything starts to work again. Since I was still learning by doing, there was no way to test if I was doing things more or less right. I got stuck, and took a radical approach.

Introducing HoloToolkitCompatiblityPack

I have created a little Unity Package that contains the things that made it hard for me to do a step-by-step migration to the MRTK2 and christened it the HoloToolkitCompatiblityPack. It contains a minimal amount of scripts and meta files to get Singleton and 3DTextPrefab working inside an app built with the MRTK2. As I will be migrating more apps, I will probably update the package with other classes that I need. You can find the package file here and the project here. If you take your existing HoloToolkit based app, yank out the HoloToolkit, replace it by the MRTK2 and then import the HoloToolkitCompatiblityPack package, you at least have a few less things to fix to get your app back to a minimal state of function.


Caveat emptor

Yes, of course you can use the HoloToolkitCompatiblityPack in your production app, and ship a kind of Frankenbuild using both MRTK2 and this. Don't let yourself be tempted to do that. See this package as a kind of scaffolding, or a temporary beam to hold up the roof while you are replacing a bearing wall. For 3DTextPrefab I tend to turn a blind eye, but please don't use Singleton again. Convert those classes into services one by one. Then remove the Singleton from the HoloToolkitCompatiblityPack to make sure everything works without it. This is for migration purposes only.

Take the high road, not the low road of technical debt.

Conclusion

Making this package helped me forward with the migration quite a lot. I hope it helps others too. I'd love to hear some feedback on this.

29 May 2019

Migrating to MRTK2 - looking a bit closer at tapping, and trapping 'duplicate' events

Intro

In my previous post I wrote about how game objects can be made clickable (or 'tappable') using the Mixed Reality Toolkit 2, and how things changed from MRTK1. And in fact, when you deploy the app to a HoloLens 1, my demo actually works as intended. But then I noticed something odd in the editor, and made a variant of the app that went with the previous blog post to see how things work - or might work - in HoloLens 2.

Debugging ClickyThingy ye olde way

Like I wrote before, it's possible to debug the C# code of a running IL2CPP C++ app running on a HoloLens. Debugging using breakpoints is a bit tricky when you are dealing with rapidly firing events - stopping in the debugger might actually have some influence on the order in which events play out. So I resorted to the good old "Console.WriteLine-style" of debugging, and added a floating text in the app that shows what's going on.

The ClickableThingy behaviour I made in the previous post then looks like this:

using Microsoft.MixedReality.Toolkit.Input;
using System;
using TMPro;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    [SerializeField]
    private TextMeshPro _debugText;

    public void OnInputUp(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.white;
        AddDebugText("up", eventData);
    }

    public void OnInputDown(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.red;
        AddDebugText("down", eventData);
    }

    private void AddDebugText( string eventPrefix, InputEventData eventData)
    {
        if( _debugText == null)
        {
            return;
        }
        var description = eventData.MixedRealityInputAction.Description;
        _debugText.text += 
            $"{eventPrefix} {gameObject.name} : {description}{Environment.NewLine}";
    }
}



Now in the HoloLens 1, things are exactly like you expect. Air tapping the sphere activates the Up and Down events exactly once for every tap - and the Cube gets every tap as well, even when you don't gaze at it (see my previous post for an explanation).

When you run the same code in the editor, though, you get a different result:

Tap versus Grip - and CustomPropertyDrawers

The interesting thing is, when you 'air tap' in the editor (using the space bar and the left mouse button), the thumb and index finger of the simulated hand come together. This, apparently, is recognized as a tap followed by a grip.

So we need to filter the events coming in through OnInputUp and OnInputDown to respond only to the actual events we want. This is where things get a little bit unusual - there is no enumeration of sorts that you can use to compare your actual event against. The available events are all in the configuration, so they are dynamically created.

The way to do some actual filtering is to add a property of type MixedRealityInputAction to your behaviour (I used _desiredInputAction). Then the MRTK2 automatically creates a drop down with possible events to select from:

How does this magic work? Well, the MRTK2 contains a CustomPropertyDrawer called InputActionPropertyDrawer that automatically creates this drop down whenever you add a property of type MixedRealityInputAction to your behaviour. The values in this list are pulled from the configuration. This fits with the idea of the MRTK2 that everything must be configurable ad infinitum. Which is cool but sometimes it makes things confusing.
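
For reference, a minimal sketch of such a property as it would sit in the debug behaviour shown above:

// The input action (for instance "Select") this behaviour should respond to.
// The InputActionPropertyDrawer renders this as the drop down shown above.
[SerializeField]
private MixedRealityInputAction _desiredInputAction;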

Anyway, you select the event you want to test for in the UI, in this case "Select":

And then, in the event methods, you have to check if the event matches your desired event:

if (eventData.MixedRealityInputAction != _desiredInputAction)
{
    return;
}

And then everything works as you expect: only the select event results in an action by the app.
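
Put together, a minimal sketch of the down handler from the debug behaviour above with the filter in place (assuming the _desiredInputAction field from the previous snippet):

public void OnInputDown(InputEventData eventData)
{
    // Only respond to the action selected in the inspector (e.g. "Select"),
    // ignoring anything else - like the grip the editor hand simulation generates
    if (eventData.MixedRealityInputAction != _desiredInputAction)
    {
        return;
    }
    GetComponent<MeshRenderer>().material.color = Color.red;
    AddDebugText("down", eventData);
}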

How about HoloLens 2?

I could only test this in the emulator. The odd thing is, even without the check on the input action, only the select action was fired, even when I pinched the hand using the control panel:

So I have no idea if this is actually necessary on a real live HoloLens 2, but my friends and fellow MVPs Stephen Hodgson and Simon 'Darkside' Jackson have both mentioned this kind of event type check as being necessary in a few online conversations (although at the time I did not understand why). So I suppose it is :)

Conclusion

Common wisdom has it that the best thing about teaching is that you learn a lot yourself. This post is excellent proof of that wisdom. If you think this here old MVP is the be-all and know-all of this kind of stuff, think again. I knew of custom editors, but I literally just learned the concept of CustomPropertyDrawer while I was writing this post. I had no idea it existed, but I found it because I wanted to know how the heck the editor got all the possible MixedRealityInputActions from the configuration and showed them in such a neat list. Took me quite some searching, actually - which is logical, if you don't know what exactly you are looking for ;).

I hope this benefits you as well. Demo project here (branch TapCloseLook).

22 May 2019

Migrating to MRTK2 - IInputClickHandler and SetGlobalListener are gone. How do we tap now?

Intro

Making something 'clickable' (or actually more 'air tappable') was pretty easy in the Mixed Reality Toolkit 1. You just added the IInputClickHandler interface like this:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class ClickableThingy: MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Do something
    }
}

You dragged this behaviour on top of any game object you wanted to act on being air tapped, and OnInputClicked was activated as soon as you air tapped. But IInputClickHandler no longer exists in MRTK2. How does that work now?

Tap – just another interface

To support the air tap in MRTK2, it's simply a matter of switching out one interface for another:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingy : MonoBehaviour, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        //Do something else
    }

    public void OnInputDown(InputEventData eventData)
    {
        //Do something
    }
}

I don't have a HoloLens 2, but if you put whatever was in OnInputClicked in OnInputDown, it's being executed on a HoloLens 1 when you do an air tap while the object is selected by the gaze cursor. So I guess that's a safe bet if you want to make something that runs on both HoloLens 1 and 2.

‘Global tap’ – add a base class

In the MRTK 1 days, when you wanted to do a ‘global tap’, you could simply add a SetGlobalListener behaviour to the game object that contained your IInputClickHandler implementing behaviour:

Adding this behaviour meant that any air tap would be routed to this IInputClickHandler implementing object - even without the gaze cursor touching the game object, or touching anything at all, for that matter. This could be very useful in situations where you, for instance, were placing objects on the spatial map and some gesture was needed to stop the movement. Or some general confirmation gesture in a situation where some kind of UI was not feasible because it would get in the way. But the SetGlobalListener behaviour is gone as well, so how do we get that behavior now?

Well, basically you make your ClickableThingy not only implement IMixedRealityInputHandler, but also be a child class of BaseInputHandler.

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        // Do something else
    }

    public void OnInputDown(InputEventData eventData)
    {
        // Do something
    }
}

This has a property isFocusRequired that you can set to false in the editor:

And then your ClickableThingy will get every tap. Smart people will notice it makes sense to always make a child class of BaseInputHandler, as the IsFocusRequired property defaults to true - so the default behavior of ClickableThingyGlobal is to act exactly the same as ClickableThingy, but you can configure its behavior in the editor, which makes your behavior applicable to more situations. Whatever you can make configurable saves code. So I'd always go for a BaseInputHandler for anything that handles a tap.

Proof of the pudding

This is exactly what the demo project shows: a cube that responds to a tap regardless of whether there is a gaze or hand cursor on it, and a sphere that only responds to a tap when there is a hand or gaze cursor on it. Both use the ClickableThingyGlobal: the cube has the IsFocusRequired check box unselected, on the sphere it is selected. To this end I have adapted the ClickableThingyGlobal to actually do something usable:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.white;
    }

    public void OnInputDown(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.red;
    }
}

or at least something visible, which is to change the color of the elements from white to red on a tap (and back again).

On a HoloLens 1 it looks like this:

The cube will always flash red, the sphere only when there is some cursor pointing to it. In the HoloLens 2 emulator it looks like this:

The fun thing now is that you can act on both InputUp and InputDown, which I use to revert the color setting. To mimic the behavior of the old OnInputClicked, adding code in OnInputDown and leaving OnInputUp empty is sufficient, I feel.
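
For what it's worth, a minimal sketch of that pattern - DoTapAction is a hypothetical stand-in for whatever used to live in your OnInputClicked:

public void OnInputDown(InputEventData eventData)
{
    // Whatever used to be in OnInputClicked goes here
    DoTapAction(); // hypothetical placeholder for your own tap logic
}

public void OnInputUp(InputEventData eventData)
{
    // Intentionally left empty - we only act on the 'down' half of the tap
}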

Conclusion

Yet another piece of moved cheese, although not dramatically so. The demo code is very limited, but can still be found here. I hope that documenting how I find my way around the Mixed Reality Toolkit 2 helps you. If you have questions about specific pieces of your HoloLens cheese having been moved and you can't find them, feel free to ask me. In any case I intend to write lots more of these posts.

15 May 2019

Migrating to MRTK2 - MS HRTF Spatializer missing (and how to get it back)

Intro

One of the many awesome (although sadly underutilized) capabilities of HoloLens is Spatial Audio. With just a few small speakers and some very nifty algorithms it allows you to connect audio to moving Holograms, so it sounds as if the audio is actually coming from the Hologram. Microsoft have applied this with such careful precision that you can actually hear Holograms moving above and behind you, which greatly enhances the immersive experience in a Mixed Reality environment. It also has some very practical uses - for instance, alerting the user that something interesting is happening outside of their field of vision - and the audio also provides a clue as to where the user is supposed to look.

Upgrade to MRKT2 ... where is my Spatializer?

In the process of upgrading AMS HoloATC to Mixed Reality Toolkit 2 I noticed something odd. I tried - in the Unity editor - to click an airplane, which should then start to emit a pinging sound. Instead, I saw this error pop up in the editor:

"Audio source failed to initialize audio spatializer. An audio spatializer is specified in the audio project settings, but the associated plugin was not found or initialized properly. Please make sure that the selected spatializer is compatible with the target."

Then I looked into the project's audio settings (Edit/Project Settings/Audio) and saw that the Spatializer Plugin field was set to "None" - and that the MS HRTF Spatializer (that I normally expect to be in the drop down) was not even available!

Now what?

The smoking - or missing - gun

The solution is rather simple. If you look in the Mixed Reality Toolkit 2 sample project, you will notice the MS HRTF Spatializer is both available and selected. So what is missing?

Look at the Packages node in your Assets. It's all the way at the bottom. You will probably see this:

But what you are supposed to see is this:

See what's missing? Apparently the spatializer has been moved into a Unity Package. When you install the Mixed Reality Toolkit 2 and click "Mixed Reality Toolkit/Add to Scene and configure" it is supposed to add this package automatically (at least I think it is) - but for some reason, this does not always happen.

Use the Force Luke - that is, the Unity Package Manager

Fortunately, it's easy to fix. In the Unity Editor, click Window/Package Manager. This will open the Package Manager window. Initially it will only show a few entries, but then, near the bottom, you will see "Windows Mixed Reality" appear. Hit the "Install" button top right. And when it's done, the Windows Mixed Reality entry will appear in the Packages node.

And now, if you go to Edit/Project Settings/Audio, you will see that the MS HRTF Spatializer has appeared again. If this is a migrated project and you have not messed with the audio settings, it will probably be selected automatically again.

Conclusion

No code this time, as there is little to code. I do need to add a little word of warning here - apparently these packages are defined in YourProject/Packages/manifest.json. Make sure this gets added to your repo and checked in as well.
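
For illustration, the entry the Package Manager adds to Packages/manifest.json looks roughly like this - the package identifier is an assumption on my part and the version placeholder obviously depends on what the Package Manager installs, so check your own manifest:

{
  "dependencies": {
    "com.unity.xr.windowsmr.metro": "<version installed by the Package Manager>"
  }
}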

10 May 2019

Migrating to MRTK2–NewtonSoft.JSON (aka JSON.Net) is gone

Intro

In ye olde days, if you set up a project using the Mixed Reality Toolkit 1, NewtonSoft.JSON (aka JSON.Net) was automatically included. This was because part of the MRTK1 had a dependency on it - something related to the gLTF stuff used it. This is (apparently) no longer the case. So if you had a piece of code that previously used something like this

using Newtonsoft.Json;
using System;
using TMPro;
using UnityEngine;

// Simple data class the JSON below is deserialized into
public class DemoJson
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }
}

public class DeserializeJson : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;

    void Start()
    {
        var jsonstring = @"
{
   ""Property1"" : ""Hello"",
   ""Property2"" : ""Folks""
}";
        var deserializedObject = JsonConvert.DeserializeObject<DemoJson>(jsonstring);

        _text.text = string.Concat(deserializedObject.Property1,
            Environment.NewLine, deserializedObject.Property2);
    }
}

It will no longer compile when you use the MRTK2. You will need to get it elsewhere. There are two ways to solve this: the right way and the wrong way.

The wrong way

The wrong way, which I was actually advised to take, is to get a copy of an MRTK1 and drag the JSON.Net module from there into your project. It's under HoloToolkit\Utilities\Scripts\GLTF\Plugins\JsonNet. And it will appear to work, too. In the editor. And as long as you use the .NET scripting backend. Unity has announced, though, that the .NET backend will disappear - you will need to use IL2CPP soon. And when you do so, you will notice your app will mysteriously fail to deserialize JSON. If you run the C++ app in debug mode from Visual Studio you will see something cryptic like this:

The reason why is not easy to find. If you dig deeper, you will see it complaining about trying to use Reflection.Emit, and this apparently is not allowed in the C++ world. Or not in the way it's done here. Whatever.

The right way

Fortunately there is another way - and a surprisingly simple one to boot. There is a free JSON.Net package in the Unity store, and it seems to do the trick for me - I can compile the C++ app, deploy it on the HoloLens 2 emulator and it actually parses JSON.

QED:

But will this work on a HoloLens 2?

The fun thing is of course that the HoloLens 2 has an ARM processor, so the only way to test if this really works is to run it on a HoloLens 2. Unlike a few very lucky individuals, I don't have access to the device. But I do have something else - an ARM based PC that I was asked to evaluate in 2018. I compiled for ARM, made a deployment package, powered up the ARM PC and wouldn't you know it...

So. I think we can be reasonably sure this will work on a HoloLens 2 as well.

Update - this has been verified.

Conclusion

I don't know whether all the arcane and intricate things JSON.Net supports are supported by this package, but it seems to do the trick as far as my simple needs are concerned. I guess you should switch to this package to prepare for HoloLens 2.

Code as usual on GitHub:

And yes, the master branch is still empty, but I intend to use that for demonstrating a different issue.

06 May 2019

Migrating to MRTK2 - Mixed Reality Toolkit Standard Shader 'breaks'

Intro

At this moment I am trying to learn as much as possible about the new Mixed Reality Toolkit 2, to be ready for HoloLens 2 when it comes. I opted for a rather brutal cold turkey learning approach: I took my existing AMS HoloATC app, ripped out ye goode olde HoloToolkit, and replaced it by the new MRTK2 - fresh from GitHub. Not surprisingly this breaks a lot. I am not sure if this is the intended way of migrating - it's like renovating the house by starting with bulldozing a couple of walls away. But this is the way I chose to do it, as it forces me to adhere to the new style and learn how stuff works, without compromises. It also makes very clear to me where things are going to break when I do this to customer apps.

So I am starting a series of short blog posts that basically documents the bumps in the road as I encounter them, as well as how I swerved around them or solved them. I hope other people will benefit from this, especially as I will be showing a lot of moved cheese. And speaking of...

Help! My Standard Shader is broken!

So you had this nice Mixed Reality app that showed these awesome holograms:

and then you decided to upgrade to the Mixed Reality Toolkit 2

and you did not expect to see this. This is typically the color Unity shows when a material is missing or something in the shader is thoroughly broken. And indeed, if you look at the materials:

something indeed is broken.

How to fix this

There is good news, bad news, and slightly better news.

  • The good news - it's easy to fix.
  • The bad news is - you have to do this for every material in your apps that used the 'old' HTK Standard shader
  • The slightly better news - you can do this for multiple materials in one go. Provided they are all in one folder, or you do something nifty with search

So, in your assets select your materials:

Then in the inspector select the Mixed Reality Toolkit Standard Shader (again):

And boom. Everything looks like it should.

Or nearly so, because although it carries the same name, it's actually a different shader. Stuff actually might look a wee bit different. In my sample app, especially the blue seems to look a bit different.

So what happened?

If you look at what Git marks as changed, only the tree materials themselves are marked as changed:

and if you look at a diff, you will see the referenced file GUID for the shader is changed. So indeed, although it carries the same name (Mixed Reality Toolkit Standard), as far as Unity is concerned it's a different shader.

(you might want to click on the picture to be able to actually read this).
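
To spare you the squinting: the relevant line in a .mat file is the shader reference, which looks roughly like this (the GUID here is a made-up placeholder) - and it is that guid value that differs between the old and the new shader:

m_Shader: {fileID: 4800000, guid: 0123456789abcdef0123456789abcdef, type: 2}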

As you scroll down through the diff, you will see lots of additions too, so this is not only a different shader id, it's actually a different or new shader as well. Why they deliberately chose to break the shader ID - beats me. Maybe to make upgrading from one shader to another possible, or to have both the old and the new one work simultaneously in one project, making upgrading easier. But since they have the same name, this might also cause confusion. Whatever - but this is what causes the shader to 'break' at upgrade, and now you know how to fix it, too.

Conclusion

I hope to have eliminated one source of confusion today, and I wish you a lot of fun watching the //BUILD 2019 keynote in a few hours.

You can find a demo project here.

  • Branch "master" shows the original project with HoloToolkit
  • Branch "broken" shows the project upgraded to MRTK2 - with broken shaders
  • Branch "fixed" shows the fixed project

26 April 2019

HoloLens 2 Emulator - showing and manipulating hands in an MRTK2 app

Last week I wrote a first look at the new HoloLens 2 emulator and showed you something of the hand movement in the HoloLens shell using an Xbox One controller. This was pretty hard to do, as the hands were only intermittently displayed. It turns out that if you deploy an app made with the Mixed Reality Toolkit 2, you actually get a lot better graphics assisting you in manipulating things. It takes some getting used to, but I was able to play the piano and press some buttons, just like Julia Schwarz was able to do in her now-famous MWC demo.

This then, looks like this:

As you can see, the mere act of moving the hand past or through the piano keys or the buttons above actually triggers the buttons (if you turn the sound on you can hear the piano and some audio feedback on the buttons too).

This is simply the HandInteractionExamples scene from the MRTK2 dev branch, generated into a C++ app and deployed into the emulator.

To show you how the hands can be moved, I made another little captioned movie:

Using the Xbox controller is a lot easier this way, although I am not quite sure how to do a two-hand manipulation yet, as the sticks can only control one hand at a time (the left or right bumpers determine which hand you control).

17 April 2019

First look at the HoloLens 2 emulator

Intro

Today, without much fanfare, the HoloLens 2 emulator became available. I first saw Mike Taulty tweeting about it, and later more people chiming in. I immediately downloaded it and started to get it to work, to see what it does and how it can be used. The documentation is a bit limited yet, so I just happily blunder along with the emulator, trying some things, and showing you what and how.

Getting it is half the fun

Getting it is easy - from this official download page you can get all the emulator versions, including all versions of the HoloLens 1 emulator - but of course we are only interested in HoloLens 2 now:

Just like the previous instances, the emulator requires Hyper-V. This requires you to have hardware virtualization enabled in your BIOS. Consult the manual of your PC or motherboard on how to do that. If you don't know what I am talking about, for heaven's sake stop here and don't attempt this yourself. I myself found it pretty scary already. If you make mistakes in your BIOS settings, your whole PC may become unusable. You have been warned.

Starting the Emulator from Visual Studio

The easiest way to start is from Visual Studio. If you have installed the whole package, you will get this deployment target. You can choose whether you want debug or release - the latter is faster.

But make sure to use x86 as a deployment target. Otherwise the emulator is not available. The HoloLens 2 may have an ARM processor, but your PC does not. For an app, I just cloned the Mixed Reality Toolkit 2 dev branch, opened up the project with Unity 2018.3.x and built the app. Then I opened the resulting app with Visual Studio. See my previous post on how to do that using IL2CPP (that is, generating a C++ app).

If the emulator starts for the first time in your session, you might see this:

Just click and the emulator starts up. Be aware this is a heavy beast. It might take some time to start, and it might also drag down the performance of your PC somewhat. Accept the elevation prompt, and then most likely Visual Studio will throw an error as it tries to deploy as soon as the emulator has started, but it's far from ready to accept deployment of apps - the HoloLens OS is still booting. After a while you will hear the (for HoloLens users familiar) "whooooooomp" sound indicating the OS shell is starting.

Starting the emulator directly

Assuming you have installed everything in the default folder, you should be able to start the emulator with the following command:

"%ProgramFiles(x86)%\Windows Kits\10\Microsoft XDE\10.0.18362.0\XDE.exe" /name "HoloLens 2 Emulator 10.0.18362.1005" /displayName "HoloLens 2 Emulator 10.0.18362.1005" /vhd "%ProgramFiles(x86)%\Windows Kits\10\Emulation\HoloLens\10.0.18362.1005\flash.vhdx" /video "1968x1280" /memsize 4096 /language 409 /creatediffdisk "%USERPROFILE%\AppData\Local\Microsoft\XDE\10.0.18362.1005\dd.1968x1280.4096.vhdx" /fastShutdown /sku HDE

This has been ascertained using the information in this blog post, which basically does the same trick for HoloLens 1 emulators.

Either way, it will look like this:

If you have followed the Mixed Reality development in the 19H1 Insider's preview, you will clearly recognize that the Mixed Reality crew are aligning HoloLens 2 with the work that has been done for immersive WMR headsets.

Controlling the Emulator

The download page gives some basic information about how you can use keystrokes, mouse or an Xbox Controller to move your viewpoint around and do stuff like air tap and bloom. This page gives some more information, but it indicates it is still for the HoloLens 1 emulator.

However, it looks like most of the keys are in there already. The most important one (initially) is the Escape key, which - just like in the HoloLens 1 emulator - will reset your viewpoint and your hand positions. And believe me, you are going to need them.

Basic control

This is more or less unchanged. You move around using the left stick, you turn around using the right stick. Rotating sideways and moving up/down is done using the D-pad. Selecting still happens using the triggers.

Basic hand control

If you use an Xbox Controller, you will need to do the following:

  • To move the right hand, press the right bumper, and slightly move the left stick. If you move it forward, you will see the right hand moving forward
  • To move the left hand, press the left bumper, and still use the left stick.

Hands are visualized as shown on the right. The little circle visualizes the location of the index finger, the line is a projection from the hand forward, to a location you might activate from afar - like ye olde air tap, although I am not quite sure of the actual gesture in real life.

It's a bit hard to capture in a picture what's happening, so I made a little video of it:

With the right stick, you control the hand's rotation.

Additional hand control

If you click on the red marked icon on the floating menu to the right of the emulator, you will get the perception control window. If you press the right bumper, the right hand panel expands, where you can select a gesture. Having a touch screen then comes in mightily handy, I can tell you.

Some final thoughts (for now)

You can also see the button "Eyes". If you click that, I presume you can simulate eye tracking. But if you do that, the only thing I can see is that you can't move your position anymore. So I am probably missing something here.

I have done more things, like actually deploying an app (the demo shown by Julia Schwarz, the technical lead for the new input model, who so amazingly demoed the HoloLens 2 at MWC) but that's for another time. This really whets my appetite for the real device, but in the meantime, we have this, and need to be patient ;) No code this time, sorry, but there is nothing to code. Just download the emulator and share your thoughts.

11 March 2019

Debugging C# code with Unity IL2CPP projects running on HoloLens or immersive headsets

Intro

My relation with Unity is a complex one. I adore the development environment. It allows me to weave magic and create awesome HoloLens and Windows Mixed Reality apps with an ease that defies imagination for someone who never tried it. I have also cursed them to the seventh ring of hell for the way they move (too) fast and break things. Some time ago Unity announced they would do away with the .NET backend. This does not mean you can't develop in C# anymore - you still do, but debugging becomes quite a bit more complicated. You can find out how to do it in various articles, forum posts, etc. They all have part of the story, but not everything. I hope this fills the gap and shows the whole road to IL2CPP debugging in an easy to find article.

Context

Typically, when you build an app for Mixed Reality, you have a solution with C# code that you use while working inside the Unity Editor. You use this for typing code and trying things out. I tend to call this "the Unity solution" or "the editor solution". It is not a runnable or deployable app, but you can attach the Visual Studio debugger to the Unity editor by pressing Start in Visual Studio, and then the play button in Unity. Breakpoints will be hit, you can set watches, all of it. Very neat.

When you are done or want to test with a device, you build the app. This generates another solution (I call that the deployment solution) that actually is a UWP app. You can deploy that to a HoloLens or to your PC with a Mixed Reality headset attached. This is essentially the same code, but in a different solution. The nice part is that if you compile it for debug, you can also put in breakpoints and analyze code on a running device. Bonus: if you change just some code you don't have to rebuild the deployment solution over and over again to do another test on the device.

Enter IL2CPP (and a bit of a rant, feel free to skip)

Unity, in their wisdom, have decided the deployment solutions in C# are too slow, so they have deprecated the .NET 'backend', and instead of generating a C# UWP solution, they now generate a C++ UWP solution. If you build, your C# code will be rewritten in C++, you will need to compile that C++ and deploy the resulting app to your device. Compilation takes a whole lot longer, if you change as much as a comma you need to build the whole deployment solution again, and the actual running code (C++) no longer resembles any code you have written yourself. And when they released this, you could also forget about debugging your C# code in a running app. Unity did not only move the cheese, they actually blew up part of the whole cheese storehouse.

With Unity 2018.2.x they've basically sent over some carpenters to cover up the hole with plywood and plaster. And now you can sort-of debug your C# code again. But it's a complicated and rather cumbersome process.

Brave new world - requirements

I installed all of the Desktop and UWP C++ development bits, which is probably a bit over the top.

At one point I got complaints about the "VC++ 2015 (140) toolset" missing while compiling so I added that too. This is apparently something the Unity toolchain needs. Maybe this can be more efficient, needing less of this stuff, but this works on my machine. I really don't know anything about C++ development. I tried somewhere in the mid 90s and failed miserably.

Also crucial: install the Visual Studio tools for Unity, but chances are you already have, because we needed this with the .NET backend too:

I did uncheck the Unity Editor option, as I used Unity 2018.3.6f1 instead of the one Visual Studio tries to install. I tend to manage my Unity installs via the Unity Hub.

Build settings

In Unity, I use these settings for building the debuggable C++ app:

I am not entirely sure if the "Copy References" option is really necessary, but I have added it anyway. The warning about missing components is another nice Unity touch - apparently something is missing, but they don't tell you what. My app is building, so I suppose it's not that important for my setup.

App capability settings

Now this one is crucial. To enable debugging, Unity builds a specialized player with some kind of little server in it that enables debuggers to attach to it. This means it needs to have network access. The resulting app is still a UWP app, so its network capabilities need to be set. You can do that either in the resulting C++ solution's manifest or in the Unity editor, using the "Player Settings" button. Under "Publishing Settings" you will find this box where you can set capabilities:

I just added all network related stuff for good measure. The advantage of doing it here is that it will be added back even if you need to rebuild the deployment solution from scratch. The drawback is that you might forget to remove capabilities you don't need and you will end up with an app asking for a lot of capabilities it doesn't use. For you to decide what works best.
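
If you prefer to check or edit the generated C++ solution's Package.appxmanifest by hand instead, the network related capabilities end up as entries roughly like these (a sketch, assuming you ticked the three network capabilities):

<Capabilities>
  <Capability Name="internetClient" />
  <Capability Name="internetClientServer" />
  <Capability Name="privateNetworkClientServer" />
</Capabilities>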

Selecting the IL2CPP backend

In case Unity or the MRTK2 does not do this for you automatically, you can find this setting by pressing the Player Settings button as well. In "Other settings" you can find the "Scripting Backend". Set this to IL2CPP.
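
If you would rather script this than click through the UI, a minimal editor-only sketch (a hypothetical helper, not something the MRTK2 provides) could look like this:

#if UNITY_EDITOR
using UnityEditor;

// Hypothetical editor helper that switches the UWP scripting backend to IL2CPP,
// equivalent to changing the Scripting Backend drop down by hand
public static class ScriptingBackendSwitcher
{
    [MenuItem("Tools/Set UWP Scripting Backend to IL2CPP")]
    public static void SetIl2Cpp()
    {
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.WSA, ScriptingImplementation.IL2CPP);
    }
}
#endif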

Building and deploying the UWP C++ app

A C++ UWP app generated by Unity looks like this:

Now, maybe this is obvious for C++ developers, but make sure the project that is labeled "Universal Windows" is the startup project. I was initially thrown off kilter by the "Windows Store 10.0" link and assumed that was the startup project.

It is important to build and deploy the app for Debug; that has not changed since the .NET backend days. Choose the target and processor architecture as required by your device or PC.

Make sure the app actually gets deployed to wherever you want to debug it. Use deploy, not build (from the Build menu in Visual Studio).

And now for the actual debugging

First, start the app on the machine where it needs to be debugged - be it a PC or a HoloLens, that does not matter.

Go back to your Unity C# ('editor') solution. Set breakpoints as desired. And now comes the part that really confused me for quite some time. I am used to debug targets showing up here.

But they never do. So don't go there. This is only useful when you are debugging inside the Unity Editor. Instead, what you need to do is go to the Debug menu of the main Visual Studio window and select "Attach Unity Debugger".

I've started the app on both my HoloLens and as a Mixed Reality app on my PC, and I can now choose from no less than three debug targets: the Unity editor on my PC, the app running on the HoloLens, and the app running on the PC.

"Fudge" is the name of the gaming quality rig kindly built by a colleague a bit over a year ago, "HoloJoost" is my HoloLens. I selected the "Fudge" player. If you select a player, you will get an UAC prompt for the "AppContainer Network Isolation Diagnostics Tool". Accept that, and then this pops open:

Leave this alone. Don't close it, don't press CTRL-C.

Now just go over to your Mixed Reality app, be it on your HoloLens or your Immersive Headset, and trigger an action that will touch code with a breakpoint in it. In my case, that happens when I tap the asteroid:

And then finally, Hopper be praised:

The debugger is back in da house.

Conclusions

This is not something I get overly happy about, but at least we are about three-quarters of the way back to where we were before. We can again debug C# code in a running app, but with a more convoluted build process, less development flexibility, and the need to install the whole C++ toolchain. But as usual in IT, the only way is forward. The Mixed Reality Toolkit 2, which is used to build this asteroid project, requires Unity 2018.2.x. HoloLens 2 apps will be built with MRTK2 and thus we will have to deal with it, move forward and say goodbye to the .NET backend. Unless we don't want to build for HoloLens 2 - which is no option at all for me ;)

No test project this time, as this is not about code but mere configuration. I will start blogging MRTK2 tidbits soon, though.

Credits

There is a host of people who gave me pieces of the puzzle that made it possible for me to piece the whole thing together. In order of appearance: