26 September 2019

Migrating to MRTK2 - setting up and understanding Eye Tracking

Intro

One of the exciting features HoloLens 2 brings us is Eye Tracking. On HoloLens 1, you had to move your whole head to move the gaze cursor, and while that works well enough for a lot of applications and most people seem to have gotten used to it pretty quickly, Mother Nature has equipped us with roving eyes. HoloLens 2, when calibrated for Eye Tracking, can actually track what you are looking at, not merely where your head is pointed.

Although there is a nice demo in the Mixed Reality Toolkit 2, it took me a while to find out how all the events actually work and how they need to be hooked up to get things working consistently. So I made a little demo that works like this:

Events and tracking them

The little blue globe is the target. It is equipped with an EyeTrackingTarget script from the MRTK2 that supports five events, which you can see going off as the red spheres turn green:

  • LS: On Look At Start
  • WL: While Looking At Target
  • LA: On Look Away
  • DW: On Dwell
  • S: On Selected

The EyeTrackingTarget is configured as follows:

In the scene, the whole thing showing the images (the five little red-turning-green globes with labels) is one prefab containing 5 little spheres, each with a label above it - and each a prefab of its own. Every sphere has a "Single Shot Controller" script that turns its sphere green for 0.5 seconds when an event is called.

It's a super simple script, the interesting part is even shorter:

public void ShowActivated()
{
    // Record the moment an event fired; Update picks this up
    _timeActivated = Time.time;
}

void Update()
{
    // Show the activated (green) color until _resetTime seconds have passed
    var desiredColor = Time.time - _timeActivated > _resetTime ? 
        _originalColor : _activatedColor;
    if (_material.color != desiredColor)
    {
        _material.color = desiredColor;
    }
}

When ShowActivated is called, the _timeActivated field is set to 'now'. The Update loop then checks every frame whether it should show the original red or the activated green color, depending on whether the latest call to ShowActivated was more than half a second ago.
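For completeness, the rest of the script is little more than the fields the snippet uses and a Start method that caches the material. A minimal sketch of what that could look like - the field names match the snippet above, but the exact initialization is my own assumption, not copied from the demo project:

[SerializeField]
private float _resetTime = 0.5f;

[SerializeField]
private Color _activatedColor = Color.green;

private Material _material;
private Color _originalColor;
private float _timeActivated = float.MinValue;

void Start()
{
    // Cache the sphere's material and remember its original (red) color
    _material = GetComponent<MeshRenderer>().material;
    _originalColor = _material.color;
}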

What happens when

The event names are pretty straightforward, and things happen more or less as you expect - although not entirely.

What is actually happening:

  • When the user first looks at the eye tracked object, "On Look At Start" is fired once
  • While the user keeps looking, "While Looking At Target" keeps being called, so the sphere stays green. The first call seems to happen at the same instant - or nearly the same instant - as the previous event
  • As soon as the user stops looking at the sphere, "On Look Away" is called and "While Looking At Target" stops being called
  • "On Dwell" is being called after the time defined in the "Dwell Time in Sec" slider has passed has the user is still looking at the object. I took the ridiculously user-unfriendly time of three seconds to make sure this event was easily distinguishable from the other events. Here's the thing though - it's being called once. That kind of confused me.
  • "On Selected" is being called when then object being looked at and you say "Select". This is one of the predefined commands in the default speech commands profile (DefaultMixedRealitySpeechCommandsProfile)

Setting up and configuring eye tracking in profiles

Coming from the default profile, you will need to configure at least two profiles, and better still three.

First, you will need to clone the Default Toolkit profile itself. The first thing I do, while still in the early phases, is disable the diagnostics system, as I don't want that profiler in my face the whole time:

Next, you will have to clone the Input System Profile and add a "Windows Mixed Reality Eye Gaze Provider":

As you can see, this sits in namespace "Microsoft.MixedReality.Toolkit.WindowsMixedReality.Input"

And finally, optionally, if you want this to work in the editor too, you will have to configure the Input Simulation Service. You do that by cloning the default input simulation profile and checking the "Simulate Eye Position" checkbox.

One more thing: setting capabilities

You will now notice the Gaze cursor turning up in the editor, so you might think you are done. Well, almost. There's the small matter of capabilities. C++ or not, the result is still a UWP app, and Gaze Input is a capability that you need to ask consent for. Setting this, unfortunately, is not yet implemented in Unity. So after you have generated the C++ app, you will need to open it in Visual Studio, open the Package.appxmanifest file, and select the Gaze Input capability there.
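If you prefer editing the manifest XML directly instead of using the designer, the capability should end up looking something like this - a sketch; double check the exact element against what the manifest designer generates:

<Capabilities>
  <DeviceCapability Name="gazeInput" />
</Capabilities>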

 If you deploy the resulting solution to an emulator (or, if you are one of the lucky ones out there, an actual HoloLens 2) and it asks for your consent, you did it right.

Conclusion and final words

Setting up Eye Tracking is not that hard, but it takes a few steps. Mind you, the MRTK2 comes with a few profiles that make setting things up easier - I just wrote down the steps from scratch. The demo project shows this in all its glory ;) and allows you to play with it yourself without having to set it up. Notice there's hardly any code outside of the MRTK2 itself involved - there's only one custom script (my SingleShotController) and that's very simple.

By the way - in my own (so far single) HoloLens 2 app I only use the While Looking At Target event. This seems to be the most trustworthy. In previous iterations of the MRTK2 and/or the HoloLens emulator the other events did not go off reliably enough (IMHO) to use them for real. Of course, this may all be different now, and most likely is completely different (better) on a real device. We are still waiting for that.

On a final note – leaving the eye cursor visible can be confusing and/or annoying, or so I have been told. So under normal circumstances it should be turned off – the object being looked at should give some indication that it notices being looked at. I had found a way to do this myself, but that's pretty complex, and just as I was about to blog about it, Julia Schwarz herself added a (better) sample for turning off pointers by code to the main MRTK2 repo.

05 September 2019

Migrating to MRTK2–submitting a HoloLens 2 app to the Microsoft Store

Intro

On Monday September 2 version 4.0.19 of my first HoloLens Store app, AMS HoloATC, became available in the Microsoft Store. This version has been completely rebuilt using the Mixed Reality Toolkit 2, and includes some HoloLens 2–only functionality: you can actually touch the airplanes now, and using gaze tracking it will show you a picture of the actual aircraft, if available.

Hoops to jump

As you might recall from earlier posts, things have changed quite a bit when it comes to actually deploying apps on HoloLens. We now need to compile and submit a Unity-generated C++ solution to the Store. Although the process looks very much like what we used to do for Unity apps running on the .NET backend, there are three things you might run into:

  1. Your WACK test will most likely fail
  2. If you have submitted your app as a bundle before, make sure you submit it as a bundle again. A Unity generated C++ solution does not have this as a default setting
  3. If you create (like me) an app that is supposed to run on Desktop (in immersive headsets), HoloLens 1 and HoloLens 2, you may find out your app cannot be downloaded by a HoloLens 1 anymore – or the HoloLens 2 emulator, for that matter

Fixing the WACK fail

Before you actually submit an app to the Store, you run the Windows App Certification Kit (WACK) test first, to prevent embarrassing, easy-to-prevent failures, right? (Right?) And if you do so, you will see it fail. It will spout quite a few errors at you.

  • The Windows security features test will complain about:
    • HolographicAppRemoting.dll has failed the AppContainerCheck check.
    • PerceptionDevice.dll has failed the AppContainerCheck check.
    • UnityRemotingWMR.dll has failed the AppContainerCheck check.
  • The Supported API test will list 10 errors concerning UnityRemotingWMR calling unsupported APIs
  • The Debug configuration test will tell you UnityRemotingWMR is only built in debug mode
  • And if you try to build for x86 or ARM, the Package sanity test will tell you HolographicAppRemoting.dll, PerceptionDevice.dll and UnityRemotingWMR.dll are only available for x64.

The solution is a bit weird, but can be found in this Unity forum post, and involves manually hacking the “Unity Data.vcxitems” file that is inside your store project. Open it in a text editor, and search for “HolographicAppRemoting”. This will show this piece of XML:

<None Include="$(MSBuildThisFileDirectory)HolographicAppRemoting.dll">
  <DeploymentContent>true</DeploymentContent>
  <ExcludeFromResourceIndex>true</ExcludeFromResourceIndex>
</None>

Now simply change the value “true” inside the DeploymentContent element to “false”:

<None Include="$(MSBuildThisFileDirectory)HolographicAppRemoting.dll">
  <DeploymentContent>false</DeploymentContent>
  <ExcludeFromResourceIndex>true</ExcludeFromResourceIndex>
</None>

Repeat this for PerceptionDevice and UnityRemotingWMR. Rebuild your app, generate packages again and presto, your WACK test will pass. That is literally all that’s needed to get rid of this multitude of errors.

Bundling your app

I don’t know exactly what changed, but all my HoloLens apps that I created with previous Unity versions were uploaded as bundles. To be honest, I never paid much attention to it. But the default setting of the generated C++ solution is this:

which generates an appx per platform (in my case three: one for x64, one for x86 and one for ARM for, in the same order, WMR immersive headsets, HoloLens 1 and HoloLens 2). If you try to upload those files as updates to an app that was previously submitted as a bundle you will be greeted with:

And this can simply be fixed by changing the setting “Generate app bundle” from “If needed” to “Always”

Make your app (still) downloadable for HoloLens 1

To be honest, 4.0.19 was not the first HoloLens 2 enabled version I submitted. That was 4.0.17. It got certified – as one of the first, if not the very first, indie HoloLens 2 apps. I was very happy about this – for about 25 seconds. And then I got a very unpleasant surprise: I could not download it anymore on a HoloLens 1. Sure enough, you could find it in the Store, but the “Install” button was greyed out (well, light blue instead of dark blue, but in any case inoperable). Curiously enough, a HoloLens that had it already installed did get the updated version though.

The reason for this behavior can be found in this post in the Unity forums. Basically, Unity dropped support for anything lower than DirectX 10 and this is now listed in the app’s store manifest. Unfortunately, when the Store on the HoloLens 1 (and the HoloLens 2 emulator, incidentally) checks for DirectX 10, the device apparently reports “don’t have that” and the Store consequently blocks the download.

Now I think this will be fixed shortly, but in the meantime here’s a workaround if you need to do a submission right now:

First, open the Package.appxmanifest file in a text editor. Find these lines:

<TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.16299.0" MaxVersionTested="10.0.18362.0" />
<TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.16299.0" MaxVersionTested="10.0.18362.0" /

Comment out the second line. Then proceed to build a bundle, but for x64 only.
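After commenting out the second line, the manifest should look like this:

<TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.16299.0" MaxVersionTested="10.0.18362.0" />
<!-- <TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.16299.0" MaxVersionTested="10.0.18362.0" /> -->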

Go back to Package.appxmanifest, comment out the first line, and re-activate the second. Now find the StoreManifest.xml file – open it in a text editor. It should look like this:

<?xml version="1.0" encoding="utf-8"?>
<StoreManifest xmlns="http://schemas.microsoft.com/appx/2015/StoreManifest">
    <Dependencies>
        <DirectXDependency Name="D3D11_HWFL_10_0" />
    </Dependencies>
</StoreManifest>

Simply remove the line <DirectXDependency Name="D3D11_HWFL_10_0" />

Now build a package for x86 and ARM. I am not sure if this is essential, but I made sure the x86/ARM bundle had a release number one point higher than the x64 one.

Now proceed to upload both bundles into a submission and set check boxes as needed. In my store submissions it looks like this:

As you can see, version 4.0.17 still contains all platforms, but that is not necessary. Because 4.0.19 has a higher version number, it will be offered first to HoloLens 1 and 2.

Anyway, your app, once certified, should now be downloadable for all devices. On x64 the DirectX 10 check will still be in place; for other devices it’s disabled.

Conclusion

It’s early days for HoloLens 2 (I built my app without having direct access to it) but I think it’s pretty cool to have an app armed and ready for it. It takes some fiddling around with XML files to get it right, but I am sure things will be better soon and these workarounds won’t be necessary anymore.

Enjoy building the next generation Mixed Reality apps!

13 August 2019

Migrating to MRTK2–interacting with the Spatial Map

Intro

One of the HoloLens’ great features is the ability to interact with real physical objects. This allows apps to place holograms on or adjacent to real objects, enables occlusion (the ability to let holograms appear to be hidden because they disappear behind physical objects), etc. This is all done using the Spatial Map, a graphical representation of whatever the HoloLens has observed to be present in physical reality. Interacting with the Spatial Map used to be easy – and it actually still isn't that hard; it’s just that - as with most things in the MRTK2 - quite some cheese has been moved.

This blog post handles a common and a not so common scenario for interacting with the Spatial Map:

  1. Placing objects on the Spatial Map
  2. Programmatically enabling and disabling/clearing the Spatial Map

I have included a demo project that allows you to place cylinders on the Spatial Map by air tapping - and you can turn the Spatial Map on and off using a floating button.

Placing objects on the Spatial Map, MRTK2 style

I wrote about this already in November 2017 in my article about finding the floor using a HoloLens. In the MRTK2, that process is a bit different. The idea is still the same: create a raycast from the camera along the camera viewing direction and try to hit the Spatial Map. For this, you need the Spatial Map layer mask. In the HoloToolkit you could simply access

SpatialMappingManager.Instance.LayerMask

to get to that layer mask. Finding that now is a wee bit more complicated. You see, first, you need to extract the configuration from the Spatial Awareness System service like this:

var spatialMappingConfig =
    CoreServices.SpatialAwarenessSystem.ConfigurationProfile as
        MixedRealitySpatialAwarenessSystemProfile;

The spatial mapping config contains a property called ObserverConfigurations, containing a list of configurations (apparently making provision for the possibility that there is more than one configuration). For each configuration you can take the profile from its ObserverProfile property - which you have to cast to MixedRealitySpatialAwarenessMeshObserverProfile. Then you find the layer used by this config in its MeshPhysicsLayer property.

I repeat - you can find the layer.

That is not the layer mask. It took me quite some time debugging to find out what was going on here - because if you feed that layer number into the raycast, it won't 'see' the Spatial Map. I have no idea why this was changed. Anyway, to get the layer mask, as required by raycast methods, you have to bit shift the actual layer number, like this

1 << observerProfile.MeshPhysicsLayer

So what used to be a single property, now requires this method:

private static int GetSpatialMeshMask()
{
    if (_meshPhysicsLayer == 0)
    {
        var spatialMappingConfig = 
          CoreServices.SpatialAwarenessSystem.ConfigurationProfile as
            MixedRealitySpatialAwarenessSystemProfile;
        if (spatialMappingConfig != null)
        {
            foreach (var config in spatialMappingConfig.ObserverConfigurations)
            {
                var observerProfile = config.ObserverProfile
                    as MixedRealitySpatialAwarenessMeshObserverProfile;
                if (observerProfile != null)
                {
                    _meshPhysicsLayer |= (1 << observerProfile.MeshPhysicsLayer);
                }
            }
        }
    }

    return _meshPhysicsLayer;
}

private static int _meshPhysicsLayer = 0;

And I added a static backing variable to speed up this process, otherwise this whole loop would run 60 times a second in my TapToPlaceController, as well as every time you air tap to place a cylinder.

The method to find a point on the Spatial Map is then simply this:

public static Vector3? GetPositionOnSpatialMap(float maxDistance = 2)
{
    RaycastHit hitInfo;
    var transform = CameraCache.Main.transform;
    var headRay = new Ray(transform.position, transform.forward);
    if (Physics.Raycast(headRay, out hitInfo, maxDistance, GetSpatialMeshMask()))
    {
        return hitInfo.point;
    }
    return null;
}

This sits in the updated LookingDirectionHelpers class. In the demo project you can see how it is actually used.

In the TapToPlaceController, the Update method will flip the text from "Please look at the spatial map max 2m ahead of you" to "Tap to select a location" when the gaze strikes the Spatial Map (and the Spatial Map ONLY, not another hologram).

protected override void Update()
{
    _instructionTextMesh.text =
         LookingDirectionHelpers.GetPositionOnSpatialMap(_maxDistance) != null ?
         "Tap to select a location" : _lookAtSurfaceText;
}

If you then air tap, it will place a squashed cylinder on the Spatial Map at the place you are looking at. This is done in the OnPointerDown method, using the same call to LookingDirectionHelpers.GetPositionOnSpatialMap to get a point to place the cylinder.
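The actual implementation is in the demo project on GitHub; a minimal sketch of the idea could look like this (the cylinder scale is my own assumption, not taken from the demo):

public void OnPointerDown(MixedRealityPointerEventData eventData)
{
    var position = LookingDirectionHelpers.GetPositionOnSpatialMap(_maxDistance);
    if (position != null)
    {
        // Create a squashed cylinder at the point where the gaze hits the Spatial Map
        var cylinder = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
        cylinder.transform.localScale = new Vector3(0.1f, 0.01f, 0.1f);
        cylinder.transform.position = position.Value;
    }
}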

You will notice a floating cube as well. You can't place a cylinder on the cube - the raycast only finds the Spatial Map. Demonstrating that you can't place a cylinder on it is the cube's sole purpose ;). What might happen is that you place a cylinder behind the cube on the Spatial Map, if your opposite wall is closer than 2 meters. It requires additional logic to handle that situation, but that is beyond the scope of this blog post.

Starting, stopping and clearing the Spatial map

For some apps, most notably my AMS HoloATC app, the Spatial Map is used to help get an initial place to put an object, but then it needs to go away, so as not to get the view blocked by occlusion. Making the Spatial Map transparent sometimes helps, but then the walls still get in the way of selecting objects as they block the gaze and other cursors. Long story short – it is sometimes desirable to be able to turn the Spatial Map on and off. And this is actually pretty simple:

public void ToggleSpatialMap()
{
     if( CoreServices.SpatialAwarenessSystem != null)
     {
         if( IsObserverRunning )
         {
             CoreServices.SpatialAwarenessSystem.SuspendObservers();
             CoreServices.SpatialAwarenessSystem.ClearObservations();
         }
         else
         {
             CoreServices.SpatialAwarenessSystem.ResumeObservers();
         }
     }
}

Note that “ClearObservations” is necessary, as merely calling Suspend only stops the updating of the Spatial Map – the graphic representation still stays active. This was actually added after feedback from yours truly ;)

As to checking whether or not the observer is / observers are actually running I have devised this little trick

private bool IsObserverRunning
{
     get
     {
         var providers =
           ((IMixedRealityDataProviderAccess)CoreServices.SpatialAwarenessSystem)
             .GetDataProviders<IMixedRealitySpatialAwarenessObserver>();
         return providers.FirstOrDefault()?.IsRunning == true;
     }
}

I check if there’s an observer and assume that if the first one is running, so is probably the rest. Although in practice, on a HoloLens, there will be only one observer running anyway.

You can activate and de-activate the Spatial Map by pressing the floating button, to which the SpatialMapToggler behaviour is attached.

Conclusion

If you run and deploy the demo project you will find a button floating before you (in the direction you looked when the app started) that you can use to toggle the Spatial Map, and to the right a little cube. In addition, a text floating in your vision instructs you either to look at the spatial map, or to air tap when you actually do – and then a cylinder will appear. Like in this little video:

30 July 2019

Fixing error Failed to locate “CL.exe” or MSB8020 when deploying IL2CPP solution

Symptom

You have created a Unity project to create an app using MRTK2, and you want to use the new IL2CPP backend. You open the solution in Visual Studio 2019, you try to deploy it by using Build/Deploy and all the way at the end the compiler complains about “CL.exe” missing.

Alternatively, you might get the slightly more verbose error:

error MSB8020: The build tools for Visual Studio 2017 (Platform Toolset = 'v141') cannot be found. To build using the v141 build tools, please install Visual Studio 2017 build tools.  Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution".

Cause

You have most likely used the recommended Unity version (2018.4.2f1) to create the project. This version – the name gives it away – was released before Visual Studio 2019, and therefore assumes the presence of Visual Studio 2017 and its accompanying C++ tool set, ‘v141’. So Unity generated a C++ solution referencing that tool set.

But now it’s 2019, you have kissed Visual Studio 2017 goodbye and installed Visual Studio 2019. And that comes with tool set v142.

Solution

Either you install v141 using the Visual Studio Installer, or you tell the generated solution to use v142. I personally prefer the latter, because newer is always better, right? ;)

Simply right-click the project in the solution that has “(Universal Windows)” behind its name, select Properties, tab General, and the problem is already pretty evident:

Simply select Visual Studio 2019 (v142) for Platform Toolset and you are good to go. This setting will stay as long as you don’t delete the generated project – Unity will simply change what needs to be changed, and leave as much as it can (to speed up the generation process).
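If you prefer, the same change can also be made directly in the generated .vcxproj file with a text editor - the relevant MSBuild property looks something like this (a sketch; in practice it may appear once per configuration):

<PropertyGroup Label="Configuration">
  <PlatformToolset>v142</PlatformToolset>
</PropertyGroup>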

Conclusion

Simple fix, but can be hard to find. Hence a simple blog about it

29 July 2019

Minimal required software for MRTK2 development for HoloLens 2 and Immersive headsets

Intro

A short one this time – and codeless too. You see, next Saturday I will be giving a workshop for MixUG Netherlands about development with the Mixed Reality Toolkit 2 for Immersive headsets, together with my colleague, partner in crime and fellow MVP Alexander Meijers. One of the things that came up preparing for this workshop was what you would actually need to develop with the Mixed Reality Toolkit 2. Since ye olden days of the HoloToolkit, quite a few things have changed – Unity, the minimal OS version, and there’s even a new version of Visual Studio. So I set out to compile a minimal shopping list with a few optional items. Fortunately, our friends over at Microsoft Azure make it quite simple to spin up a totally pristine machine so you don’t run into the typical developer machine issues – multiple versions of Visual Studio with different workloads and a myriad of Unity versions – which sometimes makes it hard to tell what is required for what app.

OS version

Easy one. Windows 10, 1809 or (recommended) 1903. Everything I tested, I tested on Windows 10 Pro

Visual Studio

You will need Visual Studio 2019 Community edition. 2017 will work too, but is much slower. Download Visual Studio 2019 Community from this link and choose the following workloads:

  • UWP development with optional components USB connectivity and C++ (V142) UWP tools checked
  • Game development with Unity with the optional component 2018.3 64-bit editor unchecked

In images:

Make sure you install Visual Studio before Unity.

Offline installer

A fun trick – if you want to make an offline installer for the community edition for these particular workloads, open a command prompt after downloading the installer, and type (on one line):

vs_community.exe --layout c:\vsinstaller
--add Microsoft.VisualStudio.Workload.ManagedGame
--add Microsoft.VisualStudio.Workload.Universal
--add Microsoft.VisualStudio.Component.Windows10SDK.IpOverUsb
--add Microsoft.VisualStudio.ComponentGroup.UWP.VC --lang en-US

In c:\vsinstaller you will then find a complete install ‘layout’ for all the necessary components. Might be useful if you want to prepare multiple computers.

Unity

2018.4.2f1, taken from ProjectSettings/ProjectVersion.txt in the mrtk_development branch. This particular version can be downloaded directly from this link.

Choose as minimal components

  • Unity 2018.4.2f1
  • UWP Build Support

Mind you – this sets you up for HoloLens 2 and Windows Mixed Reality Immersive headsets only.

Optional – HoloLens 2 emulator

I have already written extensively about it. You can get it here. Be aware that it requires Hyper-V to be installed. If you have installed Windows 10 1903, it will run right away. On 1809 you will need some trickery.

Conclusion

It’s not that hard to get up and running for MRTK2 development for HoloLens 2 and Windows Mixed Reality Immersive headsets. And now you have a nice complete ‘shopping list’ for when you want to prepare your PC.

14 July 2019

Migrating to MRTK2–manipulating holograms by grabbing

Intro

To be honest, the title of this blog post is a bit weird, because in Mixed Reality Toolkit 1 the concept of grabbing was unknown, as HoloLens 1 does not support this kind of gesture. But nevertheless, as I am on this quest of documenting all the gems I discover while migrating an existing app to Mixed Reality Toolkit 2, this is one of the things I came across, so I am shoehorning it into this blog post series – the 8th installment of it already. And the fun thing about this one is that although there is a demo project available, I am going to write no code at all. The whole concept of manipulation by grabbing can be achieved by simply dragging MRTK2 components on top of a game object.

'Far manipulation'

This is really extremely simple. If you want to make a cube draggable in the 'classic' sense - that is, point a cursor to it, pinch, move your hand, and the cube follows - all you have to do is add a ManipulationHandler to the cube, with default settings:

And then you simply point the 'hand ray' to it, pinch and move:

But as you could see, I can only drag it. I can't move it anymore - or rotate it - as my hand comes closer, like at the end of the movie. In fact, I can't do anything anymore.

Allow grabbing and moving

For that, we will need to add another script: Near Interaction Grabbable.

And now, if the hand comes close to the cube, you can do all kinds of crazy stuff with it

Some settings to consider

  • If you don't want to allow 'far manipulation' (the first type) but only want to allow manipulation by grabbing, you can uncheck "Allow Far Manipulation" on the ManipulationHandler.
  • If you want to see where the actual grab connection point is, check the "Show Tether When Manipulating" checkbox on Near Interaction Grabbable. This will look like this:

I bet there are more settings to consider, but I haven't tried those yet (or felt the need to do so).

Conclusion

The code of this completely code-less sample can be found here. I can't wait to add code like this to real-world HoloLens 2 projects. But alas, we still need to wait for the device :)

09 July 2019

Migrating to MRTK2– handling tap, touch and focus ‘manually’ (in code)

Wait a minute – you did handle tap before, right?

Indeed, dear reader, I did. But I also had signed up for a MixUG session on Wednesday July 3. And while making demos for that I learned some other ways to handle interaction. Once again it shows that the best way to learn things is to try to teach them – because the need to explain things induces the need to actually obtain a deeper knowledge.

Ye olde way

In the MRTK 1, it was thus:

  • Handle tap – implement IInputClickHandler
  • Handle drag – implement IManipulationHandler
  • Handle focus – implement IFocusable
  • Handle touch – forget it. ;)

The new way

In the MRTK 2 it is now

  • Handle tap – implement IMixedRealityPointerHandler
  • Handle drag – see above
  • Handle focus – implement IMixedRealityFocusHandler
  • Handle touch – implement IMixedRealityTouchHandler

Now I am going to ignore drag for this tutorial, and concentrate on tap, focus and touch.

IMixedRealityPointerHandler

This requires you to implement four methods:

  • OnPointerDown
  • OnPointerDragged
  • OnPointerUp
  • OnPointerClicked

OnPointerClicked basically intercepts a tap or an air tap, and will work as such when you deploy the demo project to a HoloLens 1. After being bitten by people tapping just a tiny bit too slow and therefore not getting a response, I tend to implement OnPointerDown rather than OnPointerClicked to capture a 'tap' event, but that's a matter of preference.
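A minimal implementation of just this interface could look something like the sketch below - note this is merely an illustration, not the demo project's actual responder script:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class TapResponder : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        // Respond here if you want to catch the 'tap' as early as possible
        Debug.Log($"Pointer down on {gameObject.name}");
    }

    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }

    public void OnPointerUp(MixedRealityPointerEventData eventData) { }

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        Debug.Log($"Clicked {gameObject.name}");
    }
}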

IMixedRealityFocusHandler

You will need to implement:

  • OnFocusEnter
  • OnFocusExit

The method names are the same as in MRTK1, only the signatures are not - you now get a parameter of type FocusEventData, which gives you some more information: by what the object was focused (we have multiple ways of doing that now!), what the previously focused object was, and what the newly focused object is.

IMixedRealityTouchHandler

This requires you to implement

  • OnTouchStarted
  • OnTouchCompleted
  • OnTouchUpdated

But there is a twist to that. As we will soon see.

Show-off

To show off how it all works, I have created a little demo project. You can run it either in the emulator or the Unity editor (or a HoloLens 2, if you are in the HoloLens team or part of a very few select parties - I am unfortunately neither).

I have created a little script CodedInteractionResponder that shows off how this works. This script implements all three interfaces I just wrote about. If you open the demo project in Unity, it shows itself like this. All three cubes have the script attached to them.

The text above the cubes will show how many times a cube has been either focused, touched or clicked. If you press play and then the space bar, the right hand will appear (or use ctrl for the left hand). Moving the hand can be done using the mouse - if you move the hand ray over the cubes it will trigger a focus event, if you tap the left mouse button you will trigger a tap, and if you move the hand towards the cube (using the WASD keys) it will trigger a touch event.

That is to say - you would expect that. But that is not always the case

What happens is this:

  • You can click or focus the green cube, but you cannot touch it. Nothing happens if you try.
  • You can click, focus or touch the red cube, but if you touch it, the number of times it's clicked increases - not the number of touches.
  • Only the blue cube works as expected.

Yet they all have the CodedInteractionResponder. How does this compute?

NearInteractionTouchable

The best way to explain this is an image showing the bottom half of all three cubes:

The green cube is missing the NearInteractionTouchable. This script is necessary for touch events to be fired at all. So unlike IMixedRealityPointerHandler and IMixedRealityFocusHandler, where merely implementing the interface will trigger an event, a touch event - that is, the methods in IMixedRealityTouchHandler being called - requires the addition of a NearInteractionTouchable script.

And NearInteractionTouchable has another trick up its sleeve. Suppose you have a button - whether it's (air) tapped or actually touched/pressed, you want to activate the same code. If you change "Events to Receive" from its default "Touch" to "Pointer" (as I did with the red cube), touching the cube will actually trigger a pointer event. This saves you a few lines of code. So basically NearInteractionTouchable can act as a kind of event router. And this is why the red cube never shows a touch event - but a click event instead.

Be aware NearInteractionTouchable needs a collider to work on. This collider needs to be on the same object the script is on. So if you make an empty game object as a hat stand for a bunch of smaller game objects, make sure to manually add a collider that envelops all the game objects, otherwise the 'touch' won't seem to work.

What, no code?

Yes, there is code, but it's pretty straightforward and if you want to have a look at CodedInteractionResponder, have a look on GitHub. It's actually so simple I felt it a little bit overdone to repeat parts of it verbatim in this blog post itself.

19 June 2019

Migrating to MRTK2 - missing Singleton and 3DTextPrefab

Intro

If you are migrating from the HoloToolkit to Mixed Reality Toolkit 2 'cold turkey', as I am doing for my AMS HoloATC app, a lot of things break, as I already said in the first post of this series. For things that you can tap, you can simply change the implementing interface from IInputClickHandler or IManipulationHandler to a couple of other interfaces and change the signatures a bit - that's not complex, only tedious, depending on how much you have used them.

What I found really hard was the removal of the Singleton class and the 3DTextPrefab. I used both quite extensively. The first one I needed for things like data access classes, as the concept of services introduced in the Mixed Reality Toolkit 2 was not yet available, and the other... well, basically all my texts were 3DTextPrefabs, so any kind of user feedback in text format was gone. Because so much breaks at the same time, it's very hard to rebuild your app step by step to a working condition. Basically you have to change everything before something starts to work again. Since I was still learning by doing, there was no way to test if I was doing things more or less right. I got stuck, and took a radical approach.

Introducing HoloToolkitCompatiblityPack

I have created a little Unity Package that contains the things that made it hard for me to do a step-by-step migration to the MRTK2, and christened it the HoloToolkitCompatiblityPack. It contains a minimal amount of scripts and meta files to have Singleton and 3DTextPrefab working inside an MRTK2-built app. As I will be migrating more apps, I will probably update the package with other classes that I need. You can find the package file here and the project here. If you take your existing HoloToolkit based app, yank out the HoloToolkit, replace it by the MRTK2, then import the HoloToolkitCompatiblityPack package, you at least have a few less things to fix to get your app back to a minimal state of function.


Caveat emptor

Yes, of course you can use the HoloToolkitCompatiblityPack in your production app, and ship a kind of Frankenbuild using both MRTK2 and this. Do not let yourself be tempted to do that. See this package as a kind of scaffolding, or a temporary beam to hold up the roof while you are replacing a bearing wall. For 3DTextPrefab I tend to turn a blind eye, but please don't use Singleton again. Convert those classes into services one by one. Then remove the Singleton from the HoloToolkitCompatiblityPack to make sure everything works without it. This is for migration purposes only.

Take the high road, not the low road of technical debt.

Conclusion

Making this package helped me forward with the migration quite a lot. I hope it helps others too. I'd love to hear some feedback on this.

29 May 2019

Migrating to MRTK2 - looking a bit closer at tapping, and trapping 'duplicate' events

Intro

In my previous post I wrote about how game objects can be made clickable (or 'tappable') using the Mixed Reality Toolkit 2, and how things changed from MRTK1. And in fact, when you deploy the app to a HoloLens 1, my demo actually works as intended. But then I noticed something odd in the editor, and made a variant of the app that went with the previous blog post to see how things work - or might work - in HoloLens 2.

Debugging ClickyThingy ye olde way

Like I wrote before, it's possible to debug the C# code of a running IL2CPP C++ app on a HoloLens. Debugging using breakpoints is a bit tricky when you are dealing with rapidly firing events - stopping in the debugger might actually have some influence on the order in which events play out. So I resorted to the good old "Console.WriteLine style" of debugging, and added a floating text in the app that shows what's going on.

The ClickableThingy behaviour I made in the previous post then looks like this:

using Microsoft.MixedReality.Toolkit.Input;
using System;
using TMPro;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    [SerializeField]
    private TextMeshPro _debugText;

    public void OnInputUp(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.white;
        AddDebugText("up", eventData);
    }

    public void OnInputDown(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.red;
        AddDebugText("down", eventData);
    }

    private void AddDebugText( string eventPrefix, InputEventData eventData)
    {
        if( _debugText == null)
        {
            return;
        }
        var description = eventData.MixedRealityInputAction.Description;
        _debugText.text += 
            $"{eventPrefix} {gameObject.name} : {description}{Environment.NewLine}";
    }
}



Now in the HoloLens 1, things are exactly like you expect. Air tapping the sphere activates Up and Down events exactly once for every tap (because the Cube gets every tap, even when you don't gaze at it - see my previous post for an explanation)

When you run the same code in the editor, though, you get a different result:

Tap versus Grip - and CustomPropertyDrawers

The interesting thing is, when you 'air tap' in the editor (using the space bar and the left mouse button), thumb and index finger of the simulated hand come together. This, now, is recognized as a tap followed by a grip, apparently.

So we need to filter the events coming in through OnInputUp and OnInputDown to respond only to the actual events we want. This is where things get a little bit unusual - there is no enumeration of sorts that you can compare your actual event against. The available events are all in the configuration, so they are dynamically created.

The way to do some actual filtering is to add a property of type MixedRealityInputAction to your behaviour (I used _desiredInputAction). The MRTK2 then automatically creates a drop down with possible events to select from:

How does this magic work? Well, the MRTK2 contains a CustomPropertyDrawer called InputActionPropertyDrawer that automatically creates this drop down whenever you add a property of type MixedRealityInputAction to your behaviour. The values in this list are pulled from the configuration. This fits with the idea of the MRTK2 that everything must be configurable ad infinitum. Which is cool but sometimes it makes things confusing.
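In code, triggering that drop down is nothing more than declaring a serialized field of this type - a minimal sketch of how I assume it looks in the demo project's branch:

[SerializeField]
private MixedRealityInputAction _desiredInputAction;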

Anyway, you select the event you want to test for in the UI, in this case "Select":

And then, in the event methods, you have to check if the event matches your desired event:

if (eventData.MixedRealityInputAction != _desiredInputAction)
{
    return;
}

And then everything works as you expect: only the select event results in an action by the app.

How about HoloLens 2?

I could only test this in the emulator. The odd thing is, even without the check on the input action, only the select action was fired, even when I pinched the hand using the control panel:

So I have no idea if this is actually necessary on a real live HoloLens 2, but my friends and fellow MVPs Stephen Hodgson and Simon 'Darkside' Jackson have both mentioned this kind of event type check as being necessary in a few online conversations (although at the time I did not understand why). So I suppose it is :)

Conclusion

Common wisdom has it that the best thing about teaching is that you learn a lot yourself. This post is excellent proof of that wisdom. If you think this here old MVP is the end-all and know-all of this kind of stuff, think again. I knew of custom editors, but I literally just learned the concept of CustomPropertyDrawer while I was writing this post. I had no idea it existed, but I found it because I wanted to know how the heck the editor got all the possible MixedRealityInputActions from the configuration and showed them in such a neat list. Took me quite some searching, actually - which is logical, if you don't know what exactly you are looking for ;).

I hope this benefits you as well. Demo project here (branch TapCloseLook).

22 May 2019

Migrating to MRTK2 - IInputClickHandler and SetGlobalListener are gone. How do we tap now?

Intro

Making something 'clickable' (or actually more 'air tappable') was pretty easy in the Mixed Reality Toolkit 1. You just added the IInputClickHandler interface like this:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class ClickableThingy: MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Do something
    }
}

You dragged this behaviour on top of any game object you wanted to act on being air tapped, and OnInputClicked was activated as soon as you air tapped it. But IInputClickHandler no longer exists in MRTK2. How does that work now?

Tap – just another interface

To support the air tap in MRTK2, it's simply a matter of switching out one interface for another:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingy : MonoBehaviour, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        //Do something else
    }

    public void OnInputDown(InputEventData eventData)
    {
        //Do something
    }
}

I don't have a HoloLens 2, but if you put whatever was in OnInputClicked in OnInputDown, it is executed on a HoloLens 1 when you do an air tap while the object is selected by the gaze cursor. So I guess that's a safe bet if you want to make something that runs on both HoloLens 1 and 2.

‘Global tap’ – add a base class

In the MRTK 1 days, when you wanted to do a ‘global tap’, you could simply add a SetGlobalListener behaviour to the game object that contained your IInputClickHandler implementing behaviour:

Adding this behaviour meant that any air tap would be routed to this IInputClickHandler object - even without the gaze cursor touching the game object, or touching anything at all, for that matter. This could be very useful in situations where you, for instance, were placing objects on the spatial map and some gesture was needed to stop the movement. Or some general confirmation gesture in a situation where some kind of UI was not feasible because it would get in the way. But the SetGlobalListener behaviour is gone as well, so how do we get that behavior now?

Well, basically you make your ClickableThingy not only implement IMixedRealityInputHandler, but also be a child class of BaseInputHandler.

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        // Do something else
    }

    public void OnInputDown(InputEventData eventData)
    {
        // Do something
    }
}

This has a property isFocusRequired that you can set to false in the editor:

And then your ClickableThingy will get every tap. Smart people will notice it makes sense to always make a child class of BaseInputHandler, as the IsFocusRequired property defaults to true – so the default behavior of ClickableThingyGlobal is to act exactly the same as ClickableThingy, but you can configure its behavior in the editor, which makes your behaviour applicable to more situations. Whatever you can make configurable saves code. So I'd always go for a BaseInputHandler for anything that handles a tap.

Proof of the pudding

This is exactly what the demo project shows: a cube that responds to a tap regardless of whether there is a gaze or hand cursor on it, and a sphere that only responds to a tap when there is a hand or gaze cursor on it. Both use the ClickableThingyGlobal: the cube has the IsFocusRequired check box unselected, on the sphere it is selected. To this end I have adapted the ClickableThingyGlobal to actually do something usable:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class ClickableThingyGlobal : BaseInputHandler, IMixedRealityInputHandler
{
    public void OnInputUp(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.white;
    }

    public void OnInputDown(InputEventData eventData)
    {
        GetComponent<MeshRenderer>().material.color = Color.red;
    }
}

or at least something visible, which is to change the color of the elements from white to red on a tap (and back again).

On a HoloLens 1 it looks like this.

The cube will always flash red, the sphere only when there is some cursor pointing to it. In the HoloLens 2 emulator it looks like this:

The fun thing now is that you can act on both InputUp and InputDown, which I use to revert the color setting. To mimic the behavior of the old OnInputClicked, adding code in OnInputDown and leaving OnInputUp empty is sufficient, I feel.

Conclusion

Yet another piece of moved cheese, although not dramatically so. The demo code is very limited, but can still be found here. I hope that documenting how I find my way around the Mixed Reality Toolkit 2 helps you. If you have questions about specific pieces of your HoloLens cheese having been moved and you can't find them, feel free to ask me. In any case I intend to write lots more of these posts.

15 May 2019

Migrating to MRTK2 - MS HRTF Spatializer missing (and how to get it back)

Intro

One of the many awesome (although sadly underutilized) capabilities of HoloLens is Spatial Audio. With just a few small speakers and some very nifty algorithms it allows you to attach audio to moving Holograms that sounds as if it is coming from the Hologram. Microsoft has applied this with such careful precision that you can actually hear Holograms moving above and behind you, which greatly enhances the immersive experience in a Mixed Reality environment. It also has some very practical uses - for instance, alerting the user that something interesting is happening outside of their field of vision - and the audio also provides a clue as to where the user is supposed to look.

Upgrade to MRKT2 ... where is my Spatializer?

In the process of upgrading AMS HoloATC to Mixed Reality Toolkit 2 I noticed something odd. I tried - in the Unity editor - to click an airplane, which should then start to emit a pinging sound. Instead, I saw this error pop up in the editor:

"Audio source failed to initialize audio spatializer. An audio spatializer is specified in the audio project settings, but the associated plugin was not found or initialized properly. Please make sure that the selected spatializer is compatible with the target."

Then I looked into the project's audio settings (Edit/Project Settings/Audio) and saw that the Spatializer Plugin field was set to "None" - and that the MS HRTF Spatializer (that I normally expect to be in the drop down) was not even available!

Now what?

The smoking - or missing - gun

The solution is rather simple. If you look in the Mixed Reality Toolkit 2 sample project, you will notice the MS HRTF Spatializer is both available and selected. So what is missing?

Look at the Packages node in your Assets. It's all the way at the bottom. You will probably see this:

But what you are supposed to see is this:

See what's missing? Apparently the spatializer has been moved into a Unity Package. When you install the Mixed Reality Toolkit 2 and click "Mixed Reality Toolkit/Add to Scene and configure" it is supposed to add this package automatically (at least I think it is) - but for some reason, this does not always happen.

Use the Force Luke - that is, the Unity Package Manager

Fortunately, it's easy to fix. In the Unity editor, click Window/Package Manager. This will open the Package Manager window. Initially it will only show a few entries, but then, near the bottom, you will see "Windows Mixed Reality" appear. Hit the "Install" button top right. And when it's done, the Windows Mixed Reality entry will appear in the Packages node.

And now, if you go to Edit/Project Settings/Audio, you will see that the MS HRTF Spatializer has appeared again. If this is a migrated project and you have not messed with the audio settings, it will probably be selected automatically again.

Conclusion

No code this time, as there is little to code. I do need to add a little word of warning here - apparently these packages are defined in YourProject/Packages/manifest.json. Make sure this gets added to your repo and checked in as well.
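For reference, the relevant entry in Packages/manifest.json looks something like this - note that both the exact package name and the version number shown here are assumptions on my part of what the Package Manager picks on Unity 2018.4; treat them as illustrative and let the Package Manager decide in your own project:

{
  "dependencies": {
    "com.unity.xr.windowsmr.metro": "1.0.13"
  }
}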

10 May 2019

Migrating to MRTK2–NewtonSoft.JSON (aka JSON.Net) is gone

Intro

In ye olde days, if you set up a project using the Mixed Reality Toolkit 1, NewtonSoft.JSON (aka JSON.Net) was automatically included. This was because part of the MRTK1 had a dependency on it – something related to the gLTF stuff used it. This is (apparently) no longer the case. So if you had a piece of code that previously used something like this:

using System;
using Newtonsoft.Json;
using TMPro;
using UnityEngine;

public class DeserializeJson : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;

    void Start()
    {
        var jsonstring = @"
{
   ""Property1"" : ""Hello"",
   ""Property2"" : ""Folks""
}";
        var deserializedObject = JsonConvert.DeserializeObject<DemoJson>(jsonstring);

        _text.text = string.Concat(deserializedObject.Property1,
            Environment.NewLine, deserializedObject.Property2);
    }
}
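The DemoJson class is not shown in the snippet; it is presumably just a plain class with two string properties, something like:

public class DemoJson
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }
}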

It will no longer compile when you use the MRTK2. You will need to get JSON.Net from elsewhere. There are two ways to solve this: the right way and the wrong way.

The wrong way

The wrong way, which I was actually advised to take, is to get a copy of the MRTK1 and drag the JSON.Net module from there into your project. It's under HoloToolkit\Utilities\Scripts\GLTF\Plugins\JsonNet. And it will appear to work, too. In the editor. And as long as you use the .NET scripting backend. Unity has announced, though, that the .NET backend will disappear – you will need to use IL2CPP soon. And when you do so, you will notice your app will mysteriously fail to deserialize JSON. If you run the C++ app in debug mode from Visual Studio you will see something cryptic like this:

The reason why is not easy to find. If you dig deeper, you will see it complaining about trying to use Reflection.Emit, and this apparently is not allowed in the C++ world. Or not in the way it's done here. Whatever.

The right way

Fortunately there is another way - and a surprising one to boot. There is a free JSON.Net package in the Unity Asset Store, and it seems to do the trick for me – I can compile the C++ app, deploy it on the HoloLens 2 emulator and it actually parses JSON.

QED:

But will this work on a HoloLens 2?

The fun thing is of course that the HoloLens 2 has an ARM processor, so the only way to test if this really works is to run it on a HoloLens 2. Unlike a few very lucky individuals, I don't have access to the device. But I do have something else - an ARM-based PC that I was asked to evaluate in 2018. I compiled for ARM, made a deployment package, powered up the ARM PC and wouldn't you know it...

So. I think we can be reasonably sure this will work on a HoloLens 2 as well.

Update - this has been verified.

Conclusion

I don't know whether all the arcane and intricate things JSON.Net supports are supported by this package, but it seems to do the trick as far as my simple needs are concerned. I guess you should switch to this package to prepare for HoloLens 2.

Code as usual on GitHub:

And yes, the master branch is still empty, but I intend to use that for demonstrating a different issue.

06 May 2019

Migrating to MRTK2 - Mixed Reality Toolkit Standard Shader 'breaks'

Intro

At this moment I am trying to learn as much as possible about the new Mixed Reality Toolkit 2, to be ready for HoloLens 2 when it comes. I opted for a rather brutal cold turkey learning approach: I took my existing AMS HoloATC app, ripped out ye goode olde HoloToolkit, and replaced it by the new MRTK2 - fresh from GitHub. Not surprisingly, this breaks a lot. I am not sure if this is the intended way of migrating - it's like renovating the house by starting with bulldozing a couple of walls away. But this is the way I chose to do it, as it forces me to adhere to the new style and learn how stuff works, without compromises. It also makes very clear to me where things are going to break when I do this to customer apps.

So I am starting a series of short blog posts that basically documents the bumps in the road as I encounter them, as well as how I swerved around them or solved them. I hope other people will benefit from this, especially as I will be showing a lot of moved cheese. And speaking of...

Help! My Standard Shader is broken!

So you had this nice Mixed Reality app that showed these awesome holograms:

and then you decided to upgrade to the Mixed Reality Toolkit 2

and you did not expect to see this. This is typically the color Unity shows when a material is missing or something in the shader is thoroughly broken. And indeed, if you look at the materials:

something indeed is broken.

How to fix this

There is good news, bad news, and slightly better news.

  • The good news - it's easy to fix.
  • The bad news is - you have to do this for every material in your apps that used the 'old' HTK Standard shader
  • The slightly better news - you can do this for multiple materials in one go, provided they are all in one folder, or you do something nifty with search (or with a small editor script - see the sketch below this list)
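If your materials are scattered all over the project, a small editor utility can do the reassignment in bulk. A minimal sketch, assuming the new shader can be found under the name "Mixed Reality Toolkit/Standard" (check the shader dropdown of a fixed material for the exact name in your version):

#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class MrtkShaderReassigner
{
    [MenuItem("Tools/Reassign MRTK Standard Shader")]
    public static void Reassign()
    {
        var shader = Shader.Find("Mixed Reality Toolkit/Standard");
        // Reassign the shader on every material currently selected in the Project window
        foreach (var material in Selection.GetFiltered<Material>(SelectionMode.Assets))
        {
            material.shader = shader;
            EditorUtility.SetDirty(material);
        }
        AssetDatabase.SaveAssets();
    }
}
#endif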

So, in your assets select your materials:

Then in the inspector select the Mixed Reality Toolkit Standard Shader (again):

And boom. Everything looks like it should.

Or nearly so, because although it carries the same name, it's actually a different shader. Stuff might actually look a wee bit different. In my sample app, especially the blue seems to look a bit different.

So what happened?

If you look at what Git marks as changed, only the three materials themselves are marked as changed:

and if you look at a diff, you will see the referenced file GUID for the shader is changed. So indeed, although it carries the same name (Mixed Reality Toolkit Standard), as far as Unity is concerned it's a different shader.

(you might want to click on the picture to be able to actually read this).

As you scroll down through the diff, you will see lots of additions too, so this is not only a different shader ID, it's actually a different or new shader as well. Why they deliberately chose to break the shader ID - beats me. Maybe to make upgrading from one shader to another possible, or to have both the old and the new one work simultaneously in one project, making upgrading easier. But since they have the same name, this might also cause confusion. Whatever - this is what causes the shader to 'break' at upgrade, and now you know how to fix it, too.

Conclusion

I hope to have eliminated one source of confusion today, and I wish you a lot of fun watching the //BUILD 2019 keynote in a few hours.

You can find a demo project here.

  • Branch "master" shows the original project with HoloToolkit
  • Branch "broken" shows the project upgraded to MRTK2 - with broken shaders
  • Branch "fixed" shows the fixed project