12 February 2020

HoloLens 2 - it's the interaction, stupid!

Intro

Tuesday, February 7, 2020 marked an important occasion for me. A HoloLens 2 arrived at my door. For the first time since March 2019, when I got to try a HoloLens 2 during the MVP Summit for a few minutes, I actually had my hands on a device. And what's more - I got a month to play, test and develop with it, courtesy of fellow MVP and Regional Director Philipp Bauknecht of MediaLesson GmbH, a real community hero, who has graciously provided me with this learning opportunity. I hope I will someday be able to repay him this enormous favor.

Just having this device around and being able to test and develop with it quite changed my views of it, of what's actually important - and of what makes it a game changer.

Display

First of all, I am going to bring your hopes down a little. To an extent, HoloLens 2 suffers a bit from what I would like to coin as "the Apollo 12 effect". The whole world followed Neil Armstrong, Buzz Aldrin and Michael Collins to the Moon and was glued to a very bad black & white screen while the first two men took their first steps on the Moon. But a lot fewer people watched Apollo 12. Successive flights got even less attention (bar Apollo 13, but that was not because they landed - it was because they almost died). People had literally seen this before and were - I kid you not - complaining about footage of men on the Moon eating up precious TV time from the football games. The flights planned after Apollo 17 were cancelled. People are extremely good at accepting 'magic' and then getting bored with it.

As far as display goes, HoloLens 2 shows you virtual objects in 3D space that can interact with reality. This, my friends, is exactly what HoloLens 1 did.

It does this a lot faster, the view is brighter, the holograms are a lot more stable, and the thing almost everyone harped on - the field of view - has been considerably increased. I can almost imagine Alex Kipman shouting "we gave you bloody magic and all you kept telling me was the view was not big enough - are you happy now???"

There are other things: the device is a lot more ergonomic, and it feels lighter although it actually isn't by much - it's just better balanced. Donning it is easy as cake, taking it off as well, and charging via USB-C is a godsend - no more fiddling with MicroUSB on a wobbly cable end. I have seen more than one HoloLens 1 with a damaged charging port.

That's all very fine and welcome. But that's not what I mean by game changing. To stay in space terminology: HoloLens 1 was like we suddenly had a fusion rocket. It was awesome, but parts of it were messy.

HoloLens 2 has a warp drive.

Interaction, interaction, interaction

Everyone who has ever used HoloLens 1 - or better still, tried instructing a newbie user to use one - knows the challenge. You can select something by pointing your head at a Hologram, then performing an air tap. Just tap your finger and thumb together. Easy as cake. And yet I have witnessed people who for some reason could not perform this simple task successfully. Either they did not point the cursor correctly, or they made gestures that were almost but not quite an air tap, made it too slow or too fast, contorted their hands in a way that apparently confused the device, or started to make up gestures - which of course did not work at all. Whatever. Most people got it, but between 10 and 20% just never could get it to work reliably, if at all. The HoloLens 1 came with a little clicker for those people, apparently a last-minute addition - that almost no-one ever used. It either lost its charge at an inconvenient time, or (in most cases) got lost altogether, it being a small device that was easily forgotten or dropped somewhere.

HoloLens 2 does not come with a clicker, and that's for a reason. If you make a gesture that even remotely resembles an air tap, it registers it as such - with such ferocity and accuracy that if you have a large contact surface you might even get some inadvertent air taps in (I will have to look into that for my app Walk the World, for instance). 

In addition, what everyone first saw demoed by the amazing Julia Schwarz - the ability to just touch, grab and move Holograms - works amazingly well. To such an extent that you can push buttons like they are real, grab, move and rotate things like they are real... everything with amazing accuracy. You can even have your hand visualized, and then it looks like you are wearing a computer-generated glove - it follows every little movement. The resulting interaction model is very natural. So natural that you actually expect haptic feedback at first when you push a 'button'. Maybe something for HoloLens 3 ;).

There are a few things you might want to explain when you instruct someone to use the device for the first time, just to speed things up - like the fact that the start button is on your wrist. No more bloom gestures - the Italians will appreciate that ;). You might want to explain how an air tap works, but it's likely people will find that out by themselves as it is so easy now. Also, the device goes out of its way to explain itself on first startup. The fact that it can not only track your hand but also recognize all kinds of hand postures and gestures allows for much more detailed control, and my personal favorite is having menus pop up when you hold one hand in a certain position. These hand palm menus are very easy to make, using no code at all, just stuff that's included in the MRTK2 out of the box.

But wait, there's more

Voice commands, remember those? The thing everyone used like crazy and then quickly came back from, as it did not always work in noisy environments, especially with a lot of talking going on around you. And making an odd gesture in empty space and looking weird is one thing, but repeatedly shouting at a device makes you feel very awkward indeed. Whatever they did to it, it's now way more accurate and confident at recognizing speech - even in a very loud room with people talking. Speech control is everywhere in HoloLens 2, and very easy to use reliably.

And then there's eye tracking. Remember how you had to move your whole head to point the gaze cursor? It now tracks your eyes. It knows what you are looking at. I use this in AMS HoloATC to make an image of the actual airplane pop up when you look at the model. There are four (or five, depending on what you include) events that you can easily track. I also learned that on a real-life device I make that happen way too fast and too nervously. Now that I have a real device, I will be able to fix this in the near future.
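
To give an idea of what I mean: a minimal sketch of such a look-at responder could look like the code below. The class and method names are mine, not the ones from the actual app, and you would hook the public methods up to the MRTK2 eye tracking events (for instance on an EyeTrackingTarget component) in the editor. The small delay is one way to keep the popup from reacting too nervously.

using System.Collections;
using UnityEngine;

// Sketch only: shows an info image after the user has looked at this object
// for a short while, and hides it again when they look away.
public class LookAtImagePopup : MonoBehaviour
{
    [SerializeField]
    private GameObject _infoImage;   // the image to pop up (hypothetical field)

    [SerializeField]
    private float _showDelay = 0.5f; // seconds the user has to keep looking

    private Coroutine _showRoutine;

    // Hook this up to the 'look at started' event in the editor
    public void OnLookAtStart()
    {
        _showRoutine = StartCoroutine(ShowAfterDelay());
    }

    // Hook this up to the 'look away' event in the editor
    public void OnLookAway()
    {
        if (_showRoutine != null)
        {
            StopCoroutine(_showRoutine);
            _showRoutine = null;
        }
        _infoImage.SetActive(false);
    }

    private IEnumerator ShowAfterDelay()
    {
        yield return new WaitForSeconds(_showDelay);
        _infoImage.SetActive(true);
    }
}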

Eye tracking also has some extra benefits - first of all, it allows the device to do a Windows Hello login using iris recognition. Second, calibrating is a lot easier and faster. No longer do you have to first close one eye, then very, very precisely move your finger into the right slot a couple of times, and repeat that for the other eye - you now simply have to follow a few holograms with your eyes as they move through your view. And you really should do that - Microsoft pushed the envelope a lot further when it comes to display technology, so if you don't calibrate properly, there's a lot more chance of getting a fuzzy view. Fortunately the device has a setting that automatically starts the calibration routine when it detects the user has changed (which it presumably does using the iris scan).

In conclusion

HoloLens 2 is an amazing device, with amazing display technology - but it's the interaction model that makes it really special. This is what takes it over the top, makes it natural, simple to learn and easy to use. The hand and eye tracking removes the barrier of artificial gestures and makes wandering around and interacting with Holograms a lot easier. This will make using the device in business settings - especially industrial and manufacturing environments - much simpler.

I love living in the future :)

22 January 2020

Mixed Reality apps failing the WACK - revisited

Intro

Last September I wrote about the various hoops you had to jump through to get a Mixed Reality app to appear in the Store and not have it fail the Windows App Certification Kit (WACK). It's time for a little update, since part of what I wrote is no longer necessary, and some things have changed subtly.

The good news

I wrote how you manually had to mess with the Package.appxmanifest file, particularly how you had to remove the DirectX 10 dependency to keep the app downloadable for HoloLens 1 devices. Provided you have been diligently installing the service updates that are issued for this device, by now your HoloLens 1 should have been updated to no longer reject apps that have this requirement. So there is no need to take this step anymore.

The bad news

The trick I described for making sure some unsupported DLLs were not included does not work anymore. We have to resort to more drastic measures.

The problem

To recap: if you generate a C++ solution from the recommended Unity version (2018.4.x, LTS branch), you get the following errors in the WACK:

  • HolographicAppRemoting.dll has failed the AppContainerCheck check.
  • PerceptionDevice.dll has failed the AppContainerCheck check.
  • UnityRemotingWMR.dll has failed the AppContainerCheck check.
  • The Supported API test will list 10 errors concerning UnityRemotingWMR calling unsupported APIs
  • The Debug configuration test will tell you UnityRemotingWMR is only built in debug mode
  • And if you try to build for x86 or ARM, the Package sanity test will tell you HolographicAppRemoting.dll, PerceptionDevice.dll and UnityRemotingWMR.dll are only available for x64.

I suggested editing the “Unity Data.vcxitems” file that is inside your store project and changing the contents of the DeploymentContent tags for these files to false. That worked swell. Until, somewhere between Unity and Visual Studio version upgrades, some brilliant person thought it a good idea to recreate that file whenever you change the architecture. So when you change the architecture from "x86" to "ARM" - something that happens automatically while creating multi-platform packages - you are greeted with this:

My previous solution does not work anymore, as the three offending DLLs are automatically added back to the file, and thus the WACK fails again. Some more brute force is required.

The solution

If you expand the Il2CppOutputProject, you can simply see the offending DLLs. The solution is as simple as selecting and deleting them.

And then you can generate your packages, run the WACK again... and may get another failure:

This depends on the version of the WACK you have. As you can read here, the Store has been updated to ignore this error. In other words, if you are down to this error, you are good to go for submission and (successful) certification in the Windows Store.

Mind you - as soon as you regenerate the solution from Unity, the project file may or may not be overwritten and you will need to repeat this. So - always always run the WACK before you submit - but then again, that's what you should do anyway.

Conclusion

The ever-changing constellations of Unity and Visual Studio keep throwing us curveballs, but fortunately there has always been a way around them. So far. Happy submitting!

15 January 2020

Mixed Reality apps failing MS Store validation on "10.5.1 Personal information" - and how to fix it

Intro

After fellow MVP (and RD, although not fellow-) Philipp Bauknecht discovered a bug in my AMS HoloATC app while testing it on his HoloLens 2 (what lunatic runs HoloLenses with DE-DE settings anyway :P), I created a fix for it and submitted the fixed app to the Microsoft Store on Saturday January 12th. I assumed it would pass right through. It did not. Monday morning I got a failure message:

That was not what I hoped for, and certainly not what I expected.

Parsing the message

I will admit to being rather confused, because I do have a privacy policy link - in the Store. But apparently something has changed. If you want all the details, you can find them in the new App Developer Agreement, but the text in the failure message basically says it all:

"For in-product include the privacy policy URL under the settings section"

So I need to add a privacy policy in the app. Okay. But they expect it under the settings section. This is clearly written with a desktop app in mind. But I don't have a main screen, let alone a place to put a hamburger menu with a "Settings" entry in it.

Solution part 1

So I clearly could not obey the letter of the law, but I could try to act in its spirit. I ventured out to see if I could get a privacy policy into the app in a way that would be acceptable to the Store testers. All my apps have a "help" function that you can activate by saying "help" or "show help". I actually advertise that on start-up with a floating text that is briefly visible after the app starts.
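
For context: such a voice command is typically handled with an MRTK2 speech handler along these lines. This is just a sketch - the keywords themselves are defined in the Speech Commands profile, HelpCommandListener and the _helpScreen field are names I made up for this example, and it assumes an MRTK2 version recent enough to have CoreServices.

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical sketch: activates a help screen when the "help" or "show help"
// keywords (defined in the MRTK2 Speech Commands profile) are recognized.
public class HelpCommandListener : MonoBehaviour, IMixedRealitySpeechHandler
{
    [SerializeField]
    private GameObject _helpScreen; // assumed to be the root object of the help screen

    private void OnEnable()
    {
        // Register globally, so the command works without this object having focus
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
    }

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        var keyword = eventData.Command.Keyword.ToLowerInvariant();
        if (keyword == "help" || keyword == "show help")
        {
            _helpScreen.SetActive(true);
        }
    }
}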

This help screen looked like this:

And now it looks like this:

I actually added a text and a quad behind it, placed just in front of the quad that is the main backdrop.

And on that quad there is a very simple and crude behaviour:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;
using UnityEngine.WSA;

public class BrowserActivator : MonoBehaviour, IMixedRealityPointerHandler
{
    [SerializeField]
    private string _urlToOpen;

    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        // Open the configured URL (here: the privacy policy) in the default browser
        Launcher.LaunchUri(_urlToOpen, false);
    }

    // The remaining IMixedRealityPointerHandler members are required by the
    // interface, but not used here
    public void OnPointerDragged(MixedRealityPointerEventData eventData)
    {
    }

    public void OnPointerUp(MixedRealityPointerEventData eventData)
    {
    }

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
    }
}

that launches a browser window when you tap on the text. In the app it looks like this:

And if you tap the indicated text, the app will move to the background and you will see this:

Or whatever URL you put into _urlToOpen in the editor. I took the exact same URL as in the Store. I am pretty sure this won't win me any design awards - but that's not the goal here. Note: these images were created with a HoloLens 1, as I am still lacking a HoloLens 2 (if anyone from Microsoft reads this - yes, that's a hint ;) ).

Solution part 2

Since I have no settings section, I thought it prudent to tell the testers where they could find the link to the privacy policy. So under "Submission options" I explained the why, where and how like this:

"IMPORTANT NOTE: this app failed App Policies: 10.5.1 Personal Information. Specifically, it missed an in app privacy policy. I was suggested to add it under section "settings" but there is no sections "settings", in fact, there are not sections at all, since this is not running in a window.
I have added a privacy policy tappable text in the help screen. You can access that by saying "help" or "show help", after you have initially placed the airport. This opens a browser and shows the privacy policy.
"

Proof of the pudding

As you can see, following this procedure took me through Store validation successfully. In fact, it took me through it twice - as its Atlanta twin app ATL HoloATC got certified within hours of submitting with this addition.

Conclusion

To summarize:

  • Have some way in your app to popup the privacy policy web page.
  • Explain in Submission options to the testers where they can find it.

We can discuss at length how useful things like this are, whether and how this could be communicated better to developers, and whether there should be guidance before this is enforced for the niche market of Mixed Reality apps - which have neither a window nor a settings section. Whatever - this works, and while there is no official guidance on how to deal with privacy policies in apps like this, this article shows you a way forward. Presumably, for the testers it's also simply a matter of being mandated to tick off a box required by the legal department. And this is apparently an acceptable way of getting that box ticked. Job done.

No code sample this time - I assume everyone making MR apps will be able to re-create the two lines of actual code in the behaviour. And possibly come up with a more elegant solution. But that was not the point of this post - the point was getting into the Store without a legal blocker.

24 November 2019

Migrating to MRTK2 - Interaction with irregular or complex objects

Intro

Interaction with objects that are not simple shapes like cubes, spheres, capsules etc. poses some challenges. The Mixed Reality Toolkit 2 offers some great components, but they all require a top level collider. Now consider this helicopter:

It consists of a lot of small objects. By default it does not even have a collider. You cannot add a Near Interaction Touchable on top of the object because it simply cannot find a collider. Now you can generate colliders on import, but that makes the object kind of heavy in terms of required processing power, and hooking all those colliders up to their own Interaction Touchable is a lot of work.

Fortunately, there is a simpler way of doing this. Actually, there are two variants, but I am going to show the one I think works best (and is the most beautiful).

Adding a colliding 'catcher' to rule them all

First of all, we are going to add a surrounding object inside the model itself. I took a capsule, as this gives IMHO the most beautiful result.

The result is that the helicopter is now almost completely covered in what looks like a giant suppository, which is definitely not what you want.

So by fiddling around I created this material. Note: its actual color is fully transparent black - basically (0,0,0,0) - so that we can see the helicopter again.

But more importantly, it has a hover light override color of green with a light intensity of 0.4.

And now, if a cursor strikes the object, you get what I think is a rather pretty ghostly glow, indicating this is your focused object.
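
By the way, if you would rather generate such a 'catcher' from code than size it by hand in the editor, a rough sketch could look like this. This is not what I did for the helicopter - it simply fits a capsule around the combined renderer bounds of the model, and it assumes you assign the transparent hover material in the inspector.

using UnityEngine;

// Rough sketch: creates a capsule 'catcher' around the model at runtime.
// The primitive comes with a CapsuleCollider, which is what the MRTK2
// interaction scripts need. The sizing is only an approximate fit.
public class CatcherCreator : MonoBehaviour
{
    [SerializeField]
    private Material _catcherMaterial; // the transparent material with the hover light override

    private void Start()
    {
        var renderers = GetComponentsInChildren<Renderer>();
        if (renderers.Length == 0)
        {
            return;
        }

        // Combine the bounds of all child renderers
        var bounds = renderers[0].bounds;
        foreach (var childRenderer in renderers)
        {
            bounds.Encapsulate(childRenderer.bounds);
        }

        // A default capsule is 1 unit wide and 2 units high, hence the halved Y scale.
        // Rotate it as well if your model's long axis is not vertical.
        var capsule = GameObject.CreatePrimitive(PrimitiveType.Capsule);
        capsule.name = "SurroundingCapsule";
        capsule.transform.position = bounds.center;
        capsule.transform.localScale = 
            new Vector3(bounds.size.x, bounds.size.y / 2f, bounds.size.z);
        capsule.transform.SetParent(transform, true);
        capsule.GetComponent<Renderer>().sharedMaterial = _catcherMaterial;
    }
}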

Making it interactable

Making it actually respond to events is now pretty simple. Assuming your actual 'controller' behaviour needs to control the whole game object, it needs to sit on top - so it can do more than just handle interaction (like moving the helicopter, for instance - which it does not do now). It looks like this:

using System;
using TMPro;
using UnityEngine;

public class InteractionResponder : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;

    private int _timesClicked;

    private int _timesTouched;

    private int _timesFocus;

    public void Click()
    {
        _timesClicked++;
        UpdateText();
    }
    
    public void TouchStart()
    {
        _timesTouched++;
        UpdateText();
    }

    public void OnFocus()
    {
        _timesFocus++;
        UpdateText();
    }

    private void UpdateText()
    {
        _text.text = string.Format("Clicked: {0}{1}Touched: {2}{1}Focused: {3}", 
            _timesClicked, Environment.NewLine, _timesTouched, _timesFocus);
    }
}

Super simple: basically just three event response methods that can be called - and it displays the resulting text in a TextMeshPro object that also sits in the HologramCollection, just like the helicopter itself. As I said, the InteractionResponder behaviour will be sitting on the helicopter object itself:

Now we need to go back to the SurroundingCapsule and add an Interactable and a Near Interaction Touchable Volume script to it. The latter is apparently new, or something I missed: where an ordinary Near Interaction Touchable only takes a rectangular collider, the Volume script also takes a capsule:

Then you will need to select "Select" for Input Actions, drag the helicopter into the little box under OnClick like you usually do in this kind of event hookup, and select InteractionResponder.Click.

Then you click "Add Event", select "InteractableOnTouchReceiver" and hook the On Touch event up to the InteractionResponder.TouchStart method (we will ignore the On Touch End event in this sample).

In a similar fashion, you will add another event, select "InteractableOnFocusReceiver" and hook that to the InteractionResponder.OnFocus event.
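
As an aside: if you prefer wiring things up from code rather than in the inspector, the click part can also be hooked up from a script. A sketch only - ClickHookup is my own name, and the touch and focus receivers are still hooked up in the inspector here.

using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Sketch: hooks the Interactable's OnClick event to the InteractionResponder from code
public class ClickHookup : MonoBehaviour
{
    [SerializeField]
    private Interactable _interactable;      // the Interactable on the SurroundingCapsule

    [SerializeField]
    private InteractionResponder _responder; // the InteractionResponder on the helicopter

    private void Start()
    {
        _interactable.OnClick.AddListener(_responder.Click);
    }
}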

And if you have done everything right, and hit the play button in the editor, you will see this happening when you click, touch or focus:

Conclusion

With very little code and one extra component, you can give an irregular and complex object like a helicopter an easy to touch/focus/click target that also gives some subtle but very clear visual feedback about what is happening. And this is only a start - we could also have it respond to touch with some direct feedback, or show that it knows it's focused even without a cursor hitting it (eye gaze!). Life is going to get interesting and much more immersive once HoloLens 2 comes around!

Demo project can be found here.

18 November 2019

Migrating to MRTK2 - using extension services for dependency injection

Intro

Coming from business development, you might get a little shock coming into Unity - traditionally, game developers are much more focused on making the outside pretty than the inside. Things like dependency injection are kind of unheard of, or considered 'too heavy' for game development. But if you are still in the process of development, actually being able to access a consistent (mock) data service instead of the real live data service can be a big advantage, especially when that data service is rate limited or expensive.

The Mixed Reality Toolkit 2 offers a great feature for this: extension services. It's actually pretty easy to use, and I am going to show a simple sample. I wrote about this when it was in a very early alpha stage almost a year ago, but it has now reached a point where it's actually usable.

Setting the stage

Using Unity 2018.4.6f1, I created a simple project MKRT2DepInject using the 3D template, and imported the MRTK2 and TextMeshPro. For the latter I usually take the essential resources only. Then I added the MRTK2 to the SampleScene in the project. For the default profile, I usually take the DefaultHoloLens2ConfigurationProfile. Also, don't forget to set the platform to UWP (File/Build Settings).

Also - and this is important - import JSON.NET from the Unity Asset Store.

Extension services

A service requires an interface, an implementing class, optionally an inspector, a profile, and a default profile asset. Now the latter three may sound a bit abstract, but it actually boils down to this:

  • An inspector is something that can be used to show the runtime status of a service in the editor. It's basically a debugging tool. It's entirely optional and in most cases it's not necessary.
  • A profile is a class holding configuration data for a service. If you have been using the MRTK2 for a while, you have been using them all along - cloning profiles and changing settings.
  • A default service profile asset is basically a serialized version of a profile class.

This may seem like a lot of work, but there's actually a nice tool for generating the boilerplate for all of it - although I had to get in a few pull requests myself to get it to work as I assume it was intended ;)

Creating an extension service

Select Mixed Reality Toolkit/Utilities/Create Extension Service. This will bring up this UI:

Name the service "DataService". You will notice the "Service" suffix is mandatory. Choose "Services" for namespace. Then click the "Next" button. This will show you the next stage.

Now I like to organize my stuff a little, so I tend to put things in folders. The scripts go in a Scripts/Services folder, the profile in a Profiles folder. You can set this by dragging the folders from the assets. Notice also that I have disabled the inspector:

Hit next, and on the next screen click "Not now", because otherwise you will be editing the default profiles - effectively, you would be modifying the default settings of the MRTK2. You can do this only after you have cloned the proper profiles.

You will also notice that although you specified the default asset should be created in the Profiles folder, it is in fact created in the Services folder. Looks like I am going to need to make another pull request. Anyway, I moved the DefaultDataServiceProfile to Profiles and let it sit there.

Registering the service

First, we clone the top profile.

Then we disable the profiler, because that's annoyingly in the way when you want to demo something

Then we select the Extensions tab, and clone the "DefaultMixedRealityRegisteredServiceProvidersProfile" (the creators of the MRTK2 seem to have taken a liking to rather verbose names, as you might have noticed) to MyMixedRealityRegisteredServiceProvidersProfile

Now you can actually click the "+ Register a new Service Provider" button and register the service

Then you have to click the Configuration Profile dropdown, which unfortunately shows you all possible profiles, and pick the one you need - DefaultDataServiceProfile - which is fortunately at the top of the list.

The end result should look like this:

Now the configuration stuff is finally done, and we are going to add some code.

The data and the data set

My simple sample is going to read a JSON file from the web and show its contents in a text. Therefore we need a data file, and a class to deserialize it into.

The data file sits here, and the class into which it can be deserialized looks like this:

using Newtonsoft.Json;

namespace Json
{
    public class DemoData
    {
        [JsonProperty("firstName")]
        public string FirstName { get; set; }

        [JsonProperty("lastName")]
        public string LastName { get; set; }
    }
}

Configuration profile

So to make the configuration profile actually configurable, the DataServiceProfile class needs to be changed. We need to make a property to store a URL in. So we add a serializable field and a read-only property, like this:

using System;
using UnityEngine;
using Microsoft.MixedReality.Toolkit;

namespace Services
{
    [MixedRealityServiceProfile(typeof(IDataService))]
    [CreateAssetMenu(fileName = "DataServiceProfile", 
        menuName = "MixedRealityToolkit/DataService Configuration Profile")]
    public class DataServiceProfile : BaseMixedRealityProfile
    {
        [SerializeField]
        private string _dataUrl;

        public string DataUrl => _dataUrl;
    }
}

The additions are the serialized _dataUrl field and the read-only DataUrl property. If you go back to the inspector, you will see there is now a Data Url field added to the DataService profile.

So let's clone that default profile to SchaikwebProfile:

And enter for Data Url: https://www.schaikweb.net/demo/DemoData.json. Result:

You can already see how you can quickly change from one configuration profile to another. You could actually clone the SchaikwebProfile to another profile with different settings. Now it has only one property, but it could have many - and you can change from one set of settings to another just by selecting a different profile.

Implementing the actual service

The generated code for the service - a bit abbreviated - looks like this:

namespace Services
{
    [MixedRealityExtensionService(....
    public class DataService : BaseExtensionService, IDataService, 
      IMixedRealityExtensionService
    {
        private DataServiceProfile dataServiceProfile;

        public DataService(IMixedRealityServiceRegistrar registrar, ....) 
        {
            dataServiceProfile = (DataServiceProfile)profile;
        }

        public override void Initialize()
        {
            // Do service initialization here.
        }

        public override void Update()
        {
            // Do service updates here.
        }
    }
}

You can see the profile - the class holding the settings - being fed into the constructor. Now we don't need Initialize and Update in this simple service, so we delete those and add this:

public async Task<IList<DemoData>> GetNames()
{
    using (var request = new HttpRequestMessage(HttpMethod.Get, 
                                                dataServiceProfile.DataUrl))
    {
        using (var client = new HttpClient())
        {
            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
            var result = await response.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<IList<DemoData>>(result);
        }
    }
}

Notice the URL being fed in from the dataServiceProfile!

Of course, we need to add this method to the IDataService interface as well:

public interface IDataService : IMixedRealityExtensionService
{
    Task<IList<DemoData>> GetNames();
}

And now some action...

So I created this little MonoBehaviour that actually accesses and uses the service.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.MixedReality.Toolkit;
using Services;
using TMPro;
using UnityEngine;

public class NamesReader : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;

    private IDataService _dataService;
    void Start()
    {
        _dataService = MixedRealityToolkit.Instance.GetService<IDataService>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha3))
        {
            LoadNames();

        }
        if (Input.GetKeyDown(KeyCode.Alpha4))
        {
            _text.text = "";
        }
    }

    private async Task LoadNames()
    {
        var names = await _dataService.GetNames();
        _text.text = string.Join(Environment.NewLine,
            names.Select(p => $"{p.FirstName} {p.LastName}"));
    }
}

You can see how it simply gets a reference to the service in the Start method. If you run this in the editor and press "3", it will try to load the values from the service and show them in the TextMeshPro _text (pressing "4" clears the text again). The extremely spectacular result looks like this:

Basically a direct dump from the data file on my website:

[
    {
        "firstName": "Scott",
        "lastName": "Guthrie"
    },
    {
        "firstName": "Alex",
        "lastName": "Kipman"
    },
    {
        "firstName": "Scott",
        "lastName": "Hanselman"
    }
]

Mocking service access

Now let's assume, for the moment, this data service is extremely expensive, slow or otherwise limited in access. Or you need to test certain edge cases but the data service does not always give them when you need them. In other words, you want to make a fake service - a mock service. This, now, is very simple.

So let's build a mocking service:

[MixedRealityExtensionService(....
public class MockDataService : BaseExtensionService, 
                               IDataService
{
    public MockDataService(IMixedRealityServiceRegistrar registrar, ....
    {
    }

    public async Task<IList<DemoData>> GetNames()
    {
        var data = new List<DemoData>
        {
            new DemoData {FirstName = "Joost", LastName = "van Schaik"},
            new DemoData {FirstName = "John", LastName = "Doe"},
            new DemoData {FirstName = "Kermit", LastName = "the Frog"},
        };
        await Task.Yield();
        return data;
    }
}

So we implement the same interface, but this one does not take a DataServiceProfile configuration (although it perfectly well could if I implemented the constructor fully). And now a second implementation of the service appears in the dropdown:

So you can now quickly change a single service from a production implementation to a test implementation. The mock service will show this:

But what is even cooler is making a 'mock profile' of the registered service providers profile itself. For if you have, say, 20 services (and believe me, the number of services goes up pretty quickly), you can change from test to production by simply switching that one profile. So I cloned the MyMixedRealityRegisteredServiceProvidersProfile itself to MockMixedRealityRegisteredServiceProvidersProfile, and now, by simply switching profiles, you can change the whole extension service configuration with one simple dropdown.

Conclusion

Extension services are a really powerful feature of the MRTK2 that can be used for central access to data services - typically the stuff you would use Singletons for in ye olde HoloToolkit. But using service profiles also offers a quick and easy way to switch between real and mock implementations, and that brings an important part of enterprise level development into the traditionally - ahem - more chaotic Unity development environment.

Demo project can be found here.

17 November 2019

Migrating to MRTK2 - making a 'hand palm' menu

Intro

In all modesty, I think I pretty much nailed making simple 'user interfaces' for HoloLens 1 - simple dialogs and stuff. Clients told me 'even a moron can understand how to operate this', so I think I did all right. But stuff floating in thin air wasn't always ideal. And now with HoloLens 2, we can actually make stuff stick to hands, instead of just to the gaze. So I decided to check if I could make a menu that sticks to the palm of your hand. Turned out I could.

And it's ridiculously easy to boot.

Creating a hand palm menu

This was actually just built from the UX components in the MRTK2.

These are literally just two "PressableButtonHoloLens2" prefabs floating about 2 cm in front of a quad that has this material:

Then I added a Solver Handler to the HandMenu

I had to fiddle a bit with the settings, especially the additional offset - this determines where exactly the menu is going to appear in relation to the palm. How these parameters work exactly is a bit unclear to me, so I used the scientific method: I changed the numbers until things happened the way I liked ;)

And finally I added a Hand Constraint Palm Up with these settings

And this is basically all you have to do to have the menu appear.
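
For completeness: the same two components can also be added from code, although there is little reason to. A sketch only - the tracked hand, additional offset and activation settings are still configured in the inspector, exactly as shown above.

using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

// Sketch: adds the same solver components at runtime. All settings are left at
// their defaults here; they are much easier to tweak in the inspector.
public class HandMenuSetup : MonoBehaviour
{
    private void Awake()
    {
        // Solvers need a SolverHandler on the same object
        gameObject.AddComponent<SolverHandler>();
        gameObject.AddComponent<HandConstraintPalmUp>();
    }
}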

Some configuration settings

Although this works with the 'normal' simulated hand setting as well, it's best viewed with the 'flat' hand palm. To get this, start with:

  • The Mixed Reality Toolkit Profile (Cloned from DefaultHoloLens2ConfigurationProfile)
  • The Input System Profile
  • The Input Simulation Profile

Then you have to expand the "Input Simulation Service" section, scroll all the way down to "Hand Gesture Settings" and change the settings there as follows:

These are my default settings - but for this demo only the top setting is actually important (Default Hand Gesture set to "Flat").

Testing the menu

I think it's safe to say that most of the people reading this - including me - don't have a HoloLens 2. Fortunately you can test this quite easily in the editor. To get to the point shown in the first picture in this post, simply do the following:

  1. Start play mode
  2. Move the mouse cursor inside the game window
  3. Press the space bar and keep it down - the right hand should now appear
  4. While keeping the space bar pressed down, also press the left control key on your keyboard and keep it down
  5. With your other hand, slowly move the mouse to the left. The hand should now start to rotate instead of move
  6. When you have rotated the hand just past the 90° angle, the menu should appear.

Making it actually do something

You might have noticed the buttons don't do anything at this stage. So I created a little helper behaviour that shows "Yes" or "No" depending on which button you press. It's not very sophisticated:

  • It simply follows the actual hand menu around
  • It has public methods "ShowYes" and "ShowNo" that can be hooked up to the buttons to show something when they are pressed (see the sketch below).
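
The helper itself is not shown in this post, but a minimal sketch of what it might look like is below - the field names and the way it trails the menu are my assumptions, not necessarily how the demo project does it.

using TMPro;
using UnityEngine;

// Minimal sketch of a response helper: shows "Yes" or "No" and trails the hand menu
public class ResponseHelper : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;       // the text element used to show the response

    [SerializeField]
    private Transform _handMenu;     // the hand menu to follow

    [SerializeField]
    private Vector3 _offset = new Vector3(0, 0.1f, 0);

    public void ShowYes()
    {
        _text.text = "Yes";
    }

    public void ShowNo()
    {
        _text.text = "No";
    }

    private void LateUpdate()
    {
        // Simply trail the hand menu so the response text stays near it
        if (_handMenu != null)
        {
            transform.position = _handMenu.position + _offset;
        }
    }
}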

It sits in a "ResponseHelper" game object and the hookup to the button is therefore simple, as the YesButton shows:

The actual working is displayed in this little video:

The trick to making the right hand stay put while you operate the left hand is:

  • Press Y once the right hand is in the desired position and rotation
  • Release the space bar and Control key (the right hand stays where it is, you are just no longer controlling it)
  • Press the left shift key - the left hand should now appear
  • While keeping the left shift key pressed, you can move the left hand with the mouse. To move it forward and backward like in the movie, rotate the mouse wheel. This way you can actually make the index finger touch the buttons

Conclusion

Building a hand palm menu is really easy and basically requires no code. How this will work on a real HoloLens 2 - only time will tell. In the meantime you can find the demo project here.