24 November 2019

Migrating to MRTK2 - Interaction with irregular or complex objects

Intro

Interaction with objects that are not simple shapes like cubes, spheres, capsules, etc. poses some challenges. The Mixed Reality Toolkit 2 offers some great components, but they all require a top-level collider. Now consider this helicopter:

It consists of a lot of small objects. By default it does not even have a collider. You cannot add a Near Interaction Touchable on top of the object because it simply cannot find a collider. Now you can generate colliders on import, but that makes the object kind of heavy with regard to required processing power, and hooking all those colliders up to their own Interaction Touchable is a lot of work.

There is a simpler way of doing this, fortunately. Actually, there are two variants of it, but I am going to show the one I think works best (and is the most beautiful).

Adding a colliding 'catcher' to rule them all

First of all, we are going to add a surrounding object inside the model itself. I took a capsule, as this gives IMHO the most beautiful result.

The result is that the helicopter is now almost completely covered in what looks like a giant suppository, which is definitely not what you want.

So by fiddling around I created this material. Note: its actual color is fully transparent black - basically 0,0,0,0 - so that we can see the helicopter again.

But more importantly, it has a hover light override color of green with a light intensity of 0.4.

And now, if a cursor strikes the object, you get what I think is a rather pretty ghostly glow, indicating this is your focused object.

Making it interactable

Making it actually respond to events is now pretty simple. Assuming your actual 'controller' behaviour needs to control the whole game object, it needs to sit on the top-level object - so it can do more things than just handle interaction (like moving the helicopter, for instance - which it does not do now). It looks like this:

using System;
using TMPro;
using UnityEngine;

public class InteractionResponder : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;

    private int _timesClicked;

    private int _timesTouched;

    private int _timesFocus;

    public void Click()
    {
        _timesClicked++;
        UpdateText();
    }
    
    public void TouchStart()
    {
        _timesTouched++;
        UpdateText();
    }

    public void OnFocus()
    {
        _timesFocus++;
        UpdateText();
    }

    private void UpdateText()
    {
        _text.text = string.Format("Clicked: {0}{1}Touched: {2}{1}Focused: {3}", 
            _timesClicked, Environment.NewLine, _timesTouched, _timesFocus);
    }
}

Super simple: basically only three event response methods that can be called - and it tries to display the text in a TextMeshPro object, which also sits in the HologramCollection, just like the helicopter itself. As I said, the InteractionResponder behaviour will be sitting on the helicopter object itself:

Now we need to go back to the SurroundingCapsule and add an Interactable and a Near Interaction Touchable Volume script to it. The latter is apparently new, or something I missed: where an ordinary Near Interaction Touchable only takes a rectangular collider, the Volume script also takes a capsule:

Then you will need to select "Select" for Input Actions, drag the helicopter into the little box under OnClick like you usually do in this kind of event hookup, and select InteractionResponder.Click.

Then you click "Add Event", select "InteractableOnTouchReceiver" and hook the On Touch event up to the InteractionResponder.TouchStart method (we will ignore the On Touch End event in this sample).

In a similar fashion, you will add another event, select "InteractableOnFocusReceiver" and hook that to the InteractionResponder.OnFocus event.
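
If you prefer wiring things up from code instead of in the inspector, the click hookup could also look something like the sketch below. This is a hypothetical alternative, not what the demo project does - it assumes the Interactable sits on the SurroundingCapsule and the InteractionResponder on the helicopter root; the touch and focus receivers are easiest to configure in the inspector as described above.

using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class CapsuleEventWiring : MonoBehaviour
{
    // Drag the helicopter root (with the InteractionResponder) in here
    [SerializeField]
    private InteractionResponder _responder;

    private void Start()
    {
        // OnClick is a UnityEvent exposed by the MRTK2 Interactable,
        // so we can subscribe to it from code as well
        var interactable = GetComponent<Interactable>();
        interactable.OnClick.AddListener(_responder.Click);
    }
}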

And if you have done everything right, and hit the play button in the editor, you will see this happening when you click, touch or focus:

Conclusion

With very little code and one extra component you can give an irregular and complex object like a helicopter an easy-to-touch/focus/click target that also gives some subtle but very clear visual feedback about what is happening. And this is only a start - we could just as well have it respond to touch by showing some direct feedback, or show that it knows it's focused even without a cursor hitting it (eye gaze!). Life is going to get interesting and much more immersive once HoloLens 2 comes around!

Demo project can be found here.

18 November 2019

Migrating to MRTK2 - using extension services for dependency injection

Intro

Coming from business development, you might get a little shock coming into Unity - traditionally, game developers are much more focused on making the outside pretty than the inside. Things like dependency injection are kind of unheard of, or considered 'too heavy' for game development. But if you are still in the process of development, actually being able to access a consistent (mock) data service instead of the real live data service might be a big advantage, especially when that data service is rate-limited or expensive.

The Mixed Reality Toolkit 2 offers a great feature for that: extension services. It's actually pretty easy to use, and I am going to show a simple sample. I wrote about this when it was in a very early alpha stage almost a year ago, but it has now gotten to a point where it's actually usable.

Setting the stage

Using Unity 2018.4.6f1, I created a simple project MKRT2DepInject using the 3D template, and imported the MRTK2 and TextMeshPro. For the latter I usually take the essential resources only. Then I added the MRTK2 to the SampleScene in the project. For the default profile, I usually take the DefaultHololens2Profile. Also, don't forget to set the platform to UWP (File/Build settings).

Also - and this is important - import JSON.net from the Unity store.

Extension services

A service requires an interface, an implementing class, optionally an inspector, a profile, and a default profile asset. Now the latter three may sound a bit abstract, but it actually boils down to this:

  • An inspector is something that can be used to show the runtime status of a service in the editor. It's basically a debugging tool. It's entirely optional and in most cases it's not necessary.
  • A profile is a class holding configuration info for a class. If you have been using the MRTK2 for a while, you have been using them all along - cloning profiles and changing settings.
  • A default service profile asset is basically a serialized version of a profile class.

This may seem like a lot of work, but there's actually a nice tool for generating the boilerplate for all of that - although I had to get a few pull requests in myself to get it to work as I assume it was intended ;)

Creating an extension service

Select Mixed Reality Toolkit/Utilities/Create Extension Service. This will bring up this UI:

Name the service "DataService". You will notice the "Service" suffix is mandatory. Choose "Services" for namespace. Then click the "Next" button. This will show you the next stage.

Now I like to organize my stuff a little, so I tend to put things in folders. The scripts go in a Scripts/Services folder, the profile in a Profiles folder. You can set this by dragging the folder in from the assets. Notice also that I have disabled the inspector:

Hit next, and on the next screen click "Not now", because otherwise you will be editing the default profiles - effectively, you would be modifying the default settings of the MRTK2. You should only do this after you have cloned the proper profiles.

You will also notice that although you specified the default asset should be created in the Profiles folder, it is in fact created in the Services folder. Looks like I am going to need to make another pull request. Anyway, I moved the DefaultDataServiceProfile to the Profiles folder and let it sit there.

Registering the service

First, we clone the top profile.

Then we disable the profiler, because that's annoyingly in the way when you want to demo something.

Then we select the Extensions tab, and clone the "DefaultMixedRealityRegisteredServiceProvidersProfile" (the creators of the MRTK2 seem to have taken a liking to rather verbose names, as you might have noticed) to MyMixedRealityRegisteredServiceProvidersProfile.

Now you can actually click the "+ Register a new Service Provider" button and register the service.

Then you have to click the Configuration Profile dropdown, which unfortunately shows you all possible profiles. Pick the one you need - DefaultDataServiceProfile - which is fortunately at the top of the list.

The end result should look like this:

Now the configuration stuff is finally done, and we are going to add some code.

The data and the data set

My simple sample is going to read a JSON file from the web and show the contents in a text. Therefore we need a data file, and a class to deserialize it into.

The data file sits here, and the class into which it can be deserialized looks like this:

using Newtonsoft.Json;

namespace Json
{
    public class DemoData
    {
        [JsonProperty("firstName")]
        public string FirstName { get; set; }

        [JsonProperty("lastName")]
        public string LastName { get; set; }
    }
}

Configuration profile

To make the configuration profile actually configurable, the DataServiceProfile class needs to be changed. We need a property to store a URL in. So we add a serializable field and a read-only property, like this:

using System;
using UnityEngine;
using Microsoft.MixedReality.Toolkit;

namespace Services
{
    [MixedRealityServiceProfile(typeof(IDataService))]
    [CreateAssetMenu(fileName = "DataServiceProfile", 
        menuName = "MixedRealityToolkit/DataService Configuration Profile")]
    public class DataServiceProfile : BaseMixedRealityProfile
    {
        [SerializeField]
        private string _dataUrl;

        public string DataUrl => _dataUrl;
    }
}

The additions are the serialized _dataUrl field and the DataUrl property. If you go back to the inspector, you will see there is now a Data Url field added to the DataService profile.

So let's clone that default profile to SchaikwebProfile:

And enter for Data Url: https://www.schaikweb.net/demo/DemoData.json. Result:

You can already see how you can quickly change from one configuration profile to another. You could actually clone the SchaikwebProfile to another profile with different settings. Now it has only one property, but it could have a lot - and you can change from one set of settings to another just by selecting a new profile.

Implementing the actual service

The generated code for the service - a bit abbreviated - looks like this:

namespace Services
{
    [MixedRealityExtensionService(....
    public class DataService : BaseExtensionService, IDataService, 
      IMixedRealityExtensionService
    {
        private DataServiceProfile dataServiceProfile;

        public DataService(IMixedRealityServiceRegistrar registrar, ....) 
        {
            dataServiceProfile = (DataServiceProfile)profile;
        }

        public override void Initialize()
        {
            // Do service initialization here.
        }

        public override void Update()
        {
            // Do service updates here.
        }
    }
}

You can see the profile - the class holding the settings - is being fed into the constructor. Now we don't need Initialize and Update in this simple service, so we delete those and add this:

public async Task<IList<DemoData>> GetNames()
{
    using (var request = new HttpRequestMessage(HttpMethod.Get, 
                                                dataServiceProfile.DataUrl))
    {
        using (var client = new HttpClient())
        {
            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
            var result = await response.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<IList<DemoData>>(result);
        }
    }
}

Notice feeding in the URL from the dataServiceProfile!

Of course, we need to add this method to the IDataService interface as well:

public interface IDataService : IMixedRealityExtensionService
{
    Task<IList<DemoData>> GetNames();
}

And now some action...

So I created this little MonoBehaviour that actually accesses and uses the service.

public class NamesReader : MonoBehaviour
{
    [SerializeField]
    private TextMeshPro _text;

    private IDataService _dataService;
    void Start()
    {
        _dataService = MixedRealityToolkit.Instance.GetService<IDataService>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha3))
        {
            LoadNames();

        }
        if (Input.GetKeyDown(KeyCode.Alpha4))
        {
            _text.text = "";
        }
    }

    private async Task LoadNames()
    {
        var names = await _dataService.GetNames();
        _text.text = string.Join(Environment.NewLine,
            names.Select(p => $"{p.FirstName} {p.LastName}"));
    }
}

You can see how it simply gets a reference to the service in the Start method. If you run this in the editor and press "3", it will try to load the values from the service and show them in the TextMeshPro _text (pressing "4" clears it again). The extremely spectacular result looks like this:

Basically a direct dump from the data file on my website:

[
    {
        "firstName": "Scott",
        "lastName": "Guthrie"
    },
    {
        "firstName": "Alex",
        "lastName": "Kipman"
    },
    {
        "firstName": "Scott",
        "lastName": "Hanselman"
    }
]

Mocking service access

Now let's assume, for the moment, this data service is extremely expensive, slow or otherwise limited in access. Or you need to test certain edge cases but the data service does not always give them when you need them. In other words, you want to make a fake service - a mock service. This, now, is very simple.

So let's build a mocking service:

[MixedRealityExtensionService(....
public class MockDataService : BaseExtensionService, 
                               IDataService
{
    public MockDataService(IMixedRealityServiceRegistrar registrar, ....
    {
    }

    public async Task<IList<DemoData>> GetNames()
    {
        var data = new List<DemoData>
        {
            new DemoData {FirstName = "Joost", LastName = "van Schaik"},
            new DemoData {FirstName = "John", LastName = "Doe"},
            new DemoData {FirstName = "Kermit", LastName = "the Frog"},
        };
        await Task.Yield();
        return data;
    }
}

So we implement the same interface, but it does not take a DataServiceProfile configuration (although it perfectly well could if I implemented the constructor). And now a second implementation of the service appears in the dropdown:

So you can now quickly change a single service from a production implementation to a test implementation. The mock service will show this:

But what is even cooler is making a 'mock profile' of the registered service providers profile itself. If you have, say, 20 services (and believe me, the number of services goes up pretty quickly) you can change from test to production by simply switching that one profile. So I cloned MyMixedRealityRegisteredServiceProvidersProfile to MockMixedRealityRegisteredServiceProvidersProfile, and now, by simply switching profiles, you can change the whole extension service configuration with one simple dropdown.

Conclusion

Extension services are a really powerful feature of the MRTK2 that can be used for central access to data services - typically stuff you would use Singletons for in ye olde HoloToolkit. Using service profiles also offers a quick and easy way to switch between real and mock implementations, which brings an important part of enterprise-level development into the traditional - ahem - more chaotic Unity development environment.

Demo project can be found here.

17 November 2019

Migrating to MRTK2 - making a 'hand palm' menu

Intro

In all modesty, I think I pretty much nailed making simple 'user interfaces' for HoloLens 1 - simple dialogs and stuff. Clients told me 'even a moron can understand how to operate this', so I think I did all right. But stuff floating in thin air wasn't always ideal. Now, with HoloLens 2, we can actually stick stuff to hands, instead of just to the gaze. So I decided to check if I could make a menu that sticks to the palm of your hand. Turns out I could.

And it's ridiculously easy to boot.

Creating a hand palm menu

This was actually just built from the UX components in the MRTK2.

These are literally only two "PressableButtonHoloLens2" prefabs floating about 2 cm in front of a quad that has this material:

Then I added a Solver Handler to the HandMenu:

I had to fiddle a bit with the settings, especially the additional offset, which determines where exactly the menu is going to appear in relation to the palm. How exactly these parameters work is a bit unclear to me, so I used the scientific method: I changed the numbers until things happened the way I liked ;)

And finally I added a Hand Constraint Palm Up with these settings:

And this is basically all you have to do to have the menu appear.
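
For those who prefer code over inspector fiddling, a rough sketch of the same setup done from a script might look like the following. Note that the SolverHandler property names are assumptions based on the MRTK 2.1/2.2 solver API (they may differ slightly in other toolkit versions), and the offset value is just a placeholder for whatever you end up with after experimenting.

using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public class HandMenuSetup : MonoBehaviour
{
    private void Awake()
    {
        // Track the hand joints of either hand
        var solverHandler = gameObject.AddComponent<SolverHandler>();
        solverHandler.TrackedTargetType = TrackedObjectType.HandJoint;
        solverHandler.TrackedHandness = Handedness.Both;

        // The additional offset determines where the menu appears
        // relative to the palm - values found by experimenting
        solverHandler.AdditionalOffset = new Vector3(0.0f, 0.1f, 0.0f);

        // Only show the menu while the palm is facing the user;
        // the defaults are a reasonable starting point
        gameObject.AddComponent<HandConstraintPalmUp>();
    }
}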

Some configuration settings

Although this works with the 'normal' simulated hand setting as well, it's best viewed with the 'flat' hand palm. To get this, drill down through:

  • The Mixed Reality Toolkit Profile (Cloned from DefaultHoloLens2ConfigurationProfile)
  • The Input System Profile
  • The Input Simulation Profile

Then you have to expand the "Input Simulation Service" section, scroll all the way down to "Hand Gesture Settings" and change the settings there as follows:

These are my default settings - but for this demo only the top setting is actually important (Default Hand Gesture set to "Flat").

Testing the menu

I think it's safe to say that most of the people reading this - including me - don't have a HoloLens 2. Fortunately you can test this quite easily in the editor. To get to the point shown in the first picture in this post, simply do the following:

  1. Start play mode
  2. Move the mouse cursor inside the game window
  3. Press the Space bar and keep it down - the right hand should now appear
  4. While keeping the space bar pressed down, also press the left control key on your keyboard and keep it down
  5. With your other hand, slowly move the mouse to the left. The hand should now start to rotate instead of move
  6. When you have rotated the hand just past the 90° angle, the menu should appear.

Making it actually do something

You might have noticed the buttons don't do anything at this stage. So I created a little helper behaviour that shows "Yes" or "No" depending on which button you press. It's not very sophisticated:

  • It simply follows the actual hand menu around
  • It has public methods "ShowYes" and "ShowNo" that can be hooked up to the buttons to show something when they are pressed.

It sits in a "ResponseHelper" game object and the hookup to the button is therefore simple, as the YesButton shows:

The actual working is displayed in this little video:

The trick to make the right hand stay while you operate the left hand is:

  • Press Y once the right hand is in the desired position and rotation
  • Release space bar and control key (the right hand stays where it is, you are just no longer controlling it)
  • Press the left shift key - the left hand should now appear
  • While keeping the left shift key pressed, you can move the left hand with the mouse. To move it forward and backward like in the movie, rotate the mouse wheel. This way you can actually make the index finger touch the buttons.

Conclusion

Building a hand palm menu is really easy and basically requires no code. How this will work on a real HoloLens 2 - only time will tell. In the meantime you can find the demo project here.

26 September 2019

Migrating to MRTK2 - setting up and understanding Eye Tracking

Intro

One of the exciting features HoloLens 2 brings us is Eye Tracking. On HoloLens 1, you had to move your whole head to move the gaze cursor. And while that works well enough for a lot of applications - and it seems most people got used to it pretty quickly - Mother Nature has equipped us with roving eyes. HoloLens 2, when calibrated for Eye Tracking, can actually track what you are looking at, not merely where your head is pointed.

Although there is a nice demo in the Mixed Reality Toolkit 2, it took me a while to find out how all the events actually work and need to be hooked up to get it to work consistently. So I made a little demo that works like this:

Events and tracking them

The little blue globe is the target; it is equipped with an EyeTrackingTarget script from the MRTK2 that supports five events, which you can see going off as the red spheres turn green:

  • LS: On Look At Start
  • WL: While Looking At Target
  • LA: On Look Away
  • DW: On Dwell
  • S: On Selected

The EyeTrackingTarget is configured as follows:

In the scene, the whole thing showing the images (the five little red-turning-green globes with labels) is one prefab containing 5 little spheres, each with a label above it - each a prefab on its own. Every sphere has a "Single Shot Controller" script that turns its sphere green for 0.5 seconds when an event is called.

It's a super simple script; the interesting part is even shorter:

public void ShowActivated()
{
    _timeActivated = Time.time;
}

void Update()
{
    var desiredColor = Time.time - _timeActivated > _resetTime ? 
        _originalColor : _activatedColor;
    if (_material.color != desiredColor)
    {
        _material.color = desiredColor;
    }
}

When ShowActivated is called, the _timeActivated field is set to the current time. The Update loop then checks every frame whether it should set the color to red or green, depending on whether the latest call to ShowActivated was more than half a second (the _resetTime) ago.
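
For completeness, here is a sketch of what the whole behaviour could look like around that snippet. The fields, default values and the Start method are assumptions on my part - the actual SingleShotController in the demo project may differ in details.

using UnityEngine;

public class SingleShotController : MonoBehaviour
{
    [SerializeField]
    private Color _activatedColor = Color.green;

    // How long the sphere stays green after an event comes in
    [SerializeField]
    private float _resetTime = 0.5f;

    private Material _material;
    private Color _originalColor;
    private float _timeActivated = float.MinValue;

    private void Start()
    {
        // Use the instanced material so only this sphere changes color
        _material = GetComponent<Renderer>().material;
        _originalColor = _material.color;
    }

    public void ShowActivated()
    {
        _timeActivated = Time.time;
    }

    private void Update()
    {
        var desiredColor = Time.time - _timeActivated > _resetTime ?
            _originalColor : _activatedColor;
        if (_material.color != desiredColor)
        {
            _material.color = desiredColor;
        }
    }
}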

What happens when

The event names are pretty straightforward, and things happen more or less as you would expect, although there are a few subtleties.

What is actually happening:

  • When the user first looks at the eye-tracked object, "On Look At Start" is fired once
  • While the user keeps looking, "While Looking At Target" keeps being called. Thus, the green sphere stays green. The calls seem to start at the same instant - or nearly the same instant - as the previous event
  • As soon as the user stops looking at the sphere, "On Look Away" is called and "While Looking At Target" stops being called
  • "On Dwell" is called after the time defined in the "Dwell Time in Sec" slider has passed and the user is still looking at the object. I took the ridiculously user-unfriendly time of three seconds to make sure this event was easily distinguishable from the other events. Here's the thing though - it is called only once. That kind of confused me.
  • "On Selected" is called when the object is being looked at and you say "Select". This is one of the predefined commands in the default speech commands profile (DefaultMixedRealitySpeechCommandsProfile)

Setting up and configuring eye tracking in profiles

Coming from the default profile, you will need to configure at least two profiles, and better still three.

First, you will need to clone the Default Toolkit profile itself. First thing I do, while still in the early phases, is disabling the diagnostics system as I don't want that profiler in my face the whole time:

Next, you will have to clone the Input System Profile and add a "Windows Mixed Reality Eye Gaze Provider":

As you can see, this sits in namespace "Microsoft.MixedReality.Toolkit.WindowsMixedReality.Input"

And then finally, and optionally, if you want this to work in the editor too, you will have to configure the Input Simulation Service. You do that by cloning the default input simulation profile and checking the "Simulate Eye Position" checkbox:

One more thing: setting capabilities

You will now notice the Gaze cursor turning up in the editor, so you might think you are done. Well, almost. There's the small matter of capabilities. C++ or not, the result is still a UWP app, and Gaze Input is a capability that you need to ask consent for. Setting it, unfortunately, is not yet implemented in Unity. So after you have generated the C++ app, you will need to open it in Visual Studio, select the Package.appxmanifest file, and select the Gaze Input capability there.
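
For reference, what that checkbox boils down to is the gaze input device capability ending up in the manifest, roughly like this (other capabilities omitted):

<Capabilities>
  <!-- existing capabilities stay as they are -->
  <DeviceCapability Name="gazeInput" />
</Capabilities>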

 If you deploy the resulting solution to an emulator (or, if you are one of the lucky ones out there, an actual HoloLens 2) and it asks for your consent, you did it right.

Conclusion and final words

Setting up Eye Tracking is not that hard, but it takes a few steps. Mind you, the MRTK2 comes with a few profiles that make setting things up easier - I just wrote down the steps from scratch. The demo project shows this in all its glory ;) and allows you to play with it yourself without having to set it up. Notice there's hardly any code outside of the MRTK2 itself involved - there's only one custom script (my SingleShotController) and that's very simple.

By the way - in my own (so far single) HoloLens 2 app I only use the While Looking At Target event. This seems to be the most trustworthy. In previous iterations of the MRTK2 and/or the HoloLens emulator, the other events did not go off reliably enough (IMHO) to use them for real. Of course, this may all be different now, and most likely is completely different (better) on a real device. We are still waiting for that.

On a final note – leaving the eye cursor visible can be confusing and/or annoying, or so I have been told. So under normal circumstances it should be turned off – the object being looked at should give some indication that it notices being looked at. I had found a way to do this myself, but that's pretty complex, and just as I was about to blog about it, Julia Schwarz herself added a (better) sample to turn off pointers by code to the main MRTK2 repo.
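
I won't reproduce that sample here, but the gist of it - assuming the PointerUtils helper that later MRTK2 versions ship with - is something like this sketch; the sample in the MRTK2 repo is the authoritative version:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class GazeCursorSwitcher : MonoBehaviour
{
    private void Start()
    {
        // Turn the gaze pointer (and thus the eye gaze cursor) off
        // for the whole application
        PointerUtils.SetGazePointerBehavior(PointerBehavior.AlwaysOff);
    }
}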

05 September 2019

Migrating to MRTK2 - submitting a HoloLens 2 app to the Microsoft Store

Intro

On Monday September 2, version 4.0.19 of my first HoloLens Store app, AMS HoloATC, became available in the Microsoft Store. This version has been completely rebuilt using the Mixed Reality Toolkit 2, and includes some HoloLens 2–only functionality: you can actually touch the airplanes now, and using gaze tracking it will show you a picture of the actual aircraft, if available.

Hoops to jump

As you might recall from earlier posts, things have changed quite a bit when it comes to actually deploying apps on HoloLens. Now we need to compile and submit a Unity-generated C++ solution to the Store. Although the process looks very much like what we used to do for Unity apps running on the .NET backend, there are three things you might run into:

  1. Your WACK test will most likely fail
  2. If you have submitted your app as a bundle before, make sure you submit it as a bundle again. A Unity generated C++ solution does not have this as a default setting
  3. If you create (like me) an app that is supposed to run on Desktop (in immersive headsets), HoloLens 1 and HoloLens 2, you may find out your app cannot be downloaded by a HoloLens 1 anymore – or the HoloLens 2 emulator, for that matter

Fixing the WACK fail

Before you actually submit an app to the Store, you run the Windows App Certification Kit (WACK) test first, to prevent embarrassing, easy-to-prevent fails, right? (Right?) And if you do so, you will see it fail. It will spout quite a few errors at you.

  • The Windows security features test will complain about:
    • HolographicAppRemoting.dll has failed the AppContainerCheck check.
    • PerceptionDevice.dll has failed the AppContainerCheck check.
    • UnityRemotingWMR.dll has failed the AppContainerCheck check.
  • The Supported API test will list 10 errors concerning UnityRemotingWMR calling unsupported APIs
  • The Debug configuration test will tell you UnityRemotingWMR is only built in debug mode
  • And if you try to build for x86 or ARM, the Package sanity test will tell you HolographicAppRemoting.dll, PerceptionDevice.dll and UnityRemotingWMR.dll are only available for x64.

The solution is a bit weird, but can be found in this Unity forum post, and involves manually hacking the “Unity Data.vcxitems” file that is inside your store project. Open it in a text editor, and search for “HolographicAppRemoting”. This will show this piece of XML:

<None Include="$(MSBuildThisFileDirectory)HolographicAppRemoting.dll">
  <DeploymentContent>true</DeploymentContent>
  <ExcludeFromResourceIndex>true</ExcludeFromResourceIndex>
</None>

Now simply change the value “true” inside the DeploymentContent element to “false”:

<None Include="$(MSBuildThisFileDirectory)HolographicAppRemoting.dll">
  <DeploymentContent>false</DeploymentContent>
  <ExcludeFromResourceIndex>true</ExcludeFromResourceIndex>
</None>

Repeat this for PerceptionDevice and UnityRemotingWMR. Rebuild your app, generate packages again and presto, your WACK test will pass. That is literally all that’s needed to get rid of this multitude of errors.

Bundling your app

I don’t know exactly what changed, but all my HoloLens apps that I created with previous Unity versions were uploaded as bundles. To be honest, I never paid much attention to it. But the default setting of the generated C++ solution is this:

which generates an appx per platform (in my case three: one for x64, one for x86 and one for ARM for, in the same order, WMR immersive headsets, HoloLens 1 and HoloLens 2). If you try to upload those files as updates to an app that was previously submitted as a bundle you will be greeted with:

And this can simply be fixed by changing the setting “Generate app bundle” from “If needed” to “Always”.

Make your app (still) downloadable for HoloLens 1

To be honest, 4.0.19 was not the first HoloLens 2–enabled version I submitted. That was 4.0.17. It got certified – as one of the first, if not the very first, indie HoloLens 2 apps. I was very happy about this – for about 25 seconds. And then I got a very unpleasant surprise: I could not download it anymore on a HoloLens 1. Sure enough, you could find it in the Store, but the “Install” button was greyed out (well, light blue instead of dark blue, but in any case inoperable). Curiously enough, a HoloLens that already had it installed did get the updated version, though.

The reason for this behavior can be found in this post in the Unity forums. Basically, Unity dropped support for anything lower than DirectX 10, and this is now listed in the app’s store manifest. Unfortunately, when the Store on the HoloLens 1 (and the HoloLens 2 emulator, incidentally) checks for DirectX 10, the device apparently reports it doesn’t have that, and the Store consequently blocks the download.

Now I think this will be fixed shortly, but in the meantime here’s a workaround if you need to do a submission right now:

First, open the Package.appxmanifest file in a text editor. Find these lines:

<TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.16299.0" MaxVersionTested="10.0.18362.0" />
<TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.16299.0" MaxVersionTested="10.0.18362.0" /

Comment out the second line. Then proceed to build a bundle, but for x64 only.

Go back to Package.appxmanifest and re-activate the second line. Now find the StoreManifest.xml file – open it in a text editor. It should look like this:

<?xml version="1.0" encoding="utf-8"?>
<StoreManifest xmlns="http://schemas.microsoft.com/appx/2015/StoreManifest">
    <Dependencies>
        <DirectXDependency Name="D3D11_HWFL_10_0" />
    </Dependencies>
</StoreManifest>

Simply remove the line <DirectXDependency Name="D3D11_HWFL_10_0" />

Now build a package for x86 and ARM. I am not sure if this is essential, but I made sure the x86/ARM bundle had a release number one point higher than the x64 one.

Now proceed to upload both bundles into a submission and set check boxes as needed. In my store submissions it looks like this:

As you can see, version 4.0.17 still contains all platforms, but that is not necessary. Because 4.0.19 has a higher version number, it will be offered first to HoloLens 1 and 2.

Anyway, now your app, once certified, should be downloadable for all devices. On x64 the DirectX 10 check will still be in place; for other devices it’s disabled.

Conclusion

It’s early days for HoloLens 2 (I built my app without having direct access to it) but I think it’s pretty cool to have an app armed and ready for it. It takes some fiddling around with XML files to get it right, but I am sure things will improve soon and these workarounds won’t be necessary anymore.

Enjoy building the next generation Mixed Reality apps!