05 September 2018

Getting a quick Mixed Reality log file from someone else's computer or HoloLens

When the **** hits the fan

A quick tip this time, born from necessity. I guess we've all been here - you make a perfectly working Mixed Reality app for Immersive Headsets and/or the HoloLens, you test the hell out of it, you submit it to the store, you get approved, and you sit back and relax to wait for download numbers to go up and five star reviews to roll in...

... except they don't. Between the five stars a few one-star reviews appear. "Does not work". "Crashes immediately on my system". Especially on Immersive Headsets things like this can happen, for the simple reason that there are like 8 different headsets out there, connected to Finagle knows how many differently configured PCs. I had this happen to me last year, on systems with a GTX 1080 Ti-based graphics card and/or the Samsung Odyssey headset - those things were so bloody fast my app ran into timing issues. But I had neither, so I could not reproduce. What to do?

Getting the log file from a PC

Turns out you can actually get a Unity Player log file from someone's system pretty easily. If your user is using an Immersive Headset, simply ask them to go to the desktop of the PC the headset is connected to, open the File Explorer and go to the following folder:

%USERPROFILE%\AppData\Local\Packages

In that folder there are a number of app-specific folders, usually quite a lot. Suppose there's a problem with my app AMS HoloATC. In that folder, your user needs to find the folder that is specific to AMS HoloATC:

image

So your user needs to look into 17852LocalJoost.AMSHoloATC_rt0w0x8frck66

In that folder there's a folder TempState, which contains your coveted UnityPlayer.log:

image

In that file there's a lot of stuff - basically everything you see passing by in your Visual Studio Output window if you are running the app from Visual Studio:

Direct3D:
    Version:  Direct3D 11.0 [level 11.1]
    Renderer: Intel(R) HD Graphics 630 (ID=0x591b)
    Vendor:   (null)
    VRAM:     2111 MB
Initialize engine version: 2018.1.0f2 (d4d99f31acba)
WARNING: Shader Unsupported: 'MixedRealityToolkit/InvisibleShader' - Pass '' has no vertex shader
UnloadTime: 1.484216 ms
TryGetGeometry always returns false.
 
(Filename: C:\buildslave\unity\build\Runtime/Export/Debug.bindings.h Line: 43)

Display is Opaque
 
(Filename: C:\buildslave\unity\build\Runtime/Export/Debug.bindings.h Line: 43)

NullReferenceException: Object reference not set to an instance of an object.
   at OpaqueDetector.get_IsOpaque()
   at InitialPlaceByTap.Start()
   at InitialPlaceByTap.$Invoke2(Int64 instance, Int64* args)
   at UnityEngine.Internal.$MethodUtility.InvokeMethod(Int64 instance, Int64* args, IntPtr method) 
(Filename: <Unknown> Line: 0)

And lo and behold - apparently my OpaqueDetector class is playing havoc on the user's machine. Now at least I know where my app is going wrong.
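
Anything you write through Unity's own logging API ends up in this same file, so you can sprinkle a few (non-sensitive!) breadcrumbs of your own to make remote diagnosis easier. A minimal sketch - the class name and message below are just an illustration, not code from AMS HoloATC:

using UnityEngine;

public class StartupDiagnostics : MonoBehaviour
{
    void Start()
    {
        // Debug.Log output is written to UnityPlayer.log as well, so this line
        // will show up in the file you retrieve from the user
        Debug.LogFormat("App started, Unity {0}, device {1}",
            Application.unityVersion, SystemInfo.deviceModel);
    }
}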

Getting the log file from a HoloLens

You can do the same thing on a HoloLens, but there are a few caveats there:

  • You can only do this for sideloaded apps, not apps installed from the Store
  • Your user will need to use the file explorer in the device portal.

image

In User Folders \ LocalAppData \ 17852LocalJoost.AMSHoloATC_2.1.9.0_x86__rt0w0x8frck66 \ TempState \ you will find the log file. But you will need to send the user experiencing the problem an Appx package first. Since HoloLenses are basically all the same, this scenario is not very likely to happen, but it can be useful if you are making a sideloaded business app and someone hits upon an unforeseen scenario that makes the app behave in an unforeseen way - this may help with a quick diagnosis.

Important notes

  • The log file is overwritten every session. So if the user starts the app again after the aforesaid unexpected behavior, the data is gone. Make sure people using or testing the app know this.
  • Make sure you only write non-sensitive stuff to the log file: API keys, cookie values, decoded stuff and other information you don't want to tell the world - don't write that to your log file.
  • This is not a replacement for instrumentation. But it can be (and has been for me) a life saver: if someone from the HoloDevelopers Slack community can repro the problem, they can quickly send you some log files, you can send a fixed Appx back for them to test, and in a few rounds you can squash the bug and submit a fixed version - even if you don't have access to the system playing havoc.

Credits and thanks

Special thanks to freshly (finally!) minted MVP Jesse McCulloch, both for making me aware of this file and where to find it, as well as for being that invaluable repro/tester for me when Walk the World for Immersive Headsets suddenly started to crash on certain configurations - including, fortunately, his own.

08 August 2018

Auto word wrapping a text to fit before a backdrop pane in Mixed Reality apps

Intro

As Mixed Reality in general and HoloLens apps in particular become more mainstream, they become ever more complex, and this reflects in their interfaces. And if, in those user interfaces, we need to communicate something more complex to the user, text is in most cases still the most efficient way of doing that.

Now a floating text by itself may not be very readable, so you might want to use some kind of backdrop. I usually take a kind of blueish background with white text, as that turns out the most readable. And yeah, I know, it feels a bit 2D Windows-ish – but until someone comes up with a better paradigm, it will have to do.

A TextMesh in Unity does not have a clue about word wrapping or fitting into a specific 'box' – you need to do that yourself. There is some of that in the Mixed Reality Toolkit dialog system – but that wraps based on a maximum number of characters. If you change the font size, or the size of the backdrop, you will need to start testing again whether your message fits. No need for that here.

The actual text wrapping

public static class TextUtilities
{
    public static void WordWrap(this TextMesh mesh, float maxLength)
    {
        var oldQ = mesh.gameObject.transform.rotation;
        mesh.gameObject.transform.rotation = Quaternion.identity;
        var renderer = mesh.GetComponent<Renderer>();
        var parts = mesh.text.Split(' ');
        mesh.text = "";
        foreach (var t in parts)
        {
            var builder = new StringBuilder(mesh.text);

            mesh.text = string.Concat(mesh.text, t, " ");
            if (renderer.bounds.size.x > maxLength)
            {
                builder.AppendFormat("{0}{1} ", Environment.NewLine, t);
                mesh.text = builder.ToString();
            }
        }

        mesh.text = mesh.text.TrimEnd();
        mesh.gameObject.transform.rotation = oldQ;
    }
}

This sits in an extension method in the class TextUtilities. It assumes the text has already been applied to the text mesh. What it basically does is:

  • Rotate the text mesh to Identity so it can measure width in one plane
  • Split the text in words
  • For each word:
    • Make a StringBuilder for the text so far
    • Add the word to the mesh
    • Calculate the width of the resulting mesh
    • If the mesh is wider than allowed:
      • Add the word to the StringBuilder with a newline prepending it
      • Set mesh to the StringBuilder’s contents

Now I did not make that up myself, I nicked it from here in the Unity Forums but I kind of simplified and optimized it a bit – a thing I am prone to doing as my colleague knows ;)

Calculating the size

As I have written in a previous post, you can calculate an object's size by getting the Renderer's size. But I also showed that what you get is the size after rotation. So what you need to do is calculate the unrotated size. I put the same trick used to measure the text width into an extension method:

public static class GameObjectExtensions
{
    public static Vector3 GetRenderedSize( this GameObject obj)
    {
        var oldQ = obj.transform.rotation;
        obj.transform.rotation = Quaternion.identity;
        var result = obj.GetComponent<Renderer>().bounds.size;
        obj.transform.rotation = oldQ;
        return result;
    }
}

Rotate the object to identity, get its renderer's size, rotate the object back, then return the result. A rather crude way to get to the size, but it seems to work. I stuck this method into my GameObjectExtensions class.

Connecting all the dots

The only thing now missing is a behaviour that will be using all this:

public class SimpleTextDialogController : MonoBehaviour
{
    private TextMesh _textMesh;

    [SerializeField]
    public GameObject _backPlate;

    [SerializeField]
    public float _margin = 0.05f;

    void Start()
    {
        _textMesh = GetComponentInChildren<TextMesh>();
        gameObject.SetActive(false);
    }

    public void ShowDialog(string text)
    {
        gameObject.SetActive(true);
        StartCoroutine(SetTextDelayed(text));
    }

    private IEnumerator SetTextDelayed(string msg)
    {
        yield return new WaitForSeconds(0.05f);
        _textMesh.text = msg;
        var sizeBackplate = _backPlate.GetRenderedSize();
        var textWidth = sizeBackplate.x - _margin * 2f;
        _textMesh.WordWrap(textWidth);
        _textMesh.GetComponent<Transform>().position -= new Vector3(textWidth/2f, 0,0);
    }
}

So the Start method immediately renders the game object invisible. Calling ShowDialog makes the 'dialog' visible and actually sets the text, by calling the SetTextDelayed coroutine, where the stuff is actually happening. First we get the size of the 'backplate', then we calculate the desired width of the text. After that the text is word wrapped, and then it's moved half the calculated width to the left.

So why the delay? This is because the complete dialog looks like this:

image

I reuse the AdvancedKeepInViewController from my previous post. But if you use the AppearInView property (as I do), that places the dialog's center exactly on the gaze cursor when it's activated. And you will see that it tends to appear to the left of the center, then quickly move to the center.

That is because when the text is rendered without the word wrap, it looks like this:

image

So Unity calculates the horizontal component of the center of all the objects in the combined game object, which ends up a little left of the center of the text. But what we want to see is this:

image

The text is nicely wrapped and fits before the box. Hence the little delay, to allow AdvancedKeepInViewController to calculate and place the object in the center, before we start messing with the text.

Finally there's a simple behaviour called DialogLauncher, but basically all that does is call the ShowDialog method with some text I got from "Startupsum", a kind of Lorem Ipsum generator that uses words from your average Silicon Valley startup marketing manager's BS jargon.

The fun thing is that when you make the dialog bigger and run the app again, it will automatically word wrap to the new size:

image

And when you increase the font size:

image

Requirements and limitations

There are four requirements for the text:

  • The text needs to be placed in the horizontal center of the backdrop plate.
  • Vertically it needs to be in the position where you want the text to start flowing down from
  • It needs to have its Anchor set to upper left
  • The Alignment needs to be left

Limitations

  • If you have a single word in your text that's so long it takes up more space than available, you are out of luck. Germans need to be especially careful ;)
  • As you can see a little in the last image: if the text is so long it takes up more vertical space than available, Unity will render the rest 'in thin air' under the backdrop.

It's actually possible to fix that too, but you might wonder how efficiently you are communicating if you need to write large texts in what is in essence an immersive experience. How many people read an entire six-paragraph text in a zoo or a museum? "Brevity is the soul of wit", as The Bard said, so keep texts short and concise.
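
For completeness: one possible way to handle that vertical overflow (this is not part of the demo project) would be to shrink the TextMesh's character size and re-wrap until the rendered height fits, reusing the extension methods from this post. A sketch:

using UnityEngine;

public static class TextFitUtilities
{
    public static void ShrinkToFitHeight(this TextMesh mesh, string originalText,
        float maxWidth, float maxHeight, float minCharacterSize = 0.01f)
    {
        mesh.text = originalText;
        mesh.WordWrap(maxWidth);
        while (mesh.gameObject.GetRenderedSize().y > maxHeight &&
               mesh.characterSize > minCharacterSize)
        {
            mesh.characterSize *= 0.9f; // shrink in 10% steps
            mesh.text = originalText;   // start from the unwrapped text again
            mesh.WordWrap(maxWidth);    // re-wrap at the new character size
        }
    }
}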

Conclusion

Thanks to Shopguy on the Unity Forums for the original word wrapping code. The code for this article can be found here.

18 July 2018

An advanced gaze-following behaviour to place objects on top or in front of obstructions (and scale if necessary)

Intro

In my previous post I described some complicated calculations using a BoxCastAll to determine how to place a random object on top of or in front of some obstruction in the looking direction of the user, be it another object or the Spatial Mesh. Because the post was long enough as it was, I described the calculations separately. They are in an extra method "GetObjectBeforeObstruction" in my HoloToolkitExtensions class LookingDirectionHelpers, and I wrote a very simple Unity behaviour to show how it could be used. But that behaviour simply polls every so many seconds (2 is the default) where the user looks, then calls GetObjectBeforeObstruction and moves the object there, which gives a kind of nervous result. I promised a more full-fledged behaviour, and here it is: AdvancedKeepInViewController. It's basically sitting in a project that looks remarkably like the demo project in the previous post: the same 'scenery', only there's a 4th element you can toggle using the T key or by saying "Toggle".

image

Features

The behaviour

  • Only moves the object if the head is rotated more than a certain number of degrees per second, or the user moves more than a certain number of meters per second. It uses the CameraMovementTracker from Walk the World that I described in an earlier post.
  • Optionally fine tunes the location where the object is placed after doing an initial placement (effectively doing a BoxCastAll twice per movement)
  • Optionally scales the object to have a more or less constant viewing size. This is intended for billboard-like objects - i.e. floating screens.
  • Optionally makes an object appear right in front of the user when it is enabled, instead of moving it into view from the last place it was before it got disabled.
  • Optionally makes the object disappear when the user is moving more than a certain number of meters per second, to prevent objects from blocking the view or providing distractions. This is especially useful when running an app in a HoloLens while you are on a factory floor, where you really want to see things like handrails, electricity cables, or holes in the floor (possibly with a smelter in it).

The code is not that complicated, but I thought it best to explain it step by step. I skip the part where all the editor-settable properties are listed - you can find them in AdvancedKeepInViewController's source in the demo project. I have added explanatory tooltip descriptions to almost all of them.

Starting up

The start is pretty simple:

void Start()
{
    _objectMaterial = GetComponentInChildren<Renderer>().material;
    _initialTransparency = _objectMaterial.color.a;
}

void OnEnable()
{
    _startTime = Time.time + _delay;
    DoInitialAppearance();
    _isJustEnabled = true;
}

private void DoInitialAppearance()
{
    if (!AppearInView)
    {
        return;
    }

    _lastMoveToLocation = GetNewPosition();
    transform.position = _lastMoveToLocation;
}

We get the material of the first Renderer we can find, and its initial transparency, as we need to be able to revert to that later. Then we check if the user has selected the object to initially appear in front of them, and if so, do the initial appearance. At the end you see GetNewPosition being called; that's a simple wrapper around LookingDirectionHelpers.GetObjectBeforeObstruction. It tries to project the object against an obstruction within a certain maximum distance; if there is no obstruction in that range, it just gives a point at the maximum distance. Since it's called multiple times and I am lazy, I made a little method of it:

private Vector3 GetNewPosition()
{
    var newPos = LookingDirectionHelpers.GetObjectBeforeObstruction(gameObject, MaxDistance,
        DistanceBeforeObstruction, LayerMask, _stabilizer);
    if (Vector3.Distance(newPos, CameraCache.Main.transform.position) < MinDistance)
    {
        newPos = LookingDirectionHelpers.CalculatePositionDeadAhead(MinDistance);
    }
    return newPos;
}

Moving around

The main thing is, of course, driven by the Update loop. The Update method is therefore the heart of the matter:

void Update()
{
    if (_startTime > Time.time)
        return;
    if (_originalScale == null)
    {
        _originalScale = transform.localScale;
    }

    if (!CheckHideWhenMoving())
    {
        return;
    }

    if (CameraMovementTracker.Instance.Distance > DistanceMoveTrigger ||
        CameraMovementTracker.Instance.RotationDelta > DeltaRotationTrigger || 
        _isJustEnabled)
    {
        _isJustEnabled = false;
        MoveIntoView();
    }
#if UNITY_EDITOR
    if (_showDebugBoxcastLines)
    {
        LookingDirectionHelpers.GetObjectBeforeObstruction(gameObject, MaxDistance,
            DistanceBeforeObstruction, LayerMask, _stabilizer, true);
    }
#endif
}

After the startup timeout (0.1 second) has expired, we first gather the original scale of the object (needed if we actually scale). If the user is moving fast enough, hide the object and stop doing anything. Else, use the CameraMovementTracker that I wrote about two posts ago to determine whether the user has moved or rotated enough to warrant a new location for the object (and the first time the code gets here, repositioning should happen anyway). Finally, in the editor, it optionally shows the BoxCast debug lines that I already extensively showed off in my previous post.

So the actual moving around is done by these two methods (using once again good old LeanTween), and the second one is pretty funky indeed:

private void MoveIntoView()
{
    if (_isMoving)
    {
        return;
    }

    _isMoving = true;
    var newPos = GetNewPosition();
    MoveAndScale(newPos);
}

private void MoveAndScale(Vector3 newPos, bool isFinalAdjustment = false)
{
    LeanTween.move(gameObject, newPos, MoveTime).setEaseInOutSine().setOnComplete(() =>
    {
        if (!isFinalAdjustment && EnableFineTuning)
        {
            newPos = GetNewPosition();
            MoveAndScale(newPos, true);
        }
        else
        {
            _isMoving = false;
            DoScaleByDistance();
        }
    });
    _lastMoveToLocation = newPos;
}

So the MoveIntoView method first checks that a move action is not already in progress. Then it gets a new position using - duh - GetNewPosition again, and calls MoveAndScale. MoveAndScale moves the object to its new position, then calls itself an extra time. The idea behind this is as follows: the actual bounding box of the object might have changed between the original cast in MoveIntoView and the eventual positioning, if the object you move is locked to be looking at the camera while it moves, using something like the Mixed Reality Toolkit's BillBoard or (as in my sample) my very simple LookAtCamera behaviour. So a second 'fine tuning' call is done, using the 'isFinalAdjustment' parameter. And if we are done moving, optionally we do some scaling. And this looks like this:

You might also notice that while the cubes appear from the camera's origin, the floating screen appears initially in the right place. This is another option you can select.
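
As an aside, the LookAtCamera behaviour mentioned above does not need to be more than the sketch below; the version in the sample project may differ slightly:

using HoloToolkit.Unity;
using UnityEngine;

public class LookAtCamera : MonoBehaviour
{
    void Update()
    {
        // Keep the object's forward axis pointing away from the camera,
        // so a text or quad stays readable from the user's viewpoint
        transform.rotation = Quaternion.LookRotation(
            transform.position - CameraCache.Main.transform.position);
    }
}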

Scale it up. Or down

For an object like a floating screen with text, you might want to ensure readability. If your text is projected too far away, it might become unreadable. If it is projected too close, the text might become huge, the user can only see a small portion of it - and effectively it's unreadable too. Hence this little helper method:

private void DoScaleByDistance()
{
    if (!ScaleByDistance || _originalScale == null || _isScaling)
    {
        return;
    }
    _isScaling = true;
    var distance = Vector3.Distance(_stabilizer ? _stabilizer.StablePosition : 
        CameraCache.Main.transform.position,
        _lastMoveToLocation);
    var newScale = _originalScale.Value * distance / MaxDistance;
    LeanTween.scale(gameObject, newScale, MoveTime).setOnComplete(() => _isScaling = false);
}

I think it only makes sense for 'text screens', not for 'natural' objects, therefore it's an option you can turn off in the editor. But if you do turn it on, it determines the scale by multiplying the original scale by the distance divided by MaxDistance, assuming that is the distance at which you want to see your object at its original scale as defined in the editor. Be aware that the autoscaling can make the screen appear inside other objects again, so use wisely and with caution.

Fading away when necessary

This method should return false whenever the object is faded out, or fading in or out – that way, MoveIntoView does not get called by Update.

private bool CheckHideWhenMoving()
{
    if (!HideWhenMoving || _isFading)
    {
        return true;
    }
    if (CameraMovementTracker.Instance.Speed > HideSpeed &&
        !_isHidden)
    {
        _isHidden = true;
        StartCoroutine(SetFading());
        LeanTween.alpha(gameObject, 0, FadeTime);
    }
    else if (CameraMovementTracker.Instance.Speed <= HideSpeed && _isHidden)
    {
        _isHidden = false;
        StartCoroutine(SetFading());
        LeanTween.alpha(gameObject, _initialTransparency, FadeTime);
        MoveIntoView();
    }

    return !_isHidden;
}

private IEnumerator SetFading()
{
    _isFading = true;
    yield return new WaitForSeconds(FadeTime + 0.1f);
    _isFading = false;
}

Basically this method says: if this thing should be hidden at high speed and is not already fading in or out:

  • If the user’s speed is higher than the threshold value and the object is visible, hide it
  • If the user’s speed is lower than the threshold value and the object is invisible, show it.

The actual hiding and showing is once again done with LeanTween, but I found that using .setOnComplete was a bit unreliable for detecting when the fading in or out had come to an end. So I simply use a coroutine that sets the blocking _isFading, waits a wee bit longer than the FadeTime, and then clears _isFading again. That way, no multiple fades can start or stop.

The tiny matter of transparency

The HideWhenMoving feature has a dependency – for it to work, the material needs to support transparency. That is to say – its rendering mode needs to be set to transparent (or at least not opaque). As you move around quickly, the semi-transparent box and the double boxes will fade out and in nicely:

But if you move around and the floating screen wants to fade, you will see only the text fade out – the window outline stays visible. This has a simple reason: the material's rendering mode is set to opaque, not transparent:

image

The background of the screen with the button fades out nicely though because it uses a different material – actually a copy, but with only the rendering mode set to transparent:

image

But if you look really carefully, you will see that not the entire screen fades out. Part of the button seems to remain visible. The culprit is the button's backplate:

image

Now it's up to you – you can change the opacity of this material, and then it will be fixed for all buttons. The problem is that this material is part of the Mixed Reality Toolkit. So if you update that, it will most likely be overwritten, and then you will have to keep track of changes like this. Or you can manually change every backplate of every button, or do that once and make your own prefab button. There are multiple roads to Rome in this case.
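
If you prefer not to touch the Toolkit's material asset at all, yet another option is to flip a copy of the material to transparent rendering from code at startup. This is a sketch that assumes the material uses Unity's Standard shader - the Toolkit's own custom shaders may need a different approach:

using UnityEngine;
using UnityEngine.Rendering;

public static class MaterialExtensions
{
    // Switches a Standard shader material to 'Fade' rendering, so LeanTween.alpha can fade it
    public static void MakeTransparent(this Material material)
    {
        material.SetFloat("_Mode", 2); // 0 = Opaque, 1 = Cutout, 2 = Fade, 3 = Transparent
        material.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
        material.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
        material.SetInt("_ZWrite", 0);
        material.DisableKeyword("_ALPHATEST_ON");
        material.EnableKeyword("_ALPHABLEND_ON");
        material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
        material.renderQueue = (int)RenderQueue.Transparent;
    }
}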

Nice all those Unity demos…

But how does it look in real life? Well, like in this video. First it shows that the large semi-transparent cube actually disappears if you move quickly enough, then it shows the moving and scaling of the "Hello World" screen, but it also shows that when you move quickly enough, it will try to fade, but only the text will fade. The two cubes show nothing special other than that they appear more or less on the spatial mesh, and the "Screen with button" shows shrinking and growing as well, and it will fade completely - except for the back plate. I have told you above how to fix that.

Some tidbits

If you try to run the project on a HoloLens or Immersive Headset and wonder where the hell the cubes, capsule and other 'scenery' are that are clearly visible in the Unity editor - they are explicitly turned off by something called the HideInRuntime behaviour that sits in the "Scenery" game object, where all the 'scenery' resides. This is because in a HoloLens you already have real obstructions. If you want to try this in an Immersive Headset, please remove or disable this behaviour, otherwise you will be in a void with almost nothing to test the behaviour against at all.
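
For reference, such a HideInRuntime behaviour can be as simple as the sketch below - the one in the demo project may be implemented differently:

using UnityEngine;

public class HideInRuntime : MonoBehaviour
{
    void Awake()
    {
        // Outside the Unity editor - i.e. on an actual device - switch off the dummy
        // 'scenery', because there you have real obstructions to test against
#if !UNITY_EDITOR
        gameObject.SetActive(false);
#endif
    }
}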

Conclusion

Unlike the previous one, this behaviour makes full use of the possibilities GetObjectBeforeObstruction offers. I think there's still room for improvement and tweaks here. For instance, if you want to use this behaviour just to move and place stuff, simply disable the behaviour when it's done. But this behaviour as it is, is very usable, and in fact I use it myself in various apps now.

13 July 2018

Calculate Mixed Reality object locations on top or in front of obstructions using BoxCastAll

Intro

It was quite difficult to name this article. Usually I try to find a title that more or less describes a good search term that I used myself when I was looking for 'something like this', but I could not really find what I was looking for. What I have here is code that calculates locations for objects to be in front of or on top of other objects and/or the Spatial Mesh. It does so using a BoxCastAll, something I have tried to use before, but not very successfully. I have tried using Rigidbody.SweepTest, and although it works for some scenarios, it did not work for all. My floating info screens ended up halfway into a mountain anyway (in Walk the World), or the airport could not move over the ground because of some tiny obstacle blocking it (in AMS HoloATC). So I tried a new approach.

This is part one of a two-post series. Explaining how the BoxCast works and what extra tricks and calculations were necessary to get it to work properly proved to need quite a long text, so I will leave the behaviour that actually uses this code in a good way for the next post.

BoxCast magic

So what is a BoxCast, actually? It's comparable to a normal RayCast. But where a RayCast gives you the intersection of a line and an obstruction, a BoxCast does that – surprise – with a box. You essentially throw a box from a point along a vector until it hits something – or some things, as a BoxCastAll potentially returns more than one hit. If you take the one that is closest to your camera (a hit has a convenient “distance” property) you potentially have a place where you can place the object.

… except that it does not take into account the following things:

  • An object’s (center) position and the center of its bounding box are not always the same; this will make the BoxCast not always happen at the place you think it does
  • The vector from the camera to the hit may or may not be parallel to the direction of the BoxCast; therefore, we need to project the vector from the camera to the hit on the vector of the BoxCast.
  • The BoxCast hit detection happens at the edge of the casted box, and an object’s position is determined by its center – so we need to move back a little towards the camera, or else about half of our object – determined by its actual orientation – will end up inside the obstruction.

My code takes all of that into account. It was quite hard won knowledge before I uncovered all the lovely pitfalls.

First a new utility method

For a BoxCast to work, you need a box. You typically accomplish that by getting the bounds of all the Renderers in the object you want to cast and combining those into one big bounding box. I hate typing or copying code more than once, so I created this little extension method on GameObject:

public static class GameObjectExtensions
{
    public static Bounds GetEncapsulatingBounds(this GameObject obj)
    {
        Bounds totalBounds = new Bounds();

        foreach (var renderer in obj.GetComponentsInChildren<Renderer>())
        {
            if (totalBounds.size.magnitude == 0f)
            {
                totalBounds = renderer.bounds;
            }
            else
            {
                totalBounds.Encapsulate(renderer.bounds);
            }
        }

        return totalBounds;
    }
}

BoxCast Magic

In LookingDirectionHelpers, a static class containing utilities to calculate directions and places in the direction the user is looking (duh), I have created a method that does the BoxCast magic. It does quite a lot, so I am going through it step by step. It starts like this:

public static Vector3 GetObjectBeforeObstruction(GameObject obj, float maxDistance = 2,
    float distanceFromObstruction = 0.02f, int layerMask = Physics.DefaultRaycastLayers,
    BaseRayStabilizer stabilizer = null, bool showDebugLines = false)
{
    var totalBounds = obj.GetEncapsulatingBounds();

    var headRay = stabilizer != null
        ? stabilizer.StableRay
        : new Ray(CameraCache.Main.transform.position, CameraCache.Main.transform.forward);

     var hits = Physics.BoxCastAll(GetCameraPosition(stabilizer),
                                  totalBounds.extents, headRay.direction,
                                  Quaternion.identity, maxDistance, layerMask)
                                  .Where(h => !h.transform.IsChildOf(obj.transform)).ToList();

As you can see, the method accepts quite a few parameters, most of them optional:

  • obj – the actual object to cast and place against or on top of the obstruction
  • maxDistance – the maximum distance to place the object from the camera (if it does not hit another object first)
  • distanceFromObstruction – the distance to keep between the object and the obstruction
  • layerMask – what layers should we ‘hit’ when we are looking for obstructions (default is everything)
  • stabilizer – used to get a more stable location and viewpoint source than the camera itself
  • showDebugLines – use some awesome helper classes I nicked from the Unity Forums user “HiddenMonk” to show how the BoxCast is performed. Without these, I sure as hell would not have been able to identify all the issues that I had to address.

Well then – first we get the total encapsulating bounds, then we check whether we can use the Stabilizer, or need to fall back to the camera, to define a ray in the direction we want to cast.

And then we do the actual BoxCast, or actually a BoxCastAll. The cast is done:

  • From the Camera position
  • Using the total extents of the object
  • In the direction of the viewing ray (so a line from your head to where the gaze cursor is)
  • using no rotation (we used the Renderer's bounds, which already take any rotation into account)
  • over a maximum distance
  • against the layers described by the layer mask (default is all)

Notice the Where clause at the end. BoxCasts hit everything, including child objects of the cast object itself, as it may be in the path of its own cast. So we need to weed out any hits that apply to the object itself or its children.

The next piece of code visualizes how the BoxCast is performed, using HiddenMonk's code:

if (showDebugLines)
{
    BoxCastHelper.DrawBoxCastBox(GetCameraPosition(stabilizer),
        totalBounds.extents, headRay.direction,
        Quaternion.identity, maxDistance, Color.green);
}

This uses Debug.Draw – these lines are only visible in the Unity editor, in Play mode. They will not show up in the Game pane but in the Scene pane. Which makes sense, as you can then look at the result from every angle without affecting the actual scene in the game.

This looks like this:

image

Now, to address the issues I listed at the top of this article, we need to do a few things.

Giving it the best shot cast

The next line is a weird one, but it is explained by the fact that there may be a difference between the center of the actual bounding box (and thus the cast) and the center of the object as reported by Unity. I am not entirely sure why this is, but trust me, it happens with some objects. We need to compensate for that.

var centerCorrection = obj.transform.position - totalBounds.center;

Below you see an example of such an object. It typically happens when an object is composed of one or more other objects that are off center, and especially when the object is asymmetrical. Like this 'floating screen'. You will see it's an empty game object containing a Quad and a 3DTextPrefab that are moved upwards in local space. Without the correction factor, you get the situation on the left - the BoxCast happens 'too low'.

imageimage

On the right side, you see the desired effect. I opted to change the location of the object to the center of the BoxCast – you might also consider changing the start location of the BoxCast, but that has a side effect: the ray won't start at the user's viewpoint (but in this case, a little bit above it), which might be confusing or produce undesirable results.

Hit or miss - projection

We need to find the closest hit… but that hit might not be right in front of us, along the viewing vector. So we need to create a vector from the camera to the hit, then make a (longer) vector that follows the user's gaze, and finally project the 'hit vector' onto the 'gaze vector'. Then and only then do we know how much room there is in front of us.

if (hits.Any())
{
    var closestHit = hits.First(p => p.distance == hits.Select(q => q.distance).Min());
    var hitVector = closestHit.point - GetCameraPosition(stabilizer);
    var gazeVector = CalculatePositionDeadAhead(closestHit.distance * 2) - 
                       GetCameraPosition(stabilizer);
    var projectedHitVector = Vector3.Project(hitVector, gazeVector);

To show what happens, I have made a screenshot where I made Unity draw debug lines for every calculated vector:

if (showDebugLines)
{
    Debug.DrawLine(GetCameraPosition(stabilizer), closestHit.point, Color.yellow);
    Debug.DrawRay(GetCameraPosition(stabilizer), gazeVector, Color.blue);
    Debug.DrawRay(GetCameraPosition(stabilizer), projectedHitVector, Color.magenta);
}

Which results in the following view (for clarity I have disabled the code that draws the BoxCast for this screenshot)

image

A little magnification shows the area of interest a little bit better:

image

You can clearly see the yellow line from the camera to the original hit, the blue line which is the viewing direction of the user, and the magenta line projected on that.

Keep your distance please

Now this all works fine for a flat object like a Quad (posing as an 'info screen' here). But not on a box like this, for instance (which I made partially translucent for clarity).

image

The issue here is simple, although it took me some time to figure out what was causing it. Like I said before, the hit takes place at the edge of the shape, but the object's position is tied to its center, so if I set the object's position to that hit, it will end up halfway into the obstruction. QED.

So what we need to do is make yet another ray, one that goes from the center of the object to the edge, following the same direction as the projected hit vector (the magenta line). Now RayCasts don't work from inside an object, but fortunately there's another way - the Bounds class supports an IntersectRay method. It works a bit kludgy IMHO, but it does the trick:

var edgeRay = new Ray(totalBounds.center, projectedHitVector);
float edgeDistance;
if(totalBounds.IntersectRay(edgeRay,  out edgeDistance))
{
    if (showDebugLines)
    {
        Debug.DrawRay(totalBounds.center, 
            projectedHitVector.normalized * Mathf.Abs(edgeDistance + distanceFromObstruction),
            Color.cyan);
    }
}

So we intersect the projected hit vector from the center of the bounds to the edge of the bounds. This gives us the distance from the center to the part of the object that hit the obstruction, and we can move the object 'backwards' to the desired position. Since I specified a 'distanceFromObstruction', we can add that to the distance the object needs to be moved 'back' as well, to keep a distance from the obstruction instead of touching it (although for this object it's 0). Yet another debug line, cyan this time, shows what's happening:

image

The cyan line is the part over which the object is moved back. Now the only thing left is to calculate the new position and return it, this time using the centerCorrection we used before to make the object actually appear within the BoxCast's 'outlines':

return GetCameraPosition(stabilizer) + projectedHitVector -
       projectedHitVector.normalized * Mathf.Abs(edgeDistance + distanceFromObstruction) +
       centerCorrection;

Nobody is perfect

If you think "hey, it looks like it is not completely perfectly aligned", you are right. This is because Unity has it's limits in determining volumes and bounding boxes. This is probably because the main concern of a game is performance, not 100% accuracy. If I add this line to the code

BoxCastHelper.DrawBox(totalBounds.center, totalBounds.extents, Quaternion.identity, Color.red);

it actually shows the bounding box:

image

So this explains a bit more what is going on. With all the debug lines enabled it looks like this, which I can imagine is as confusing as helpful ;)

image

Show and tell

It's actually not really easy to properly show you how this method can be utilized. As I said in the beginning, I will save that for the next post. In the meantime, I have cobbled together a demo project that uses GetObjectBeforeObstruction in a very simple way. I have created a SimpleKeepInViewController that polls every so many seconds (2 is the default) where the user looks, then calls GetObjectBeforeObstruction and moves the object there. This gives a bit of a nervous result, but you get the idea.

public class SimpleKeepInViewController : MonoBehaviour
{
    [Tooltip("Max distance to display object before user")]
    public float MaxDistance = 2f;

    [Tooltip("Distance before the obstruction to keep the current object")]
    public float DistanceBeforeObstruction = 0.02f;

    [Tooltip("Layers to 'see' when detecting obstructions")]
    public int LayerMask = Physics.DefaultRaycastLayers;

    [Tooltip("Time before calculating a new position")]
    public float PollInterval = 2f;

    [SerializeField]
    private BaseRayStabilizer _stabilizer;

    [SerializeField]
    private bool _showDebugBoxcastLines = true;

    private float _lastPollTime;


    void Update()
    {
        if (Time.time > _lastPollTime)
        {
            _lastPollTime = Time.time + PollInterval;
            LeanTween.move(gameObject, GetNewPosition(), 0.5f).setEaseInOutSine();
        }
#if UNITY_EDITOR
        if (_showDebugBoxcastLines)
        {
            LookingDirectionHelpers.GetObjectBeforeObstruction(gameObject, MaxDistance,
                DistanceBeforeObstruction, LayerMask, _stabilizer, true);
        }
#endif
    }

    private Vector3 GetNewPosition()
    {
        return LookingDirectionHelpers.GetObjectBeforeObstruction(gameObject, MaxDistance,
            DistanceBeforeObstruction, LayerMask, _stabilizer);
    }
}

There is only one oddity here – you see I actually call GetObjectBeforeObstruction twice. But the extra call only happens in the editor, and only if you select the Show Debug Boxcast Lines checkbox:

image

If I did not add this, you would see the lines flash for one frame every 2 seconds, which is hardly enlightening. This way, you can see them all the time in the editor.



image

In the demo project you will find three objects – in the images above you have already seen a single block (the default) and a rotating ‘info screen’ that shows “Hello World”, and there's also this composite object on the left (two cubes off-center), here displayed with all debug lines enabled ;). You can toggle between the three objects by saying “Toggle” or by pressing the “T” key. The latter will actually also work on a HoloLens if you have a Bluetooth keyboard attached - and believe me, I tried ;-)


Conclusion

Yet another way to make an object appear next to or on top of an obstruction ;). This code actually took me way too much time to complete, but I learned a lot from it and at some point it became a matter of honor to get the bloody thing to work.

Fun factoid: most of the code, and a big part of the blog post, was actually written on train trips to and from an awesome HoloLens project I am currently involved in. Both the demo project and this blog post were actually published while I was on my way, courtesy of the Dutch Railways' free WiFi service ;)

13 June 2018

Measuring user movement in Mixed Reality apps

Intro

For a business HoloLens app I am currently developing - as well as for my app Walk the World - I needed a way to see if a user is moving or rotating in excess of a certain speed, to make certain control elements that are floating in view disappear when he/she is on the move - and come back when the movement stops. How this is used in detail I will describe later, but first I want to describe an easy helper behaviour to sample and measure speed, movement and rotation.

The actual tracker

using HoloToolkit.Unity;
using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    public class CameraMovementTracker : Singleton<CameraMovementTracker>
    {
        [SerializeField]
        private float _sampleTime = 1.0f;
        
        private Vector3 _lastSampleLocation;
        private Quaternion _lastSampleRotation;
        private float _lastSampleTime;

        public float Speed { get; private set; }
        public float RotationDelta { get; private set; }
        public float Distance { get; private set; }

        void Start()
        {
            _lastSampleTime = Time.time;
            _lastSampleLocation = CameraCache.Main.transform.position;
            _lastSampleRotation = CameraCache.Main.transform.rotation;
        }
   }
}

The behaviour is implemented as a Singleton. Although that is not strictly necessary, it makes sense to do so, as there can also be only one Mixed Reality camera and there is only one user. There is only one editor-settable value - the sample time. The idea of a sample time is simple - if you want to measure speed, or rotation, or movement, you have to do so over time. By default it samples location and rotation every second, and then it's up to you to decide to do something with it. At the start, it simply sets the sample time to now, the first sample location to the camera's current location, and the rotation to its current rotation.

In the Update method (called every frame) we simply check whether the sample time period has expired, and if so, we take a new sample of location and rotation:

void Update()
{
    if (Time.time - _lastSampleTime > _sampleTime)
    {
        Speed = CalculateSpeed();
        RotationDelta = CalculateRotation();
        Distance = CalculateDistanceCovered();
        _lastSampleTime = Time.time;
        _lastSampleLocation = CameraCache.Main.transform.position;
        _lastSampleRotation = CameraCache.Main.transform.rotation;
    }
}

The calculations itself are rather simple:

private float CalculateDistanceCovered()
{
    return Vector3.Distance(_lastSampleLocation, CameraCache.Main.transform.position);
}

private float CalculateSpeed()
{
    // return speed in km/h
    return CalculateDistanceCovered() / (Time.time - _lastSampleTime) * 3.6f;
}

private float CalculateRotation()
{
    return Mathf.Abs(Quaternion.Angle(_lastSampleRotation,
        CameraCache.Main.transform.rotation));
}

The distance is simply the difference between the previous and the current camera position. Time.time is always in seconds since the app started, so dividing the distance covered by the elapsed time results in the speed in meters per second. Multiplying it by 3.6 makes that km/h - I presumed that to be a unit most people have a feeling for. Feel free to adapt this to your needs and have it return miles, yards, feet, furlongs, stadia or your outdated/obscure distance unit of choice ;).
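
For instance, a hypothetical variant returning miles per hour would only need a different conversion factor (1 m/s is roughly 2.23694 mph):

private float CalculateSpeedMph()
{
    // return speed in miles per hour
    return CalculateDistanceCovered() / (Time.time - _lastSampleTime) * 2.23694f;
}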

So what is this good for?

Well, simply put - to take action when some threshold for rotation or movement is crossed. Like I mentioned, it's particularly useful for determining whether control elements that should be more or less in the user's field of view need to be moved - but not too often or too brusquely, or else it is not possible to properly view or interact with them. In the demo project I have created a little demo behaviour that shows speed, distance covered and rotation in a floating text, and it also uses that data to decide whether or not it's time to move the text back into view.

image

This is a picture of the text just after it moved back into view. It will rapidly go back to showing all zeroes, as it only measures these values over the last second.

The demo behaviour in a bit more detail:

using HoloToolkitExtensions.Utilities;
using UnityEngine;

public class ShowCameraActions : MonoBehaviour
{
    private TextMesh _mesh;

    [SerializeField]
    private float _rotationThreshold = 10f;

    [SerializeField]
    private float _moveTreshold = 0.4f;

    [SerializeField]
    private float _moveTime = 0.2f;

    private bool _isBusy;

    void Start()
    {
        _mesh = GetComponentInChildren<TextMesh>();
        MoveText();
    }

    void Update()
    {
        SetText();
        if ((CameraMovementTracker.Instance.RotationDelta > _rotationThreshold ||
            CameraMovementTracker.Instance.Distance > _moveTreshold ) && !_isBusy)
        {
            MoveText();
        }
    }

    private void MoveText()
    {
        _isBusy = true;
        LeanTween.move(gameObject, 
                        LookingDirectionHelpers.CalculatePositionDeadAhead(), _moveTime).
                  setEaseInOutSine().setOnComplete(() => _isBusy = false);
    }

    private void SetText()
    {
        var text = string.Format(
            "Speed: {0:00.00} km/h - Rotation: {1:000.0}° - Moved {2:00.0}m",
            CameraMovementTracker.Instance.Speed,
            CameraMovementTracker.Instance.RotationDelta,
            CameraMovementTracker.Instance.Distance);
        if (_mesh.text != text)
        {
            _mesh.text = text;
            Debug.Log(text);
        }
    }
}

Long story short:

  • The text will be updated in every call to Update (which typically happens 60 times per second), but since the CameraMovementTracker updates itself only once a second by default, you should see the text change only once a second. I have also included a Debug.Log so you can see the numbers change when the text is still outside of your view. This of course only works in the Unity editor.
  • If the rotation threshold (10 degrees) or movement threshold (0.4 meters) is exceeded, the behaviour will attempt to move the text back into view (if it is not already doing so), using good old LeanTween. The "setEaseInOutSine" will make the movement start and stop fluently.

Conclusion

It's not hard to measure these things and the code is not complicated, but as is my custom - if I need to make something the 3rd time, it's time to make it into a generalized reusable class. And there you have it. Have fun with the demo project.

30 May 2018

Simple way to prevent an unintended double tap in Mixed Reality apps

This is an easy and simple tip, but one that I use in almost every Mixed Reality app at one point. It's mostly applicable to HoloLens: especially new users tend to double (triple, or even more) tap when they are operating an app, because the gestures are new. This can lead to undesirable and confusing results, especially with toggles. So it can help a little to 'dampen out' those double taps.

I created this very simple helper class called DoubleClickPreventer:

using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    public class DoubleClickPreventer
    {
        private readonly float _clickTimeOut;

        private float _lastClick;

        public DoubleClickPreventer(float clickTimeOut = 0.1f)
        {
            _clickTimeOut = clickTimeOut;
        }

        public bool CanClick()
        {
            if (Time.time - _lastClick < _clickTimeOut)
            {
                return false;
            }
            _lastClick = Time.time;
            return true;
        }
    }
}

Basically, every time you ask it whether you can click, it checks if a set amount of time (default 0.1 second) has passed since the last click. Its usage is pretty simple: just make a behaviour that implements IInputClickHandler (from the Mixed Reality Toolkit) as usual, define a DoubleClickPreventer member, create it in Start like this

_doubleClickPreventer = new DoubleClickPreventer(0.5f);

and then in your OnInputClicked implementation use something like this:

if (_doubleClickPreventer.CanClick())
{
  // do stuff
}

and this will prevent a second click from being processed within half a second of the previous one.
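
Put together, a guarded click handler could look something like the sketch below (the GuardedClicker behaviour in the demo project mentioned next may differ in detail):

using HoloToolkit.Unity.InputModule;
using HoloToolkitExtensions.Utilities;
using UnityEngine;

public class GuardedClicker : MonoBehaviour, IInputClickHandler
{
    private DoubleClickPreventer _doubleClickPreventer;
    private int _clickCount;

    void Start()
    {
        // Ignore any tap that follows the previously accepted one within half a second
        _doubleClickPreventer = new DoubleClickPreventer(0.5f);
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        if (_doubleClickPreventer.CanClick())
        {
            _clickCount++;
            Debug.Log("Registered click " + _clickCount);
        }
    }
}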

I made a little demo project, in which one cube has a GuardedClicker behaviour that uses the DoubleClickPreventer, and the other a SimpleClickPreventer, that just registers every click.

image

If you click like a maniac for about a second or two on both cubes, you will clearly see a different result.

Note: the InputClickedEventData that you get in IInputClickHandler.OnInputClicked contains a “TapCount” property, but I have found that it's usually just 1, and it does not stop the ‘stuttery double tap’ we are trying to prevent here anyway. Also, this solution allows for fast clicks on separate objects, but not fast clicks on the same object.

The demo project (although it’s very little) can be downloaded here.

21 May 2018

Developing in Unity for Windows Mixed Reality without having to unplug your device all the time

A very simple but potentially quite a big time saving tip this time.

Typically, when you are developing for Windows Mixed Reality, you spend a lot of time in the Unity editor getting things just right. Unity and Mixed Reality integration is awesome - you can just hit play and your scene will show directly in your head set.

But sometimes, that is just not what you want. If you are fiddling with a tiny bit of interaction or animation code (and we all know that can take a lot of time in Unity) you often just want to hit play, observe the results in the Game window, and maybe move around a bit with the WASD keys or an Xbox Controller. In that workflow, the necessity of putting on a Mixed Reality device - and taking it off again - for every little iteration can be a cumbersome process.

There are two solutions for this. First, unplug your device if you don't need it. Duh. But if you use a big desktop development box, this means reaching around to the back of the machine. For laptops and desktop boxes both, you will need to fiddle with plugs, which in time might wear out or damage the plugs and/or the plug sockets of your PC. I am unfortunately speaking from experience here.

So what to use? Good ol' Device Manager. Simply type "Device Manager" in your start menu:

image

You will find a node "Mixed Reality Devices". Find yours (I have a Samsung Odyssey indeed). Simply right-click

image

Hit "Disable device", click "Yes" on the rather ominous following warning, and your headset is now an expensive paperweight, no longer paying attention to what Unity does. You can now use Unity Play mode the 'old' way.

If you are done finicking in Unity, you can simply enable the device again using the Device Manager, and your awesome headset wakes up again.

14 May 2018

Fixing the "...Unity\Tools\AssemblyConverter.exe exited with code 1" error when you are building your Mixed Reality app

It's not easy being green - I mean, a Mixed Reality developer

This is one that has annoyed me for quite some time. When developing a Mixed Reality app, I usually go like this:

  • Change something in Unity
  • Build your Mixed Reality app from Unity
  • Open Visual Studio
  • Build your app and run in your HoloLens or Immersive headset
  • Be not entirely satisfied with the result
  • Change something more in Unity
  • Build your Mixed Reality app again

And then something like this happens:

image

An extremely unhelpful error message. If you copy the whole command to a command line and run it yourself, you get a little more info:

System.Exception: Unclean directory. Some assemblies are already converted, others are not.
    at Unity.SanityCheckStep.Execute()
    at Unity.Step.Execute(OperationContext operationContext, IStepContext previousStepContext)
    at Unity.Operation.Execute()
    at Unity.Program.Main(String[] args)

So here's a clue. Apparently some cruft stays behind, preventing Unity from rebuilding the app the second time around. This is because Unity does not always overwrite all the files, presumably to speed up the build process that second time. Only apparently this messes up sometimes.

image

So then you go and delete your generated app, but don't forget to retain some files you might want to keep (like your manifest file). You might add those to your repo - but then don't forget to revert after deleting. And oh yes, if you are testing on your local machine (and not your HoloLens or a different machine) you might find you can't even delete everything because it's locked, so you need to uninstall the app first. And this happens every second time you compile the app. And yes, this also happens when you use the Build Window.

Meh indeed

I never thought I would ever write this sentence, but here it goes:

PowerShell to the rescue

This first step you only need to do when you are testing on your development machine, and not on a HoloLens.

We need to find out what the actual package name of your app is. There are two ways to do this. The simplest one is going over to your Package.appxmanifest, double-clicking it and selecting the "Packaging" tab:

image

Or you can just run a PowerShell Command like this:

Get-AppxPackage | Where-Object {$_ -like "*Walk*"}

Anyway, then you can make a script, call it "CleanApp.ps1" (or whatever you want to call it) and add the following commands:

Get-AppxPackage 99999LocalJoost.ThisIsMyApp | Remove-AppxPackage

$PSScriptRoot = Split-Path -Parent -Path $MyInvocation.MyCommand.Definition
Get-ChildItem -Path ${PSScriptRoot}\App\ `
    -Include *.dll,*.pdb,*.winmd,*.pri,*.resw,Appx -Recurse | `
    Remove-Item -Force -Recurse

This assumes your generated app is sitting in a directory "App" that is a subdirectory of the directory the actual script is sitting in. I typically place this script in the root of my (Unity) project, while the App folder is a subdirectory of that.

So your typical workflow becomes now:

  • Change something in Unity
  • Run this script
  • And then continue building the Mixed Reality app as you see fit (either from Unity to Visual Studio, or directly from the Build Window)

I deem you all capable of copying these lines of code from this blog post, so I won't put any code online to go with this article.

05 May 2018

Running Mixed Reality apps on Windows 10 on ARM PCs–get ready for a surprise

Intro

I was planning to write a step-by-step procedure of the things you would need to do to get the Mixed Reality app I created in my previous post to work on a Windows 10 on ARM PC. After all, when I tried to do that on a Raspberry Pi 2 quite some time ago, some creative slashing was necessary.

Life is what happens while you are busy making other plans

Turns out that what I needed to do was exactly nothing. Well, I had to compile and deploy it for ARM. And that worked. Just like that, just like on an Intel-based PC as I described in my previous post. When I deployed Walk the World to the Windows 10 on ARM PC some posts ago, I still had to remove some parts of the Mixed Reality Toolkit to make the ARM tools swallow the sources. Apparently, that's no longer necessary.

And then I tested it. And I learned I had to make some changes to my code after all. I think Microsoft likes to hear that you can run code on Windows 10 on ARM PCs unchanged, but in this case I don't think they will mind me saying this: I actually needed to make some changes because it was running too bloody fast on a Windows 10 on ARM PC. Yeah, you read that right. The code I wrote for controlling the app via an Xbox One Controller responded so fast it was actually nearly impossible to control the view point, especially when rotating. Even when I compiled it for x86 and CHPE had to do the translation, it still ran too fast for reasonable control.

It actually ran faster than on my i7 Surface Pro 4. That was one serious WTF, I can tell you that.

One trigger, two triggers

You might remember that in the previous post I used the right trigger to make moving and rotating go faster. Well, we sure don’t need to go faster here, so I adapted the code that calculates the speed-up factor:

var speed = 1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis;

to also use the left trigger to slow the speed down:

var speed = (1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis) - 
            (eventData.XboxLeftTriggerAxis * 0.9f);

And that works reasonably well: with the default acceleration factor of 2, fully pressing the left trigger (and leaving the right one alone) drops the factor to 0.1, while fully pressing the right trigger alone still takes it up to 3.

Now I made a setup quite comparable to my previous post on Windows 10 on ARM, only now the x86/ARM versions are not only compared with each other, but also with an x64 version running on my Surface Pro 4.

IMG_6845_2

The Surface Pro 4 is running on its own screen and is connected to the right Xbox Controller and the gray ArcMouse; the Windows 10 on ARM PC, once again missing from this picture, is connected to the black ArcMouse, the Dell monitor and the left Xbox Controller via the Continuum dock that you can see just in front of the MVP thermos.

So here’s a little video of the three versions:

You can clearly see the Windows 10 on ARM PC is quite a bit faster than the Surface Pro 4, and that even the x86 CHPE-fied version is faster, so rotating indeed needs the left trigger to slow it down to get some semblance of control. At the end, you can actually see all three of them running together.

IMG_6852_3

The difference between the x86 and the ARM version is mainly in startup time here and a wee bit of general performance (although you mainly notice that when you actually operate the app – if you just watch it’s less obvious). Last time I wrote about Windows 10 on ARM I already concluded that CHPE does an amazing job as far as graphics performance goes, and it shows here again.

image

Interesting detail – the Windows 10 on ARM PC does not show this popup at the end, while the Surface Pro 4 does. This may be because Windows x64 actually has the optional “Windows Mixed Reality” component (although this particular hardware doesn’t support it), while Windows 10 on ARM does not have that particular component. Also, the latter still runs the Fall Creators Update, while the Surface Pro 4 runs the April 2018 Update. Both may be a factor. I have no way to test this now.

Two versions of the same app?

You might have noticed this before - I sometimes run two versions of the same app together on one PC. That's normally not possible - if you deploy one version it gets overwritten by the other, even when you change the target architecture in Visual Studio. To get two versions of the same app to run on one computer, you will need to fiddle a bit with the Package.appxmanifest. Open it as an XML file (not via the beautiful GUI editor provided by Visual Studio) and change the Name in the Identity element (the 3rd line in the file):

<?xml version="1.0" encoding="utf-8"?>
<Package xmlns:....
   <Identity Name="XBoxControllerDemo" Publisher="CN=DefaultCompany" Version="1.0.0.0" />

and change XBoxControllerDemo to, for instance, XBoxControllerDemoARM.

Then look a bit lower for the VisualElements tag

<uap:VisualElements DisplayName="XBoxControllerDemo"

And change that to, for instance, "XBoxController ARM Version" - to make sure the two versions also get separate icon labels.

Do not ever do this on production apps, but if you want to do your own kind of crazy A/B testing like me, it can be useful.

Conclusion

This article is quite a bit shorter than I anticipated, but that's because Mixed Reality apps seem to run amazingly well on Windows 10 on ARM PCs with very little work. This platform is a serious candidate for Unity-generated UWP apps.

I am now seriously considering rebuilding my Mixed Reality apps with this new MRTK and the newest applicable version of Unity, and including an ARM package in the store. Why not. It runs fine. Let's see if users like it.

No (new) project this time. You can find the project with the updated XBoxControllerAppControl.cs (still) here.

02 May 2018

Running your Mixed Reality app on an ‘ordinary’ PC–using an Xbox One Controller

Intro

Let’s face it – although Windows Mixed Reality has a steady uptick (at least I think I can draw that conclusion from the increasing download numbers of my two Mixed Reality apps in the Windows Store) – not everyone has a Mixed Reality headset, or even a PC capable of supporting one. Time will take care of that soon enough. In the meantime, as a Mixed Reality developer, you might want to show all 700 million Windows 10 users a glimpse of your app, instead of ‘only’ the HoloLens and Mixed Reality headset owners out there. Even in a reduced state, it gets you eyeballs, and may entice people to get themselves a headset after all. It’s not like they are expensive these days.

This sounds familiar?

Well it should. This is far from original. I have been down this road before, describing how to run a HoloLens app on a Raspberry PI2. That’s the U in UWP for you. Only now we are going to run on a full PC – in my case, a Surface Pro 4. That’s a sufficiently high end device for a nice experience, but it predates the Windows Mixed Reality era by almost two years and does not support it. But you can’t walk around without a headset, so we will need another means to change our view point.

Parts list

  • One reasonably well performing PC that is not capable of supporting Mixed Reality – or at least one with the Mixed Reality Portal not installed
  • Unity 2017.2.1p2
  • The Mixed Reality Toolkit 
  • One XBox One controller

The first point is important – for if you have the portal installed, your PC will launch it like a good boy trying to do the logical thing - and you won’t see the effect I am trying to show you.

Setting up the project

I created a new project in Unity, copied in the latest Mixed Reality Toolkit, then clicked the three menu options under Mixed Reality Toolkit/Configure.

Then I added my standard empty game objects “Managers” (with nothing in it) and “HologramCollection” with a cube and a sphere, to have something to see:

image

There is more to those two objects than meets the eye, but we will get to that later.

Control the view point using an XBox Controller

There’s a simple class for that in my ever growing HolotoolkitExtensions, which starts like this:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    public class XBoxControllerAppControl : MonoBehaviour, IXboxControllerHandler
    {
        public float Rotatespeed = 0.6f;
        public float MoveSpeed = 0.05f;
        public float TriggerAccerationFactor = 2f;

        private Quaternion _initialRotation;
        private Vector3 _initialPosition;

        private readonly DoubleClickPreventer _doubleClickPreventer = 
                                                new DoubleClickPreventer();
        void Start()
        {
            _initialRotation = gameObject.transform.rotation;
            _initialPosition = gameObject.transform.position;
        }
    }
}

I tend to offer settings to the Unity editor as much as possible - to make it easy to reuse this class and adapt its behavior without code changes. Here I offer some speed settings: you can set the maximum rotation speed, the maximum speed at which the camera moves, and the ‘speed up factor’ that is applied to all values when the right trigger is pressed. Be advised the controller inputs are all analog values between 0 and 1, so you can already control the speed by varying the amount of pressure you apply to the sticks and the D-pad. But sometimes you just wanna go fast, hence the trigger. Also notice how the initial rotation and position are retained.

The main routine is of course OnXboxInputUpdate, as the IXboxControllerHandler interface mandates its presence.

public void OnXboxInputUpdate(XboxControllerEventData eventData)
{
    if (!UnityEngine.XR.XRDevice.isPresent)
    {
        var speed = 1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis;

        gameObject.transform.position += eventData.XboxLeftStickHorizontalAxis * 
                                         gameObject.transform.right * MoveSpeed * speed;
        gameObject.transform.position += eventData.XboxLeftStickVerticalAxis * 
                                         gameObject.transform.forward * MoveSpeed * speed;

        gameObject.transform.RotateAround(gameObject.transform.position, 
            gameObject.transform.up, 
            eventData.XboxRightStickHorizontalAxis * Rotatespeed * speed);
        gameObject.transform.RotateAround(gameObject.transform.position, 
            gameObject.transform.right, 
            -eventData.XboxRightStickVerticalAxis * Rotatespeed * speed);

        gameObject.transform.RotateAround(gameObject.transform.position, 
            gameObject.transform.forward, 
            eventData.XboxDpadHorizontalAxis * Rotatespeed * speed);

        var delta = Mathf.Sign(eventData.XboxDpadVerticalAxis) * 
                    gameObject.transform.up * MoveSpeed * speed;
        if (Mathf.Abs(eventData.XboxDpadVerticalAxis) > 0.0001f)
        {
            gameObject.transform.position += delta;
        }

        if (eventData.XboxB_Pressed)
        {
            if (!_doubleClickPreventer.CanClick()) return;
            gameObject.transform.position = _initialPosition;
            gameObject.transform.rotation = _initialRotation;
        }

        HandleCustomAction(eventData);
    }
}

Let’s unpack that a little.

Important is the if (!UnityEngine.XR.XRDevice.isPresent) check. We only want this behaviour to do its work when there is no headset present whatsoever – no Mixed Reality headset, no HoloLens.

  • First we calculate a possible ‘speed up factor’ to be applied when the trigger is used. If it is not, it’s simply 1 and has no effect on the actual movement or rotation.
  • The left stick is used for movement in the ‘horizontal’ plane – forward, backward, left, right. Be aware the axes are relative. So if you are rotated 45 degrees to the left and you move ‘left’, you will move to the left relative to that rotation. It’s actually logical – your frame of reference is always yourself, not some random rotation that happened to be in place when you got somewhere.
  • The right stick is used for rotation around your up axis and your horizontal (left-to-right) axis. Moving it to the right will make you spin to the right (I negate the actual value coming from the stick as you can only rotate a game object around its left axis), pushing it forward will make you look at the floor.
  • That leaves moving up and down, and rotating left and right. The D-pad fills the voids: pushing it left or right will make you rotate sideways (like you are falling to the left or right), pushing it up or down will make your viewpoint move up or down.

This is exactly the way it works when you use an Xbox Controller to steer the Unity editor in play mode. The D-pad feels a bit counter-intuitive to me, but when you try to move in three dimensions using sticks that move both in only two dimensions, you will need something extra, and the D-pad is the only thing left. It feels odd to me, but it works.

Then finally the B button – when you press that, you get back to your initial position. This is very useful if you have messed around a bit too much and completely lost track of where you are. And that is mostly all of it.

A tiny bit of SOLID

protected virtual void HandleCustomAction(XboxControllerEventData eventData)
{
}

Hardly worth mentioning, but should you want to add your own logic for handling controller buttons or triggers, you can make a child class of this XBoxControllerAppControl and override this method. It’s a hook that makes the class open for extension but keeps its own logic intact. That’s better than making OnXboxInputUpdate virtual, because that would enable you to interfere with the existing logic by not calling the base OnXboxInputUpdate. It’s the O of SOLID.

image
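
For illustration, here is a minimal sketch of such a child class - purely hypothetical, and assuming the toolkit exposes an XboxY_Pressed property analogous to the XboxB_Pressed property used above:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    // Hypothetical example: resets only the rotation when the Y button is pressed,
    // while all the standard logic of XBoxControllerAppControl stays untouched
    public class ResetRotationAppControl : XBoxControllerAppControl
    {
        protected override void HandleCustomAction(XboxControllerEventData eventData)
        {
            // XboxY_Pressed is assumed to exist, analogous to the XboxB_Pressed used above
            if (eventData.XboxY_Pressed)
            {
                gameObject.transform.rotation = Quaternion.identity;
            }
        }
    }
}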

How to use it

Simply drag it onto the MixedRealityCameraParent, change the settings to your liking and you are done. I think I picked some reasonable default settings.

But wait, there’s more!

I have found the Xbox Controller buttons tend to stutter – that is, they sometimes fire repeatedly, and rapid-fire events can make a bit of a mess.

So I created this little helper DoubleClickPreventer, which is not exactly rocket science, but very useful:

using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    public class DoubleClickPreventer
    {
        private readonly float _clickTimeOut;

        private float _lastClick;

        public DoubleClickPreventer(float clickTimeOut = 0.1f)
        {
            _clickTimeOut = clickTimeOut;
        }

        public bool CanClick()
        {
            if (!(Time.time - _lastClick > _clickTimeOut))
            {
                return false;
            }
            _lastClick = Time.time;
            return true;
        }
    }
}

It’s rather simple: whenever the method CanClick is called, a time stamp is set. If the method is called again within 0.1 seconds it returns false, otherwise it returns true. It’s actually used twice within this sample: it’s also used in the little helper class “SelectorDemo” that makes the sphere and the cube go “plonk” and flash blue when you click them using the Xbox “A” button. I won’t go into that – you can find it in the demo project, and its inner workings are left as an exercise to the reader.

And it looks like…

There are a few things you might notice. First of all, I apparently am able to select something, but I never coded for it. That’s courtesy of the Mixed Reality Toolkit – your Xbox Controller’s “A” button is acting the same as saying “Select” in a Mixed Reality app while you are gazing at something, air tapping while using a HoloLens, or pointing your Mixed Reality controller to an object and pressing the trigger.

Also, you might notice this at the end of the video:

image

A clear sign Windows is not really content with this. It figures – if nothing prevented you from downloading an app that simply does not work on your machine, that might disgruntle users. But still – the app launches and seems to work.

Some other things to notice and take into consideration

  • Use the right Unity version: 2017.2.1p2. That’s the one that goes with this release of the Mixed Reality Toolkit. Using newer versions of Unity or the toolkit (like the development branch) I got results varying from the app not wanting to compile, to crashing, to simply not starting. I also got just this “Can’t open app” dialog and nothing else
  • You can also see a (very small) “Development build” text in the lower right corner. There’s a check box in Unity that everyone tells you to use to make that text go away. The trouble is, that does not work. What will make it go away is building the app with the Master configuration. That and only that. For Mixed Reality apps, this check box is apparently only there for show. At least as far as this text is concerned, and as far as I can see ;).

buildmaster

  • And finally, when making these apps run on an ordinary PC, you might want to rethink the UI a bit in places. Floating menus, which are very cool in real Mixed Reality environments, can be really hard to use on a flat screen, for instance. Also, placing things on top of the ‘floor’ might be a bit of a challenge without a floor – even a virtual one.

Concluding words

I am not sure if this will continue working going forward with the MRTK and Unity, how useful this will be in the real world, or whether the Mixed Reality team even appreciates this approach. I am simply showing you what’s possible and one possible way to tackle this. Your mileage may vary, very much in fact. Have fun!

Once again – demo project here