Introduction
As you might have seen, the SensorCore SDK has been released. I intended to bring out a full-fledged app utilizing this new and exciting SDK, but I found out that I first had to explain a bit about the ‘supporting acts’ I was going to use – if you study my last four posts you will see they all contain parts of the sample app that goes with this post. In addition, last month a rather time- and energy-consuming, frustrating, and ultimately fruitless and disappointing thing crossed my way, with which I won’t bother you – suffice it to say I would rather have spent that time on writing apps.
Anyway. I set out to make an app that would use both the SensorCore ActivityMonitor and TrackPointMonitor, as well as the existing MapLocationFinder, to find out what I was doing where, and when.
For those who are not familiar with the concept of SensorCore: a few newer devices, among which the Lumia 630, have hardware aboard that allows you to track where the user has been – roughly every 5 minutes – and what he was doing, for the last 10 days. Based upon this information, it designates places that are apparently important to the owner (like home and work). In addition, it has a step counter. Everything is handled by a special, very low-power piece of hardware that continuously tracks location and movement with very little battery drain.
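The step counter is mentioned here but not used in the sample app. For completeness, a minimal sketch of how querying it could look, assuming the StepCounter class follows the same IsSupportedAsync/GetDefaultAsync pattern as the monitors used later in this post – the method and property names below are from my memory of the beta SDK and may differ:

```csharp
// Hedged sketch – StepCounter, GetStepCountForRangeAsync, WalkingStepCount and
// RunningStepCount are assumed names based on the beta Lumia SensorCore SDK.
using System;
using System.Threading.Tasks;
using Lumia.Sense;

public static class StepCountSketch
{
  public static async Task<uint> GetTodaysStepsAsync()
  {
    if (!await StepCounter.IsSupportedAsync())
    {
      return 0;
    }
    var stepCounter = await StepCounter.GetDefaultAsync();
    // Steps recorded since midnight, local time
    var reading = await stepCounter.GetStepCountForRangeAsync(
      DateTime.Now.Date, DateTime.Now - DateTime.Now.Date);
    return reading.WalkingStepCount + reading.RunningStepCount;
  }
}
```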
Now for the tin foil hat crew: there are some key differences to the way other platforms handle this. First of all, it can only happen with the user’s consent – it’s a phone setting that has to be explicitly turned on, and then and only then does the hardware start to collect data. Furthermore, this data never leaves the phone all by itself – there is no Bing server tracking your actions to clutter your phone with ads or other stuff you may not necessarily appreciate. The only way this data can ever leave your phone is via some form of app – but first you have to give the phone consent, then the app, and said app has to pass certification as well. The Windows/Windows Phone ecosystem takes your privacy very seriously. Ask any developer – almost everyone has been rejected at least once over some tiny detail they missed in the privacy sections of the certification requirements ;-)
The app I am going to show you looks like this. With sincere apologies for the video quality – apparently maps on a slow device is a bit too much for a cable connection with a Lumia 630, but it shows the gist.
Setting the stage
So I created a Windows Phone 8.1 (Store) app WhatWhenWhere with one supporting class library, WpWinNl.MvvmLight.Contrib. I added my WpWinNlMaps NuGet packages to both projects, and added PropertyChanged.Fody to both as well. The contrib package holds BaseNotifyingModel and TypedViewModelBase, as described in this article on using PropertyChanged.Fody for model-to-viewmodel communication.
Another minor detail – you might want to pull in the Lumia SensorCore NuGet package as well ;-)
A model to hold one location
So I needed a model to know where and when something happened (a TrackPointMonitor TrackPoint), what I was doing there (an ActivityMonitorReading) and an approximate street address (a MapLocationFinder MapLocation). A first analysis shows that an ActivityMonitorReading only holds a Timestamp and an Activity. This Activity describes what I was doing (Idle, Moving, Stationary, Walking or Running). As I already know the time from the TrackPoint, and I only use that Timestamp to get the accompanying Activity, we might as well hold only that Activity, and not the whole ActivityMonitorReading.
So I created the following model class:
using System;
using System.Linq;
using Windows.Devices.Geolocation;
using Windows.Services.Maps;
using Lumia.Sense;
using WpWinNl.MvvmLight;

namespace WhatWhenWhere.Models
{
  public class ActivityPoint : BaseNotifyingModel
  {
    public ActivityPoint()
    {
    }

    public ActivityPoint(TrackPoint p)
    {
      LengthOfStay = p.LengthOfStay;
      Position = p.Position;
      Radius = p.Radius;
      Timestamp = p.Timestamp;
    }

    public ActivityPoint(TimeSpan lengthOfStay, BasicGeoposition position,
      double radius, DateTimeOffset timestamp, Activity activity)
    {
      LengthOfStay = lengthOfStay;
      Position = position;
      Radius = radius;
      Timestamp = timestamp;
      Activity = activity;
    }

    public TimeSpan LengthOfStay { get; set; }
    public BasicGeoposition Position { get; set; }
    public double Radius { get; set; }
    public DateTimeOffset Timestamp { get; set; }
    public Activity Activity { get; set; }
    public MapLocation LocationData { get; set; }

    public void LoadAddress()
    {
      if (LocationData == null)
      {
        MapLocationFinder.FindLocationsAtAsync(new Geopoint(Position))
          .AsTask().ContinueWith(p =>
          {
            var locations = p.Result.Locations;
            if (locations != null)
            {
              LocationData = locations.FirstOrDefault();
            }
          });
      }
    }
  }
}
Which has, unlike the TrackPoint and the ActivityMonitorReading, the added benefit of being serializable on account of having a default constructor (ahem), which comes in handy when storing state when the app is deactivated (note: the demo app does not handle that).
You might notice the LoadAddress method that can be called externally to load the address on demand (instead of just doing that in the constructor). This has been done for performance reasons – geocoding and loading addresses is expensive and slow, and you don’t want to do that if you are not sure whether the user actually wants to see those addresses. So you defer it to the last possible moment – when the user actually selects this object. This technique is described in detail in this article. Notice that article carefully avoids showing how you obtain these locations ;-)
A model to load the data
The bulk of the DataLoaderModel is shamelessly stolen from the samples.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using GalaSoft.MvvmLight.Messaging;
using Lumia.Sense;
using WhatWhenWhere.Models.Messages;
using Tracker = Lumia.Sense.TrackPointMonitor;

namespace WhatWhenWhere.Models
{
  public class DataLoaderModel
  {
    public DataLoaderModel()
    {
    }

    public async Task Init()
    {
      await InitTracker();
      await InitActivityMonitor();
    }

    private Tracker tracker;

    private async Task<bool> InitTracker()
    {
      if (await Tracker.IsSupportedAsync())
      {
        if (tracker == null)
        {
          if (await CallSensorcoreApiAsync(async () =>
              tracker = await Tracker.GetDefaultAsync()))
          {
            return true;
          }
        }
      }
      return false;
    }

    private ActivityMonitor activityMonitor;

    private async Task<bool> InitActivityMonitor()
    {
      if (await ActivityMonitor.IsSupportedAsync())
      {
        if (activityMonitor == null)
        {
          if (await CallSensorcoreApiAsync(async () =>
              activityMonitor = await ActivityMonitor.GetDefaultAsync()))
          {
            return true;
          }
        }
      }
      return false;
    }

    public async Task SetSensorState(bool enable)
    {
      await SetSensorState(tracker, enable);
      await SetSensorState(activityMonitor, enable);
    }

    private async Task SetSensorState(ISensor sensor, bool active)
    {
      if (sensor != null)
      {
        if (!active)
        {
          await CallSensorcoreApiAsync(async () => { await sensor.DeactivateAsync(); });
        }
        else
        {
          await CallSensorcoreApiAsync(async () => { await sensor.ActivateAsync(); });
        }
      }
    }

    private async Task<bool> CallSensorcoreApiAsync(Func<Task> action)
    {
      try
      {
        await action();
      }
      catch (Exception ex)
      {
        Messenger.Default.Send(
          new SenseFailureMessage(SenseHelper.GetSenseError(ex.HResult), this));
        // This is now not handled, but should really be handled as described here
        // (Calling the SensorCore SDK safely)
        return false;
      }
      return true;
    }
  }
}
It uses some clever tricks with lambdas to be able to call SensorCore methods without having to copy & paste the whole try/catch stuff. I basically only added both sensors, and let it send out an error message via the MVVMLight Messenger if things go wrong (note: the app does not handle this, but I think it’s a good pattern).
I did add some code to this stolen-from-sample class, to actually load the data into the model:
public async Task<bool> LoadInitialData()
{
  if (tracker != null && activityMonitor != null)
  {
    var result = await tracker.GetTrackPointsAsync(
      DateTime.Now - TimeSpan.FromDays(10), TimeSpan.FromDays(10));
    if (RoutePoints == null)
    {
      RoutePoints = new List<ActivityPoint>();
    }
    else
    {
      RoutePoints.Clear();
    }
    RoutePoints.AddRange(result.Select(p => new ActivityPoint(p)));
    await LoadActivities();
    return true;
  }
  return false;
}

private async Task LoadActivities()
{
  foreach (var r in RoutePoints)
  {
    try
    {
      var reading = await activityMonitor.GetActivityAtAsync(r.Timestamp);
      r.Activity = reading.Mode;
    }
    catch (Exception)
    {
      // lame – I know.
    }
  }
}

public List<ActivityPoint> RoutePoints { get; set; }
This is the core of the whole app – the rest is fluff to make things visible. LoadInitialData first loads all the TrackPoints from the last 10 days (which is the maximum available time anyway) and converts them into my ActivityPoint model. It then proceeds to find the actual activity that was performed at the time the TrackPoint was recorded. And if you call the ActivityPoint’s LoadAddress method, it will find the (approximate) address of the location where it was recorded – it will even return highway names (it won’t find a house number then, but I don’t think that will surprise anyone).
A view model for a single point
Quite a lot of how this works is already described in my article on using PropertyChanged.Fody. I can directly bind to the model, but for formatting output and commands I like to keep my model clean and employ a viewmodel for that.
using System;
using System.ComponentModel;
using System.Globalization;
using System.Linq;
using System.Windows.Input;
using Windows.Devices.Geolocation;
using Windows.Services.Maps;
using GalaSoft.MvvmLight.Command;
using GalaSoft.MvvmLight.Messaging;
using GalaSoft.MvvmLight.Threading;
using WhatWhenWhereMessages;
using WhatWhenWhere.Models;
using WpWinNl.MvvmLight;

namespace WhatWhenWhere.ViewModels
{
  public class ActivityViewModel : TypedViewModelBase<ActivityPoint>
  {
    public ActivityViewModel()
    {
    }

    public ActivityViewModel(ActivityPoint p) : base(p)
    {
    }

    public Geopath Location
    {
      get { return new Geopath(new[] { Model.Position }); }
      set
      {
        var p = value.Positions.FirstOrDefault();
        if (!Model.Position.Equals(p))
        {
          Model.Position = p;
          RaisePropertyChanged(() => Location);
        }
      }
    }

    public string DateAndTime
    {
      get
      {
        return Model != null
          ? Model.Timestamp.ToString("dd-MM-yyyy HH:mm:ss", CultureInfo.InvariantCulture)
          : string.Empty;
      }
    }

    public string LocationName
    {
      get
      {
        return Model != null && Model.LocationData != null
          ? GetFormattedAddress(Model.LocationData.Address)
          : string.Empty;
      }
    }

    private string GetFormattedAddress(MapAddress a)
    {
      if (a == null) throw new ArgumentNullException("a");
      return string.Format("{0} {1} {2} {3}",
        a.Street, a.StreetNumber, a.Town, a.Country);
    }

    public ICommand SelectCommand
    {
      get
      {
        return new RelayCommand(() =>
        {
          Messenger.Default.Send(new SelectedObjectMessage(this));
          Model.LoadAddress();
        });
      }
    }

    public ICommand DeSelectCommand
    {
      get
      {
        return new RelayCommand(
          () => Messenger.Default.Send(new SelectedObjectMessage(null)));
      }
    }

    protected override void ModelPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
      if (e.PropertyName == "LocationData")
      {
        DispatcherHelper.CheckBeginInvokeOnUI(() =>
          RaisePropertyChanged(() => LocationName));
      }
    }
  }
}
There are a few things that might attract your attention:
- The Location in this viewmodel is a Geopath of only one position. This is because I want to re-use my MapBinding assembly that I originally created for Windows Phone 8, and recently ported to Windows Phone 8.1.
- The SelectCommand explicitly launches the loading of the address when the viewmodel is selected. In my previous sample I showed a way to do this in the getter of a property, but this is a better way I think (as I already said then)
- The ModelPropertyChanged method launches a RaisePropertyChanged using MVVMLight’s DispatcherHelper. While this is very useful, it requires the DispatcherHelper to be initialized in the App.Xaml.cs. We will get to that later.
Bringing it all together
The MainViewModel is always my ‘class that brings it all together’. I won’t show all the details here, or this article will be even longer than it already is. I start with some initialization stuff:
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using System.Windows.Input;
using Windows.Devices.Geolocation;
using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Command;
using GalaSoft.MvvmLight.Messaging;
using WhatWhenWhereMessages;
using WhatWhenWhere.Models;
using WpWinNl.Messages;
using WpWinNl.Utilities;

namespace WhatWhenWhere.ViewModels
{
  public class MainViewModel : ViewModelBase
  {
    public async Task Start()
    {
      if (Activities == null)
      {
        Activities = new ObservableCollection<ActivityViewModel>();
      }
      if (Route == null)
      {
        Route = new ObservableCollection<RouteViewModel>();
      }
      Messenger.Default.Register<WindowVisibilityMessage>(this,
        async m => { await ProcessWindowVisibilityMessage(m); });
      Messenger.Default.Register<SelectedObjectMessage>(this,
        ProcessSelectedObjectMessage);
      await Model.Init();
    }

    private void ProcessSelectedObjectMessage(SelectedObjectMessage message)
    {
      SelectedItem = message.Activity;
    }

    private async Task ProcessWindowVisibilityMessage(WindowVisibilityMessage m)
    {
      if (Model != null)
      {
        if (!IsInDesignMode)
        {
          await Model.SetSensorState(m.Visible);
        }
      }
    }
  }
}
You can also see the viewmodel listens to two messages: one that is fired when an object is selected, and one that is fired when the main window becomes visible (or invisible) – and it sets the sensor state accordingly. The second half of the MainViewModel mostly contains data properties and a command:
public ICommand LoadCommand
{
  get
  {
    return new RelayCommand(async () =>
    {
      await Model.LoadInitialData();
      Model.RoutePoints.ForEach(p => Activities.Add(new ActivityViewModel(p)));
      Route.Clear();
      var route = new RouteViewModel(Activities);
      Route.Add(route);
      ViewArea = GeoboundingBox.TryCompute(route.Path.Positions);
    });
  }
}

public ObservableCollection<RouteViewModel> Route { get; set; }
public ObservableCollection<ActivityViewModel> Activities { get; set; }

private GeoboundingBox viewArea = GeoboundingBox.TryCompute(new[]
{
  new BasicGeoposition { Latitude = -90, Longitude = -90 },
  new BasicGeoposition { Latitude = 90, Longitude = 90 }
});

public GeoboundingBox ViewArea
{
  get { return viewArea; }
  set
  {
    if (viewArea != value)
    {
      viewArea = value;
      RaisePropertyChanged(() => ViewArea);
    }
  }
}

private ActivityViewModel selectedItem;

public ActivityViewModel SelectedItem
{
  get { return selectedItem; }
  set
  {
    if (selectedItem != value)
    {
      selectedItem = value;
      RaisePropertyChanged(() => SelectedItem);
    }
  }
}
Notice the LoadCommand: not only does it load the activities into a viewmodel, but it also makes a new RouteViewModel (for drawing one line between all the points) and a bounding box to make all the points fit into the view. The RouteViewModel itself is a very simple thing that creates a Geopath from all the points of all activities:
using System.Collections.Generic;
using System.Linq;
using Windows.Devices.Geolocation;
using GalaSoft.MvvmLight;

namespace WhatWhenWhere.ViewModels
{
  public class RouteViewModel : ViewModelBase
  {
    public RouteViewModel()
    {
    }

    public RouteViewModel(IEnumerable<ActivityViewModel> activities)
    {
      Path = new Geopath(activities.Select(p => p.Location.Positions.First()));
    }

    private Geopath geoPath;

    public Geopath Path
    {
      get { return geoPath; }
      set
      {
        if (geoPath != value)
        {
          geoPath = value;
          RaisePropertyChanged(() => Path);
        }
      }
    }
  }
}
And a wee bit of XAML to glue it all together
Being a lazy ****** and not wanting to think of something to draw all the stuff on a map, I reused both my Map Drawing behavior and the trick to show a popup, and came out with pretty little XAML indeed:
<Page.BottomAppBar>
  <CommandBar>
    <AppBarButton Icon="Accept" Label="Load" Command="{Binding LoadCommand, Mode=OneWay}"/>
  </CommandBar>
</Page.BottomAppBar>
<!-- stuff snipped -->
<Grid Grid.Row="1" x:Name="ContentRoot" Margin="19,9.5,19,0">
  <interactivity:Interaction.Behaviors>
    <behaviors:SizeListenerBehavior x:Name="ContentRootListener"/>
  </interactivity:Interaction.Behaviors>
  <Button Content="Button" HorizontalAlignment="Left" Margin="246,208,0,0"
          VerticalAlignment="Top" Command="{Binding LoadCommand}"/>
  <Maps:MapControl maps:MapBindingHelpers.MapViewArea="{Binding ViewArea}">
    <interactivity:Interaction.Behaviors>
      <maps:MapShapeDrawBehavior LayerName="Locations"
                                 ItemsSource="{Binding Activities}"
                                 PathPropertyName="Location">
        <maps:MapShapeDrawBehavior.EventToCommandMappers>
          <maps:EventToCommandMapper EventName="MapTapped" CommandName="SelectCommand"/>
        </maps:MapShapeDrawBehavior.EventToCommandMappers>
        <maps:MapShapeDrawBehavior.ShapeDrawer>
          <maps1:MapActivityDrawer/>
        </maps:MapShapeDrawBehavior.ShapeDrawer>
      </maps:MapShapeDrawBehavior>
      <maps:MapShapeDrawBehavior LayerName="Route"
                                 ItemsSource="{Binding Route}"
                                 PathPropertyName="Path">
        <maps:MapShapeDrawBehavior.ShapeDrawer>
          <maps:MapPolylineDrawer Color="Green" Width="3" StrokeDashed="True"/>
        </maps:MapShapeDrawBehavior.ShapeDrawer>
      </maps:MapShapeDrawBehavior>
    </interactivity:Interaction.Behaviors>
  </Maps:MapControl>
  <Grid DataContext="{Binding SelectedItem}"
        Height="{Binding WatchedObjectHeight, ElementName=ContentRootListener,
                 Converter={StaticResource PercentageConverter}, ConverterParameter=30}"
        VerticalAlignment="Bottom" Background="Black">
    <interactivity:Interaction.Behaviors>
      <behaviors:UnfoldBehavior RenderTransformY="1" Direction="Vertical"
                                Activated="{Binding Converter={StaticResource NullToBooleanConverter}}"/>
    </interactivity:Interaction.Behaviors>
    <userControls:RouteItemPopup/>
  </Grid>
</Grid>
</Grid>
If you want more info on how these behaviors for binding elements to a MapControl work, I suggest you look at the original article explaining how they should be used. If the user taps a map symbol, a popup with activity data is shown, and the address where that activity happened is loaded. Basically the same trick as I used here.
Start it up
It’s important to initialize the DispatcherHelper, start a new MainViewModel, and set up the WindowVisibilityMessage. I have talked about this before, but I thought it wise to repeat it one more time.
protected async override void OnLaunched(LaunchActivatedEventArgs e)
{
  DispatcherHelper.Initialize();
  MainViewModel.CreateNew();
  await MainViewModel.Instance.Start();
  WindowVisibilityMessage.Setup();
  // more
}

Notice that OnLaunched needs to be async, because of the async nature of the Start method.
…and a class to draw a different image for every activity
using System;
using System.Linq;
using Windows.Devices.Geolocation;
using Windows.Foundation;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Controls.Maps;
using Lumia.Sense;
using WhatWhenWhere.ViewModels;
using WpWinNl.Maps;

namespace WhatWhenWhere.Maps
{
  public class MapActivityDrawer : MapShapeDrawer
  {
    public override MapElement CreateShape(object viewModel, Geopath path)
    {
      var activityModel = ((ActivityViewModel)viewModel).Model;
      return new MapIcon
      {
        Location = new Geopoint(path.Positions.First()),
        Title = activityModel.Activity.ToString(),
        NormalizedAnchorPoint = new Point(0.5, 1),
        Image = GetImage(activityModel.Activity)
      };
    }

    private static RandomAccessStreamReference GetImage(Activity activity)
    {
      var image = "Other";
      switch (activity)
      {
        case Activity.Idle:
          image = "Idle";
          break;
        case Activity.Walking:
          image = "Walking";
          break;
        case Activity.Moving:
          image = "Moving";
          break;
        case Activity.Stationary:
          image = "Stationary";
          break;
      }
      return RandomAccessStreamReference.CreateFromUri(
        new Uri(string.Concat("ms-appx:///Images/", image, ".png")));
    }
  }
}
Basically, this translates every bound object back to an ActivityViewModel, and draws a different image for it. This can be made more efficient, but it shows a way it could work.
Conclusion: the good and the bad parts
As you have seen, using SensorCore is actually pretty easy. So is making it work together with MVVMLight, although I am very well aware that all the stuff I used around it may make it look a bit more convoluted than it actually is. The DataLoaderModel is all you have to understand. The rest is just the app around it.
The bad parts of SensorCore are pretty simple:
- It’s still in beta
- It requires a Windows Phone 8.1 phone with a Lumia Cyan update and the required hardware aboard. Currently, to the best of my knowledge, only the Lumia 630 has both the required hardware and software, although I assume the 930 will have it too.
- The TrackPointMonitor only gives a position every 5 minutes – at the very best. Sometimes it misses points. So forget about a super-duper-detailed tracking of your location.
Another bad part – but that is more my fault – is that I have not included a way to run this on simulated data. A NuGet package allows you to test against fake or prerecorded data, so you can try your app without having access to an actual device that has the sensors on board. I made this using a real live Lumia 630, and to get this to work properly you will need one too. The good part is: they are dirt cheap.
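For those who do want to try the simulation route: the testing support lives in a separate Lumia.Sense.Testing package. A sketch of how it could be wired into something like the DataLoaderModel, assuming the SenseRecording and TrackPointMonitorSimulator types from the beta package (names and signatures from memory, and the recording file name is made up):

```csharp
// Hedged sketch – SenseRecording and TrackPointMonitorSimulator come from the
// Lumia.Sense.Testing NuGet package; names/signatures are from the beta SDK
// and may differ. "Simulations/trackpoints.txt" is a hypothetical file name.
using System.Threading.Tasks;
using Lumia.Sense;
using Lumia.Sense.Testing;

public static class SimulatedTrackerFactory
{
  public static async Task<TrackPointMonitorSimulator> GetSimulatedTrackerAsync()
  {
    // Load a prerecorded data file shipped with the app...
    var recording = await SenseRecording.LoadFromFileAsync("Simulations/trackpoints.txt");
    // ...and get a simulator that replays it. The simulator exposes the same
    // GetTrackPointsAsync surface as the real TrackPointMonitor, so it could be
    // swapped in where the DataLoaderModel calls Tracker.GetDefaultAsync().
    return await TrackPointMonitorSimulator.GetDefaultAsync(recording);
  }
}
```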
The good parts are a lot better:
- Very simple and easy-to-use API.
- Possibility to do geographical tracking and analysis after-the-fact
- All kinds of fun apps possible, like fitness apps, life logging, and stuff like that
- Very low power consumption (unlike ‘active’ GPS enabled apps)
- Privacy is secured – no big brother watching over your shoulder. The data is yours and yours alone.
So. A new class of apps is enabled by this new SDK. I hope I have inspired you to take it for a spin. If you have read all the way to this point, I am very impressed by your perseverance ;-)
Demo solution, as (nearly) always, can be found here.
Edit 26-06-2014: made some minor updates after suggestions from my Finland fellow MVP Jani Nevalainen.