04 September 2017

Activity recognition.

Most business apps are focused on CRUD-like operations: they show, save or update data that comes from an external source. Even when they use a native UI and UX on every platform, these types of apps don’t take advantage of the device capabilities at all, which in many cases leads to a poor rating of the app.

But this isn’t limited to business apps: apps that use sensors or do a lot of background work often abuse the battery without paying attention to what the user is actually doing.

Since recent versions of iOS (7.0+) and Android (API Level 8+), our devices are constantly using sensor information to know what type of activity the user is performing, so why not act accordingly? You can take advantage of this information and craft smarter applications.

The user, even without knowing it, is telling you at all times what he or she is doing. You just have to listen.

For example, if the device is completely still, charging, connected to Wi-Fi and the GPS has lost accuracy, it’s easy to assume that the user is indoors in a known place or, simply, at home.

There are many actions that we can deduce from the different sensors that the device has. In this article, we will focus on the activity recognition features that Android and iOS offer, so an app can improve usability and performance. There are a lot of cases where you can apply this, but we will focus on three:

  • Running app.
  • Driving tracker app.
  • Background task app.

 

Let’s start by creating an app with activity recognition. We create a Xamarin.Forms project for iOS and Android, and we will use a Master-Detail page to simulate that each option is the main screen of its own app.

Once we have created the two options with an empty screen and the menu, let’s create the recognition service. First of all, every platform has its own API, objects and definitions, so we need to create some shared artifacts to translate platform-specific code into shared core code. We will start with an enum that defines the different types of activity. In the same way, we need an ActivityRecognized class that will be our common model between platforms, and a class for the arguments of the event that we will raise. In our case:

    // Platform-agnostic activity types shared by both implementations.
    public enum ActivityTypes
    {
        Stopped = 0,
        Walking = 1,
        Running = 2,
        OnBicycle = 3,
        OnVehicle = 4
    }

    // Common model: the detected activity and how confident the platform is about the detection.
    public class ActivityRecognized
    {
        public ActivityTypes ActivityType { get; set; }
        public int Confidence { get; set; }
    }

    // Event arguments used to notify subscribers when the detected activity changes.
    public class ActivityChangedEventArgs : EventArgs
    {
        public ActivityChangedEventArgs(ActivityRecognized activity)
        {
            Activity = activity;
        }

        public ActivityRecognized Activity { get; set; }
    }

Once we have the necessary model, we create the interface of our service:

    public interface IRecognitionActivityService
    {
        // Raised every time the platform reports a new detected activity.
        event EventHandler<ActivityChangedEventArgs> ActivityChanged;

        // The most recent activity reported by the platform.
        ActivityRecognized LastActivity { get; }

        void StartService();
        void StopService();
    }

And once we have all this, we start with the implementation of the service on every platform.

For Android, we need to install the Xamarin.GooglePlayServices.Location NuGet package, which gives us access to the Google Play Services APIs. We also need to request the activity recognition permission in the manifest file.
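For reference, a minimal way to request it from code instead of editing AndroidManifest.xml by hand could be an assembly-level attribute in the Android project (a sketch, assuming the standard Google Play Services permission name; the UsesPermission attribute lives in the Android.App namespace):

    // Assumption: this ends up as the <uses-permission> entry in the generated AndroidManifest.xml.
    [assembly: UsesPermission(Name = "com.google.android.gms.permission.ACTIVITY_RECOGNITION")]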

Now we can request activity recognition from Google Play Services. For this, we must remember that our service has to inherit from Java.Lang.Object, because that is the type the Android API expects.

To connect to Google Play Services we must set the callbacks that receive the response. If you want to implement them in the same class, you have to implement two additional interfaces. In the end, it would look like this:

    public class RecognitionActivityService : Java.Lang.Object,
        GoogleApiClient.IConnectionCallbacks,
        GoogleApiClient.IOnConnectionFailedListener,
        IRecognitionActivityService
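To give an idea of what happens inside, here is a minimal, non-definitive sketch, assuming the classic GoogleApiClient-based API from Xamarin.GooglePlayServices.Location; the detection interval and the PendingIntent (which points to an IntentService that extracts the result) are assumptions of this sketch:

        // Fragment of the Android RecognitionActivityService, not the full class.
        // Assumed namespaces: Android.App, Android.Gms.Common.Apis, Android.Gms.Location, Android.OS.
        const long DetectionIntervalMs = 10000;   // assumption: ask for updates every 10 seconds
        GoogleApiClient googleApiClient;

        public void StartService()
        {
            // Build the client against the Activity Recognition API and connect.
            // OnConnected / OnConnectionFailed are the callbacks declared above.
            googleApiClient = new GoogleApiClient.Builder(Application.Context)
                .AddApi(ActivityRecognition.API)
                .AddConnectionCallbacks(this)
                .AddOnConnectionFailedListener(this)
                .Build();
            googleApiClient.Connect();
        }

        public void OnConnected(Bundle connectionHint)
        {
            // activityPendingIntent (an assumption of this sketch) targets an IntentService
            // that reads ActivityRecognitionResult.ExtractResult(intent), maps the most
            // probable activity to our shared ActivityTypes enum and raises ActivityChanged.
            ActivityRecognition.ActivityRecognitionApi.RequestActivityUpdates(
                googleApiClient, DetectionIntervalMs, activityPendingIntent);
        }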

If you are looking for the full implementation of the service, you will find it in the sample on GitHub.

In iOS, we have to create a CMMotionActivityManager, which will be the manager used to start or stop the recognition service. In the same way as on Android, you can find the implemented service in the sample on GitHub.
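As a minimal sketch, assuming the CoreMotion API exposed by Xamarin.iOS (the mapping of the CMMotionActivity flags to our shared enum is an assumption of this sketch; the real code is in the repository), starting the updates could look like this:

        // Fragment of the iOS RecognitionActivityService, not the full class.
        // Assumed namespaces: CoreMotion, Foundation.
        readonly CMMotionActivityManager motionActivityManager = new CMMotionActivityManager();

        public void StartService()
        {
            motionActivityManager.StartActivityUpdates(NSOperationQueue.MainQueue, activity =>
            {
                // Map the CoreMotion flags to our shared ActivityTypes enum.
                var recognized = new ActivityRecognized
                {
                    ActivityType = activity.Running ? ActivityTypes.Running
                                 : activity.Walking ? ActivityTypes.Walking
                                 : activity.Cycling ? ActivityTypes.OnBicycle
                                 : activity.Automotive ? ActivityTypes.OnVehicle
                                 : ActivityTypes.Stopped,
                    // CoreMotion reports Low/Medium/High instead of a percentage.
                    Confidence = (int)activity.Confidence
                };

                LastActivity = recognized;
                ActivityChanged?.Invoke(this, new ActivityChangedEventArgs(recognized));
            });
        }

        public void StopService() => motionActivityManager.StopActivityUpdates();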

As you will see, we have created an event to which we pass the recognized activity as a parameter. We will subscribe to that event from our ViewModel, so we can send it to the View through the PropertyChanged mechanism. We have done it this way to make the thread of execution easy to follow, but there are better ways to do it: for example, we could use the Reactive pattern or the MessagingCenter of Xamarin.Forms.
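A minimal sketch of that subscription in the ViewModel could be (the class name and the constructor injection are assumptions of this sketch):

    // Assumed namespace: System.ComponentModel.
    public class RecognitionViewModel : INotifyPropertyChanged
    {
        readonly IRecognitionActivityService recognitionService;
        ActivityRecognized activity;

        public event PropertyChangedEventHandler PropertyChanged;

        public RecognitionViewModel(IRecognitionActivityService recognitionService)
        {
            this.recognitionService = recognitionService;
            // Every time the service reports a new activity, expose it to the View.
            this.recognitionService.ActivityChanged += (sender, e) => Activity = e.Activity;
            this.recognitionService.StartService();
        }

        // The View subscribes to PropertyChanged and reacts to this property (see below).
        public ActivityRecognized Activity
        {
            get { return activity; }
            set
            {
                activity = value;
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Activity)));
            }
        }
    }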

Once we have the recognition activity created, we can begin working with the data we get from it.

“Running App” sample.

Let’s start with the simplest of the three: the running feature. There are a lot of applications that track our position and speed while we are running. One of the features that we can add to our app is detecting when the user is running or on a bicycle, to make the start and stop buttons bigger and to lower the brightness of the screen. We can even change which data is displayed when we are walking and when we are running.

This lowers battery consumption while at the same time making our UI more accessible. This is especially useful when the user is wearing a running or bicycle phone holder.

To do something like showing a simpler screen while the user is exercising, we will make a white screen with six buttons, to simulate the different options of any tracker application.

Obviously, in this sample, none of these buttons has a real use, but we write the texts "Start" and "Stop" on two of them. These are the two buttons that we will make bigger when the user is exercising.

    <ScrollView>
        <StackLayout x:Name="MainSlack">
            <Button x:Name="StartButton" Text="Start" />
            <Button x:Name="StopButton" Text="Stop" />
            <Button x:Name="Button1" Text="Option1" />
            <Button x:Name="Button2" Text="Option2" />
            <Button x:Name="Button3" Text="Option3" />
            <Button x:Name="Button4" Text="Option4" />
        </StackLayout>
    </ScrollView>

In the code behind of our View, we need to subscribe to the PropertyChanged event exposed by our ViewModel and call our ExecuteAnimation method. That method will make the buttons bigger or smaller depending on which activity it receives.

        private void ViewModel_PropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            switch (e.PropertyName)
            {
                case nameof(ViewModel.Activity):
                    Title = ViewModel.Activity.ActivityType.ToString();
                    if (ViewModel.Activity.ActivityType == ActivityTypes.Running || ViewModel.Activity.ActivityType == ActivityTypes.Walking)
                        ExecuteAnimation(true);
                    else
                        ExecuteAnimation(false);
                    break;
            }
        }

And this is what the ExecuteAnimation method looks like:

        private void ExecuteAnimation(bool makeButtonBigger)
        {
            if (makeButtonBigger && !isAnimationRealized)
            {
                Button1.IsVisible = false;
                Button2.IsVisible = false;
                Button3.IsVisible = false;
                Button4.IsVisible = false;
                Animation animationHeight = new Animation(x => { this.StartButton.HeightRequest = x; this.StopButton.HeightRequest = x; }, 0, this.Height / 2);
                Animation animationWidth = new Animation(x => { this.StartButton.WidthRequest = x; this.StopButton.WidthRequest = x; }, 0, this.Width / 2);
                animationWidth.Commit(this, nameof(animationWidth));
                animationHeight.Commit(this, nameof(animationHeight));
                isAnimationRealized = true;
            }
            else if (!makeButtonBigger && isAnimationRealized)
            {
                Button1.IsVisible = true;
                Button2.IsVisible = true;
                Button3.IsVisible = true;
                Button4.IsVisible = true;
                Animation animationHeight = new Animation(x => { this.StartButton.HeightRequest = x; this.StopButton.HeightRequest = x; }, this.Height / 2, 0);
                Animation animationWidth = new Animation(x => { this.StartButton.WidthRequest = x; this.StopButton.WidthRequest = x; }, this.Width / 2, 0);
                animationWidth.Commit(this, nameof(animationWidth));
                animationHeight.Commit(this, nameof(animationHeight));
                isAnimationRealized = false;
            }
        }

Now our application is easier to use and more accessible to our users, especially when the user can’t see the device well, for example in a phone holder, as we mentioned before.

“Driving App” sample.

There are some situations in which the application will be used while the user is driving, even though it should not be. For example, Pokémon Go, or even map applications like Google Maps or HERE Maps.

Interacting with these applications can endanger the user if he is driving at the same time, so detecting when the user is in a vehicle is very important: it allows you to offer a safe driving mode in your app.

On these occasions, it’s your decision whether you want to block the screen, ask the user whether he is driving or is a passenger, or, at the very least, launch an alert to warn the user about the dangers of using the smartphone while driving.

For any of these options, we must send the activity event to our view, using the same way as in the previous example:

        private void ViewModel_PropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            switch (e.PropertyName)
            {
                case nameof(ViewModel.Activity):
                    Title = ViewModel.Activity.ActivityType.ToString();
                    if (ViewModel.Activity.ActivityType == ActivityTypes.OnVehicle)
                        ThrowAlert();
                    break;
            }
        }

In the ThrowAlert method we launch whatever alert we consider necessary.
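For example, a minimal ThrowAlert in the page’s code behind could rely on the DisplayAlert method that every Xamarin.Forms Page exposes (the texts are just an assumption of this sketch):

        private async void ThrowAlert()
        {
            // DisplayAlert is available on any Xamarin.Forms Page.
            await DisplayAlert(
                "Driving detected",
                "Please don't use the application while you are driving.",
                "OK");
        }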

“Background App” sample.

Complex apps that use background tasks can have a huge impact on system performance and can drain the battery very fast. Adapting the behaviour of our application is a must if we want to receive good feedback from our users, and we can do it if we can predict the user’s actions. Let’s see a few examples:

  • If we are doing a large download (or upload) of files, we can use activity recognition for better performance management. For example, discovering that the user has started to walk could indicate that he may lose the Wi-Fi connection because he is about to leave his house. This will not always be the case; however, even if he doesn’t lose connectivity, he may have low signal quality because of the movement. A common scenario could be to limit the number of simultaneous connections: for example, 5 uploads/downloads when the user is stopped, 2 when he is walking, or 1 in any other case (see the sketch after this list).
  • Another great behaviour that we can think about is reducing the impact of background work. If the device has been still for several hours during the night, we can assume that the user may be sleeping. We can take advantage of this to increase or decrease the impact of the application on the performance of the device in the background, depending on the app’s needs. For example, we can increase data processing or synchronization with cloud services while no one is using the device.
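As a minimal sketch of the first idea, a hypothetical helper could decide the number of simultaneous transfers from the last recognized activity (the method name and the limits are assumptions of this sketch):

        private int GetMaxSimultaneousTransfers(ActivityRecognized lastActivity)
        {
            // Default to the most conservative limit when we know nothing yet.
            if (lastActivity == null)
                return 1;

            switch (lastActivity.ActivityType)
            {
                case ActivityTypes.Stopped:
                    return 5;   // stable connection expected
                case ActivityTypes.Walking:
                    return 2;   // the user may be leaving Wi-Fi range
                default:
                    return 1;   // running, cycling or in a vehicle
            }
        }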

Conclusion

There are many ways to improve the behaviour and performance of our applications. This time, we have focused on activity recognition, but there are a lot of sensors that can give us data about the user’s activity: activity recognition, inclinometer, speedometer, geofences, GPS, gyroscope, compass...

There are many occasions when this data can indicate the best way for our applications to behave, especially in the background. For example:

  • If the user is walking and the device has an inclination between 40 and 80 degrees, we can assume that the user is using the phone with another app, and we should reduce the impact of ours so that we don’t slow down the device.
  • If we have an application on a business device in a factory or an office, and the device leaves that geofence, it could indicate that somebody is stealing it or that an emergency has occurred. In both cases, we should send an alert to a server and activate the GPS.

And many more cases...

A smartphone app, unlike web applications, has direct and simple access to these sensors. You can take advantage of them to create really smart applications.

You can download the full source code of this sample at our GitHub page!
