Gesture Detection On-Loading for Next Generation Sensor Subsystems (GDO-NGS2)

User experience is heavily affected by a smartphone’s ability to recognize and react to gestures, e.g. zoom-in and zoom-out gestures on a touch screen. User-smartphone interactions are steadily growing in number and diversity. To accommodate common usage scenarios, Android 5.1 now specifies a new interface for glance and pick-up gestures, which allows important information to be displayed with minimal required user attention. Such gestures will most likely be detected using inertial MEMS sensors.

Figure 1: System using Wakeup App running on the APU

State-of-the-art gesture detection on smartphones is done with inertial MEMS sensors that detect basic activity and trigger an analysis on the application processor (APU) in order to recognize higher-level gestures. However, this analysis is often to no avail, as most smartphone activity does not relate to any gesture. Especially when the processor is woken from deep-sleep mode, a significant reduction in battery life is observable. Thus, the main disadvantage of today’s smartphone gesture detection is the overall energy consumption caused by the activation of the application processor. This situation will be even more dramatic in wearables, where battery capacity is more restricted. In this TTP, we improve the overall energy efficiency by on-loading gesture detection to the sensor subsystem. That way, the application processor will only be triggered if an activation gesture is identified.

The objective of the GDO-NGS2 TTP was to improve the overall energy efficiency by performing the gesture recognition within the sensor subsystem. Hence, the sensor subsystem will provide virtual wake-up, pick-up, and glance sensors to the application processor of the smartphone or wearable. In turn, the application processor subsystem will only be triggered by an interrupt when such an event occurs. As a result, gesture detection is off-loaded from the APU onto a microprocessor close to the sensor system. Starting from existing gesture recognition algorithms developed at the University of Rostock, Bosch Sensortec and the University of Rostock studied the potential of gesture detection on-loading for next-generation sensor subsystems in smartphones and wearables.

Figure 2: System using Wakeup IRQ and sensor hub data processing
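The split shown in Figure 2 can be pictured with a small host-side sketch. The following C++ fragment is our own illustration, not Bosch Sensortec firmware: the SensorHub class is a hypothetical name, and the callback stands in for the hardware interrupt line.

    #include <cstdio>
    #include <functional>
    #include <utility>

    // Host-side model of the on-loaded architecture: the hub consumes raw
    // accelerometer samples and signals the APU (here: a callback standing
    // in for the interrupt line) only when a gesture candidate is found.
    class SensorHub {
    public:
        using IrqHandler = std::function<void()>;
        explicit SensorHub(IrqHandler on_wakeup) : on_wakeup_(std::move(on_wakeup)) {}

        // Called at the sensor's output data rate, e.g. 50 Hz; axes in LSB
        // at an assumed scale of 1000 LSB/g.
        void feed(int ax, int ay, int az) {
            if (detect_wakeup(ax, ay, az))
                on_wakeup_();  // the APU leaves deep sleep only here
        }

    private:
        // Stand-in for the actual classifier (see the next sketch).
        bool detect_wakeup(int ax, int ay, int az) {
            long m2 = (long)ax * ax + (long)ay * ay + (long)az * az;
            return m2 > 2 * 1000L * 1000L;  // magnitude above roughly 1.4 g
        }
        IrqHandler on_wakeup_;
    };

    int main() {
        SensorHub hub([] { std::puts("IRQ: wake-up gesture -> APU resumes"); });
        hub.feed(0, 0, 1000);     // device at rest: no interrupt, APU sleeps on
        hub.feed(900, 900, 900);  // vigorous motion: interrupt fired
    }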

While on-chip coprocessors are already common in smartphones, distributing recognition tasks to off-chip processing units such as microprocessors is still a relatively new approach. Recognition of elementary gestures can be achieved efficiently with a static classifier, which uses highly specific knowledge of the signal characteristics of each gesture. As this approach is largely control-flow dominant, it is well suited for execution on power-efficient microcontrollers. More and more smartphones come equipped with sensor hubs, which are used for diverse preprocessing tasks. Implementing identical gesture recognition systems on both the APU (as in common Android apps) and the sensor subsystem (the sensor hub) enabled a fair and convincing comparison between APU-based and microcontroller-based gesture recognition for Android wake-up gestures.
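A static classifier of this kind can be pictured as a small state machine. The two-phase structure and all thresholds below are illustrative assumptions on our part, not the algorithms developed at the University of Rostock:

    #include <cstdlib>

    // Illustrative two-phase pick-up detector: first the device must leave a
    // roughly flat, face-up pose; then a tilt towards the user must follow
    // within a short window. Pure integer control flow, no signal buffering.
    struct PickupDetector {
        enum class State { Flat, Lifting } state = State::Flat;
        int window = 0;  // samples left to complete the gesture

        // Axes in LSB at an assumed 1000 LSB/g; returns true on detection.
        bool step(int ax, int ay, int az) {
            switch (state) {
            case State::Flat:
                // Phase 1: gravity leaves the z axis, i.e. the flat pose ends.
                if (std::abs(az) < 800 && std::abs(ax) + std::abs(ay) > 400) {
                    state  = State::Lifting;
                    window = 50;  // about one second at a 50 Hz data rate
                }
                break;
            case State::Lifting:
                // Phase 2: tilted pose with the screen facing the user.
                if (ay > 600 && std::abs(az) < 500) {
                    state = State::Flat;  // re-arm for the next gesture
                    return true;
                }
                if (--window == 0) state = State::Flat;  // timed out
                break;
            }
            return false;
        }
    };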

Figure 3: Average power consumption of the smartphone over various gesture frequencies, with APU-based and sensor hub gesture recognition.

In the TTP, the wake-up, pick-up, and glance gestures have been implemented in software for novel Bosch Sensortec sensor subsystems equipped with a microcontroller. One of the key challenges was that other firmware components running on the sensor subsystem must not be affected by the new functionality. Implementing robust gesture detection in real time is further complicated by the limited resources of the microcontroller. Additionally, the energy efficiency of the gesture detection within the sensor subsystem has been optimized. The required adaptations of the existing gesture recognition algorithms demanded a quality assessment in terms of recognition performance and recognition time. Tuning these parameters has been essential for robust gesture detection and, as a consequence, for user experience and acceptance. As a result, the prototype described above has been developed and used to assess the energy efficiency.
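One typical adaptation to such resource limits, given here as our own sketch rather than the actual firmware, is to compare squared magnitudes against squared thresholds, which removes both the square root and any need for a floating-point unit:

    #include <cstdint>

    // Assumes a +/-2 g range at roughly 1000 LSB/g, so all sums fit in 32 bits.
    static inline bool magnitude_exceeds(int16_t ax, int16_t ay, int16_t az,
                                         int32_t threshold_lsb) {
        int32_t m2 = (int32_t)ax * ax + (int32_t)ay * ay + (int32_t)az * az;
        return m2 > threshold_lsb * threshold_lsb;
    }

    // Example: flag a wake-up candidate above roughly 1.4 g (1414 LSB).
    // bool candidate = magnitude_exceeds(ax, ay, az, 1414);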

For an average time between gestures equal to or greater than 60 seconds, significant power savings can be achieved with the sensor hub approach. The effect increases with the period between gestures. This is expected, as fewer gestures mean more time for the APU to sleep. However, the power consumption of the recognition running on the APU is also affected by the frequency of gestures: this is caused by the display, which turns on after each recognized gesture. The worst case occurs with APU sleep times of less than one second. More realistic, though, is a smartphone usage pattern with wake-up gestures every five minutes (300 s) during the day.
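The shape of the curves in Figure 3 can be made plausible with a simple first-order model. The model and its symbols are our own illustration, not the TTP's measurement setup:

    P_APU ≈ P_sleep + r_act * (E_wake + E_rec) + r_g * E_disp
    P_hub ≈ P_sleep + P_class + r_g * (E_wake + E_disp)

Here r_g = 1/T is the gesture rate for an average time T between gestures, r_act the (much higher) rate of basic-activity triggers, E_wake the energy to bring the APU out of deep sleep, E_rec the energy of one recognition run, E_disp the display-on energy per recognized gesture, and P_class the small continuous power of the classifier on the hub. Since r_act is far larger than r_g during normal handling, the APU-based curve carries a large constant penalty, whereas the hub-based curve approaches the sleep floor as T grows.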

With these results, architectural improvements for next-generation sensor subsystems have been identified within the TTP. Finally, a first executable specification has been implemented as a virtual prototype written in SystemC. This executable specification will serve Bosch Sensortec as a starting point for developing new MEMS sensor subsystem products with improved gesture support. This novel technology is expected to enable Bosch Sensortec’s customers to add their own custom gestures to these new sensor subsystems and, hence, to build customized, very low-energy solutions.
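To give a flavor of such an executable specification, the following is a minimal SystemC sketch of a sensor-hub module that raises a wake-up interrupt; the module and signal names are invented for illustration and do not reflect the actual Bosch Sensortec specification:

    #include <systemc.h>

    // Toy virtual-prototype module: on every clock edge, a pre-scaled
    // squared acceleration magnitude is compared against a threshold and
    // the wake-up interrupt line is asserted for one cycle on a hit.
    SC_MODULE(GestureHub) {
        sc_in<bool>  clk;
        sc_in<int>   accel_mag;    // squared magnitude from a sensor model
        sc_out<bool> wakeup_irq;   // virtual wake-up sensor interrupt

        void process() { wakeup_irq.write(accel_mag.read() > 2000000); }

        SC_CTOR(GestureHub) {
            SC_METHOD(process);
            sensitive << clk.pos();
        }
    };

    int sc_main(int, char*[]) {
        sc_clock        clk("clk", 10, SC_NS);
        sc_signal<int>  mag("mag");
        sc_signal<bool> irq("irq");
        GestureHub hub("hub");
        hub.clk(clk);
        hub.accel_mag(mag);
        hub.wakeup_irq(irq);
        mag.write(2430000);        // motion burst: IRQ goes high next cycle
        sc_start(50, SC_NS);
        return 0;
    }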

Downloads:
Poster (PDF)
Abstract (PDF)