Project Details

Methods for Activity Spotting With On-Body Sensors

Subject Area: Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term: 2009 to 2014
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 101885733

Final Report Year: 2014

Final Report Abstract

Today, commercial wearable devices such as wristbands or belt clips can detect simple, repetitive activities such as walking or running. Extending that capability towards more complex activities (e.g., nutrition, industrial maintenance) is an active research area. This project was devoted to a particularly difficult aspect of this problem: the spotting of subtle actions within a continuous signal stream dominated by non-relevant “NULL” class events. The difficulty stems from the variability of human actions, both of the ones we want to spot and of the non-relevant NULL class ones, which can encompass virtually any conceivable human activity.

To address this problem, the project proceeded in three directions. First, we investigated the value of additional sensing modalities beyond motion sensors. In particular, wrist-worn cameras combined with proximity sensors (as can easily be realized in a smart watch) have proven to be a valuable source of information. In addition, we have shown that a combination of head motion patterns and eye blink analysis, using sensors integrated in Google Glass and a capacitive neck band, can be used to reliably recognize high-level activities. Related to this sensor work, we have shown that a geometric body model derived from body-placed sensors improves recognition results and is particularly beneficial for recognizing fine-grained activities.

Second, we developed new spotting methods based on sequences of “basic motions”. This included novel signal segmentation methods based on a fast polynomial approximation, a concrete approach to identifying characteristic, user- and situation-invariant parts of the motion signal based on physical constraints or abstract “Eigenmotifs”, and new approaches to classifying sequences of such basic motions.

Third, we have shown that decomposing complex high-level activities into simpler actions allows for transfer and sharing between these high-level activities and, in turn, improves recognition performance.

Beyond these three directions, we have shown how to exploit fine-grained location information in the absence of location-providing signals by detecting routinely visited places with a pocket-worn inertial measurement unit. We also looked into multi-modal approaches involving language: we generated natural language descriptions for activity sequences and investigated different approaches and features for assessing the similarity of activities by combining state-of-the-art visual and textual features.

Finally, we developed an abstract characterization of different types of activity spotting problems and a new, more versatile way to evaluate their performance. Several data sets recorded during the project are being released into the public domain for other groups to test their methods on.
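To make the spotting setting concrete, the following is a minimal sliding-window sketch in Python. The z-normalized template matching, the Euclidean distance, and the rejection threshold are illustrative assumptions rather than the project's actual models; the point is that every window is silently assigned to the NULL class unless it matches a known class well.

```python
import numpy as np

def spot_events(stream, templates, win_len, threshold):
    """Slide a window over a 1-D signal; report a detection only when
    the best-matching class template is closer than `threshold`.
    Everything else falls into the NULL class, the dominant case."""
    detections = []
    for start in range(len(stream) - win_len + 1):
        win = stream[start:start + win_len]
        win = (win - win.mean()) / (win.std() + 1e-9)   # z-normalize
        dists = {lbl: np.linalg.norm(win - tpl)
                 for lbl, tpl in templates.items()}
        label, dist = min(dists.items(), key=lambda kv: kv[1])
        if dist < threshold:
            detections.append((start, label, dist))
    return detections
```

Here `templates` would map each target class to one z-normalized prototype window; the project's methods replace this naive matcher with sequences of basic motions, as sketched next.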
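The segmentation direction can be illustrated by a greedy least-squares variant: grow a segment while a low-degree polynomial still fits it, and place a boundary once the fit error exceeds a threshold. This sketch refits from scratch with numpy.polyfit for brevity; the project's online method (Fuchs et al., TPAMI 2010, see the publication list) instead updates the polynomial approximation incrementally, which is what makes it fast enough for continuous streams.

```python
import numpy as np

def greedy_poly_segment(signal, degree=2, max_rmse=0.15, min_len=8):
    """Grow the current segment while a degree-`degree` polynomial fits
    it with RMSE <= max_rmse; otherwise close it and start a new one.
    All parameter values are illustrative defaults."""
    boundaries = [0]
    start, end = 0, 1
    while end <= len(signal):
        if end - start > min_len:
            t = np.arange(end - start, dtype=float)
            seg = signal[start:end]
            coeffs = np.polyfit(t, seg, degree)
            rmse = np.sqrt(np.mean((np.polyval(coeffs, t) - seg) ** 2))
            if rmse > max_rmse:
                start = end - 1      # breakpoint: new segment begins here
                boundaries.append(start)
        end += 1
    boundaries.append(len(signal))
    return boundaries
```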
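The third direction, sharing simpler actions across composite activities, can be sketched as attribute matching: each high-level activity is described by the basic actions it contains, so a new composite class can reuse action detectors trained for other classes. The activities and attribute sets below are hypothetical placeholders, not data from the project.

```python
# Hypothetical composite activities described by shared basic actions.
# "prepare_cocoa" needs no dedicated training data: it reuses the
# boil_water/pour/stir detectors shared with the other classes.
ACTIVITY_ATTRIBUTES = {
    "prepare_tea":    {"boil_water", "pour", "steep", "stir"},
    "prepare_coffee": {"boil_water", "grind", "pour"},
    "prepare_cocoa":  {"boil_water", "pour", "stir"},
}

def classify_composite(detected_actions):
    """Return the composite activity whose attribute set best matches
    the detected basic actions (Jaccard similarity)."""
    detected = set(detected_actions)
    def jaccard(attrs):
        return len(attrs & detected) / len(attrs | detected)
    return max(ACTIVITY_ATTRIBUTES, key=lambda a: jaccard(ACTIVITY_ATTRIBUTES[a]))

print(classify_composite(["boil_water", "pour", "stir"]))  # -> prepare_cocoa
```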

Publications

  • Online segmentation of time series based on polynomial least-squares approximations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2232 – 2245, 2010
    Erich Fuchs, Thiemo Gruber, Jiri Nitschke, and Bernhard Sick
  • Temporal data mining using shape space representations of time series. Neurocomputing, 74(1 – 3):379 – 393, 2010
    Erich Fuchs, Thiemo Gruber, Helmuth Pree, and Bernhard Sick
  • Performance metrics for activity recognition. ACM Transactions on Intelligent Systems and Technology (TIST), 2(1):6, 2011
    Jamie A. Ward, Paul Lukowicz, and Hans W. Gellersen
  • Script data for attribute-based recognition of composite activities. In Proceedings of the European Conference on Computer Vision (ECCV), pages 144 – 157, 2012
    Marcus Rohrbach, Michaela Regneri, Mykhaylo Andriluka, Sikandar Amin, Manfred Pinkal, and Bernt Schiele
  • Translating video content to natural language descriptions. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 433 – 440, 2013
    Marcus Rohrbach, Wei Qiu, Ivan Titov, Stefan Thater, Manfred Pinkal, and Bernt Schiele
  • A tutorial on human activity recognition using body-worn inertial sensors. ACM Computing Surveys, 46(3):33:1 – 33:33, January 2014
    Andreas Bulling, Ulf Blanke, and Bernt Schiele
    (See online at https://doi.org/10.1145/2499621)
  • Dealing with human variability in motion based, wearable activity recognition. In Proceedings of the 2014 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pages 36 – 40, 2014
    Matthias Kreil, Bernhard Sick, and Paul Lukowicz
    (See online at https://doi.org/10.1109/PerComW.2014.6815161)
  • On general purpose time series similarity measures and their use as kernel functions in support vector machines. Information Sciences, 281:478 – 496, 2014
    Helmuth Pree, Benjamin Herwig, Thiemo Gruber, Bernhard Sick, Klaus David, and Paul Lukowicz
    (See online at https://doi.org/10.1016/j.ins.2014.05.025)
  • Sensor placement variations in wearable activity recognition. IEEE Pervasive Computing, 13(4):32 – 41, 2014
    Kai Kunze and Paul Lukowicz
    (See online at https://dx.doi.org/10.1109/MPRV.2014.73)
 
 
