Project Details

A Generalised Approach to Learning Models of Human Behaviour for Activity Recognition from Textual Instructions

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2016 to 2019
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 314457946
 
Final Report Year 2019

Final Report Abstract

Computational models for activity recognition aim to recognise a user's actions and goals based on precondition-effect rules. One problem such approaches face is how to obtain the model structure. To reduce the need for domain experts or sensor data during model building, methods for learning models of human behaviour from textual data have been investigated. Existing approaches, however, make various simplifying assumptions during the learning process, which renders the resulting models inapplicable to activity recognition problems.

To address this problem, this project aimed at developing a generalised methodology for learning the model structure from textual instructions. The methodology combines existing and novel methods for model learning. Given a textual input, it generates a situation model, from which computational state space models (CSSMs) are derived. A situation model is a semantic structure representing the relevant elements discovered in the textual description (actions, objects, locations, properties of objects, and abstractions of objects) and the corresponding causal, spatial, functional, and abstraction relations between these elements. Based on this semantic structure, the methodology then generates precondition-effect rules describing the possible actions that can be executed in the problem domain, the initial state of the problem, and the possible goal states.

The generated CSSMs are used for activity recognition tasks in the domain of daily activities. Because the generated models are relatively general, they often cannot correctly recognise the executed activities, as they admit too many possible options. To address this problem, an optimisation phase follows in which the action weights are adjusted based on existing plan traces.

The generated models are compared to hand-crafted models for the same problem domains. Not surprisingly, the results show that models generated from texts cannot learn implicit common-sense knowledge, that is, the additional knowledge that we as humans add to models in order to include relevant context information or to specialise the model. This shortcoming is resolved to a degree by the optimisation phase, after which the manually built models only slightly outperform the generated models on activity recognition tasks. The generated models are, however, still unable to provide the additional contextual information that humans encode in hand-crafted models. This poses the challenging research question of how to combine multiple heterogeneous sources of information in order to generate rich and accurate computational models for activity recognition.
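To make the pipeline described above more concrete, the following is a minimal, hypothetical sketch of how elements of a situation model might be turned into precondition-effect rules of a CSSM. All names (SituationModel, ActionRule, generate_rules) and the specific predicate format are illustrative assumptions, not the project's actual implementation; the action weight shown here is the quantity that would later be adjusted from plan traces during the optimisation phase.

from __future__ import annotations
from dataclasses import dataclass


@dataclass
class SituationModel:
    """Semantic structure extracted from the textual instructions."""
    actions: list[str]                # e.g. "take", "wash"
    objects: list[str]                # e.g. "cup", "sponge"
    locations: dict[str, str]         # object -> location, e.g. "cup" -> "shelf"
    functional: dict[str, list[str]]  # action -> objects it applies to


@dataclass
class ActionRule:
    """Precondition-effect rule of the computational state space model (CSSM)."""
    name: str
    preconditions: set[str]
    effects: set[str]
    weight: float = 1.0               # adjusted later from existing plan traces


def generate_rules(sm: SituationModel) -> list[ActionRule]:
    """Naively derive one rule per (action, object) pair found in the text."""
    rules: list[ActionRule] = []
    for action, objs in sm.functional.items():
        for obj in objs:
            loc = sm.locations.get(obj, "unknown")
            rules.append(ActionRule(
                name=f"{action}-{obj}",
                preconditions={f"at({obj},{loc})", "free(hand)"},
                effects={f"holding({obj})", f"not at({obj},{loc})"},
            ))
    return rules


if __name__ == "__main__":
    sm = SituationModel(
        actions=["take"],
        objects=["cup"],
        locations={"cup": "shelf"},
        functional={"take": ["cup"]},
    )
    for rule in generate_rules(sm):
        print(rule.name, rule.preconditions, "->", rule.effects)

Run as a script, this prints one rule ("take-cup") with its preconditions and effects; a real system would additionally encode the initial state and goal states and would specialise such overly general rules, which is exactly the gap the report attributes to missing common-sense knowledge.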

