Wearable Eye Tracking for Multisensor Physical Activity Recognition

Authors:
Peter Hevesi
Jamie Ward
Orkhan Amiraslanov
Gerald Pirkl
Paul Lukowicz

Keywords: Wearable sensors; Machine learning; Activity recognition; Feature extraction; Computer vision

Abstract:
This paper explores the use of wearable eye-tracking to detect physical activities and location information during assembly and construction tasks involving small groups of up to four people. Large physical activities, such as carrying heavy items and walking, are analysed alongside more precise, hand-tool activities, such as using a drill or a screwdriver. In a first analysis, gaze-invariant features from the eye-tracker are classified (using Naive Bayes) alongside features obtained from wrist-worn accelerometers and microphones. An evaluation is presented using data from an 8-person dataset containing over 600 physical activity events, performed under real-world (noisy) conditions. Despite the challenges of working with complex, and sometimes unreliable, data, we show that event-based precision and recall of 0.66 and 0.81, respectively, can be achieved by combining all three sensing modalities (using experiment-independent training and temporal smoothing). In a further analysis, we apply state-of-the-art computer vision methods, such as object recognition, scene recognition, and face detection, to generate features from the eye-trackers’ egocentric videos. Activity recognition trained on the output of an object recognition model (e.g., VGG16 trained on ImageNet) could predict Precise activities with an overall average F-measure of 0.45. The location of participants was similarly obtained using visual scene recognition, with average precision and recall of 0.58 and 0.56.
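The following is a minimal sketch, not the authors' code, of the vision-based pipeline the abstract describes: per-frame features are extracted from the egocentric video with a VGG16 model pre-trained on ImageNet, and a classifier is trained on those features. Gaussian Naive Bayes stands in here as a simple classifier, mirroring the one named in the first analysis; the paper's exact setup may differ. The frame file names and activity labels are placeholders.

```python
# Sketch: VGG16 (ImageNet) features per egocentric frame + Naive Bayes classifier.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.naive_bayes import GaussianNB

# VGG16 without its classifier head; global average pooling yields a
# 512-dimensional feature vector per frame.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def frame_features(path):
    """Load one video frame and return its VGG16 feature vector."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0)[0]

# Placeholder training frames and activity labels (e.g., 0 = drilling,
# 1 = screwdriving); in practice these come from the labelled dataset.
train_paths = ["frame_000.jpg", "frame_001.jpg"]
train_labels = [0, 1]

X = np.stack([frame_features(p) for p in train_paths])
clf = GaussianNB().fit(X, train_labels)

# Predict the activity class for an unseen frame.
print(clf.predict(frame_features("frame_002.jpg").reshape(1, -1)))
```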

Pages: 103–116

Copyright: Copyright (c) to authors, 2018. Used with permission.

Publication date: June 30, 2018

Published in: International Journal On Advances in Life Sciences, volume 10, numbers 1 and 2, 2018

ISSN: 1942-2660