Published in

Health Monitoring and Personalized Feedback using Multimedia Data, pp. 139-160

DOI: 10.1007/978-3-319-17963-6_8

Proceedings of the 1st ACM international workshop on Multimedia indexing and information retrieval for healthcare - MIIRH '13

DOI: 10.1145/2505323.2505327

Activity Detection and Recognition of Daily Living Events

Proceedings article published in 2013 by Konstantinos Avgerinakis, Alexia Briassouli and Ioannis Kompatsiaris.
This paper is available in a repository.


Abstract

Activity recognition is one of the most active topics in computer vision. Despite its popularity, its application in real-life scenarios is limited because many methods are not fully automated and require high computational resources to infer information. In this work, we contribute two novel algorithms: (a) one for automatic video sequence segmentation, elsewhere referred to as activity spotting or activity detection, and (b) one for reducing the computational cost of the activity representation. Two Bag-of-Words (BoW) representation schemes were tested for recognition. Experiments were performed both on publicly available activities of daily living (ADL) datasets and on our own ADL dataset, recorded with healthy subjects and people with dementia in realistic, life-like environments that are more challenging than those of benchmark datasets. Our method is shown to provide results better than, or comparable with, the state of the art (SoA), while we also contribute a realistic ADL dataset to the community.
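The abstract mentions Bag-of-Words (BoW) representations for recognition. As a rough illustration of the general BoW idea only, and not of the paper's two specific schemes, the sketch below quantizes local video descriptors against a learned codebook and pools them into a normalized histogram per video segment. The descriptor source, the codebook size K, and the helper names are assumptions introduced for illustration.

    # Minimal sketch of a generic Bag-of-Words video representation.
    # Feature type, codebook size and pooling are assumptions; the paper's
    # actual schemes are not reproduced here.
    import numpy as np
    from sklearn.cluster import KMeans

    K = 256  # assumed codebook size

    def build_codebook(training_descriptors: np.ndarray, k: int = K) -> KMeans:
        # Cluster local spatio-temporal descriptors into k visual words.
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(training_descriptors)

    def bow_histogram(descriptors: np.ndarray, codebook: KMeans) -> np.ndarray:
        # Assign each descriptor to its nearest visual word and build an
        # L1-normalized histogram representing one video segment.
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

Such histograms would then be fed to any standard classifier (e.g. an SVM) for activity recognition; that classification step is likewise outside what the abstract specifies.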