Published in

2012 IEEE 24th International Conference on Tools with Artificial Intelligence

DOI: 10.1109/ictai.2012.181

Recognition of Activities of Daily Living

Proceedings article published in 2012 by K. Avgerinakis, A. Briassouli, I. Kompatsiaris
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

This paper presents a new method for human action recognition that exploits the advantages of both trajectory-based and space-time-based approaches to identify action patterns in given sequences. Videos captured with either a static or a moving camera can be handled, with camera motion effects overcome via motion compensation. Only pixels undergoing changing motion, found by extracting motion-boundary-based activity areas, are processed, which introduces robustness to camera motion and reduces computational complexity. In these regions, densely sampled grid points are tracked at multiple scales with a KLT tracker, yielding dense multi-scale trajectories on which HOGHOF descriptors are estimated. The length of each trajectory is determined by detecting changes in the tracked points’ motion or appearance using sequential change detection, namely the CUSUM approach. A vocabulary is created for each video’s features using hierarchical k-means, and the resulting fast search trees are used to describe the actions in the videos. SVMs are used for classification, with a kernel based on the similarity scores between training and testing videos. Experiments on new and challenging datasets show that the proposed method achieves recognition results comparable to or better than existing state-of-the-art methods.
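
The dense tracking step described in the abstract is standard enough to illustrate. Below is a minimal Python sketch, assuming OpenCV, of how densely sampled grid points might be tracked with a pyramidal (multi-scale) KLT tracker; the grid spacing, window size, and pyramid depth are illustrative assumptions, not parameters reported in the paper.

```python
import cv2
import numpy as np

def track_dense_grid(prev_gray, next_gray, step=8):
    """Track a dense grid of points from one grayscale frame to the next
    using pyramidal KLT (Lucas-Kanade) optical flow.
    `step` (grid spacing in pixels) is an assumed value, not from the paper."""
    h, w = prev_gray.shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.float32(np.dstack([xs, ys]).reshape(-1, 1, 2))
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(15, 15), maxLevel=3)   # maxLevel=3 -> 4 pyramid scales
    ok = status.ravel() == 1            # keep only successfully tracked points
    return pts[ok], nxt[ok]
```

The method's distinctive step is terminating each trajectory via sequential change detection (CUSUM) on the tracked point's motion or appearance. The following is a hedged sketch of a two-sided CUSUM detector applied to per-frame displacement magnitudes; the monitored statistic, the drift `k`, and the decision threshold `h` are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def cusum_change_point(signal, k=0.5, h=4.0):
    """Two-sided CUSUM: return the index of the first detected change in
    `signal`, or None if none is flagged. The reference level is taken
    from the first observation for simplicity (an assumption)."""
    mu = float(signal[0])
    g_pos = g_neg = 0.0                         # upper / lower cumulative sums
    for t, x in enumerate(signal):
        g_pos = max(0.0, g_pos + (x - mu) - k)  # accumulates upward drift
        g_neg = max(0.0, g_neg + (mu - x) - k)  # accumulates downward drift
        if g_pos > h or g_neg > h:
            return t                            # change detected: cut trajectory here
    return None

# Hypothetical usage: steady motion for 40 frames, then an abrupt speed-up.
disp = np.concatenate([np.random.normal(1.0, 0.1, 40),
                       np.random.normal(3.0, 0.1, 20)])
print("trajectory terminated at frame", cusum_change_point(disp))
```

In the paper's setting, a detected change would end the current trajectory and its accumulated HOGHOF descriptor; the abstract does not specify the exact termination policy, so the above is only a plausible reading.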