Published in

Neurocomputing (Elsevier), vol. 216, pp. 778-789

DOI: 10.1016/j.neucom.2016.08.032

Weakly supervised activity analysis with spatio-temporal localisation

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

In computer vision, an increasing number of weakly annotated videos have become available, because it is often difficult and time-consuming to annotate every detail in the videos collected. Learning methods that analyse human activities in weakly annotated video data have gained great interest in recent years. They are categorised as "weakly supervised learning" and usually form a multi-instance multi-label (MIML) learning problem. In addition to the commonly known difficulties of MIML learning, i.e. ambiguities in instances and labels, a weakly supervised method also has to cope with the large data size, high dimensionality, and large proportion of noisy examples usually found in video data. In this work, we propose a novel learning framework that iteratively optimises over a scalable MIML model and an instance selection process that incorporates pairwise spatio-temporal smoothing during training. The learned knowledge is then generalised to testing via a noise removal process based on the support vector data description (SVDD) algorithm. In experiments on three challenging benchmark video datasets, the proposed framework yields a more discriminative MIML model and less noisy training and testing data, and thus improves system performance. It outperforms state-of-the-art weakly supervised, and even fully supervised, approaches in the literature at annotating and detecting both the actions of a single person and the interactions between a pair of people.

This work started when Feng Gu was at the University of Leeds, working on the DARPA Mind's Eye project VIGIL (W911NF-10-C-0083). It was then extended and improved during his employment at Kingston University, London, on the project "BREATHE—Platform for self-assessment and efficient management for informal caregivers" (AAL-JP-2012-5-045).
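The abstract's testing-time noise removal rests on support vector data description (SVDD), which fits a tight hypersphere around the clean data and flags points outside it. The paper's exact formulation is not given here; as a minimal sketch, with an RBF kernel SVDD is equivalent to a one-class SVM, so scikit-learn's `OneClassSVM` can stand in for it on synthetic data (the feature dimension, `nu`, and the generated inlier/outlier clouds below are illustrative assumptions, not values from the paper):

```python
# Sketch: SVDD-style noise removal via a one-class SVM (equivalent
# to SVDD when an RBF kernel is used). Data and parameters are
# illustrative, not taken from the paper.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(200, 16))   # synthetic "clean" feature vectors
outliers = rng.uniform(-6.0, 6.0, size=(20, 16)) # synthetic noisy examples
X = np.vstack([inliers, outliers])

# nu upper-bounds the fraction of training points allowed outside
# the learned boundary, i.e. treated as noise.
svdd = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X)

mask = svdd.predict(X) == 1   # +1 = inside the hypersphere
X_clean = X[mask]             # retained, presumed-clean instances
print(f"kept {mask.sum()} of {len(X)} instances")
```

The retained set `X_clean` would then feed the downstream MIML model in place of the raw, noisier data.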