Published in

Proceedings of the 2008 international conference on Content-based image and video retrieval - CIVR '08

DOI: 10.1145/1386352.1386385


Accumulated motion energy fields estimation and representation for semantic event detection

This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In this paper, a motion-based approach for detecting high-level semantic events in video sequences is presented. Its main characteristic is its generic nature, i.e., it can be applied directly to any domain of concern without domain-specific algorithmic modifications or adaptations. To perform event detection, the video is first segmented into shots, and for every resulting shot appropriate motion features are extracted at fixed time intervals, forming a motion observation sequence. Hidden Markov Models (HMMs) are then employed to associate each shot with a semantic event based on its observation sequence. Regarding the motion feature extraction procedure, a new representation for providing local-level motion information to HMMs is presented, and motion characteristics from previous frames are also exploited. The latter is based on the observation that motion information from previous frames can provide valuable cues for interpreting the semantics present in a particular frame. Experimental results and a comparative evaluation of the proposed approach in the domains of tennis and news broadcast video are presented.
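
The sketch below illustrates, in broad strokes, the kind of pipeline the abstract describes: per-shot motion observation sequences classified by per-event HMMs. It is not the authors' implementation; the specific choices here are assumptions for illustration only, namely optical-flow magnitude on a coarse spatial grid as the local motion feature, an exponentially decayed running sum over previous frames standing in for the accumulated motion energy, and hmmlearn's GaussianHMM as the sequence model.

```python
# Hypothetical sketch of the described pipeline (not the paper's exact method):
# 1) extract local motion energy per frame, 2) accumulate it over previous
# frames, 3) sample observations at fixed intervals, 4) score the shot's
# observation sequence against one HMM per semantic event.

import cv2
import numpy as np
from hmmlearn import hmm

GRID = 4          # 4x4 grid of local motion-energy values per observation
SAMPLE_EVERY = 5  # extract an observation every N frames (assumption)
DECAY = 0.8       # weight of previously accumulated energy (assumption)

def shot_observations(frames):
    """Return a (T, GRID*GRID) motion observation sequence for one shot."""
    acc = np.zeros((GRID, GRID), dtype=np.float32)
    obs = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for i, frame in enumerate(frames[1:], start=1):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        # Average flow magnitude inside each grid cell -> local motion energy.
        h, w = mag.shape
        cells = mag[:h - h % GRID, :w - w % GRID].reshape(
            GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
        # Decayed accumulation: motion from previous frames still contributes.
        acc = DECAY * acc + cells
        if i % SAMPLE_EVERY == 0:
            obs.append(acc.flatten())
        prev = gray
    return np.asarray(obs)

def train_event_models(labelled_shots, n_states=4):
    """Train one GaussianHMM per semantic event from labelled training shots."""
    models = {}
    for event, shots in labelled_shots.items():
        seqs = [shot_observations(frames) for frames in shots]
        X = np.concatenate(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[event] = m
    return models

def classify_shot(frames, models):
    """Assign the shot to the event whose HMM gives the highest log-likelihood."""
    obs = shot_observations(frames)
    return max(models, key=lambda e: models[e].score(obs))
```

The maximum-likelihood decision in classify_shot mirrors the standard HMM-based classification scheme the abstract refers to: one model per event, with the shot labelled by the best-scoring model.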