Published in

2007 IEEE International Conference on Image Processing

DOI: 10.1109/icip.2007.4379322

Video Segmentation and Semantics Extraction from the Fusion of Motion and Color Information

Proceedings article published in 2007 by Alexia Briassouli, Vasileios Mezaris, Ioannis Kompatsiaris
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In recent years, digital multimedia technologies have evolved significantly and now find numerous applications over the internet and, increasingly, over mobile networks. As a result, the video processing community has focused more intensively on the extraction of higher-level information from multimedia data. This paper proposes a novel two-stage video processing system that segments video and extracts semantically meaningful information, supporting higher-level interpretation of its content. The flow fields present in the video are accumulated over several frames, and their statistics are processed to derive an "activity area" that is characteristic of the type of events taking place. Color information complements the motion data and is used for the accurate segmentation of the moving entities in each frame. The joint use of the activity area and the accurate segmentation serves as a first step toward further semantic interpretation of the video, including the recognition and accurate localization of moving objects of interest. We present experiments that demonstrate the effectiveness of our method on real videos.
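To make the two-stage idea concrete, the sketch below illustrates one possible reading of the pipeline in Python with OpenCV: dense optical flow is accumulated over the sequence, a motion "activity area" is obtained by thresholding the accumulated statistics, and color clustering inside that area refines the segmentation of the moving entities. This is a minimal illustration under stated assumptions, not the authors' algorithm: the Farneback flow estimator, the quantile threshold (in place of the paper's statistical processing of the flow), the k-means color clustering, and all parameter values are illustrative choices.

```python
import cv2
import numpy as np

def activity_area_and_segmentation(video_path, thresh_quantile=0.90, k_colors=3):
    """Sketch: accumulate flow statistics over frames to get an activity
    area, then segment moving entities inside it using color clustering."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read video: %s" % video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    accum = np.zeros(prev_gray.shape, dtype=np.float64)  # accumulated flow magnitude
    last_frame = prev

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback used here
        # purely as a stand-in for whatever flow estimator is preferred).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        accum += np.linalg.norm(flow, axis=2)  # accumulate motion over frames
        prev_gray = gray
        last_frame = frame
    cap.release()

    # Stage 1: activity area = pixels whose accumulated motion is unusually
    # high. A simple quantile threshold stands in for a statistical test.
    activity_area = accum > np.quantile(accum, thresh_quantile)

    # Stage 2: color clustering inside the activity area to delineate the
    # moving entities in the (last) frame.
    pixels = last_frame[activity_area].astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k_colors, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    segmentation = np.full(activity_area.shape, -1, dtype=np.int32)
    segmentation[activity_area] = labels.ravel()
    return activity_area, segmentation
```

In this reading, the accumulated-flow mask localizes where events occur, while the per-frame color step supplies the precise object boundaries that motion statistics alone cannot; in practice the color segmentation would be applied to every frame rather than only the last one, as done here for brevity.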