Published in

Elsevier, Computers and Electrical Engineering, 40(3), pp. 993-1005, 2014

DOI: 10.1016/j.compeleceng.2013.10.005

Feature aggregation based visual attention model for video summarization

Journal article published in 2013 by Naveed Ejaz, Irfan Mehmood, and Sung Wook Baik
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

Video summarization is an integral component of video archiving systems. It provides compact versions of videos that facilitate browsing and navigation. A popular way to generate summaries is to extract a set of key frames that conveys the overall message of the video. This paper introduces a novel feature-aggregation-based visual saliency detection mechanism and its use for extracting key frames. Saliency maps are computed from the aggregated features and from motion intensity, and a non-linear weighted fusion mechanism combines the two maps. On the resultant map, a Gaussian weighting scheme assigns more weight to pixels close to the center of the frame. Based on the final attention value of each frame, the key frames are extracted adaptively. Experimental results, based on several evaluation standards, demonstrate that the proposed scheme extracts semantically significant key frames.
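The abstract describes the pipeline only at a high level. The sketch below illustrates the general flow under stated assumptions: grayscale frames with values in [0, 1], gradient magnitude standing in for the aggregated feature saliency, frame differencing standing in for motion intensity, tanh as the non-linear fusion, and a mean-plus-k-standard-deviations threshold for the adaptive key frame selection. None of these specific choices, nor the function names (gaussian_center_weight, frame_attention, extract_key_frames), are taken from the paper; they are illustrative placeholders.

import numpy as np

def gaussian_center_weight(h, w, sigma=0.3):
    """2-D Gaussian mask emphasizing pixels near the frame center."""
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    return np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))

def frame_attention(frame, prev_frame, alpha=0.7):
    """Attention value of one grayscale frame (values in [0, 1]).

    Gradient magnitude stands in for the aggregated feature saliency,
    absolute frame difference stands in for motion intensity, and tanh
    provides a non-linear weighted fusion of the two maps.
    """
    gy, gx = np.gradient(frame)
    static_map = np.hypot(gx, gy)            # placeholder for feature saliency
    motion_map = np.abs(frame - prev_frame)  # placeholder for motion saliency
    fused = np.tanh(alpha * static_map + (1.0 - alpha) * motion_map)
    weighted = fused * gaussian_center_weight(*frame.shape)  # center bias
    return float(weighted.mean())

def extract_key_frames(frames, k=1.0):
    """Adaptively keep frames whose attention exceeds mean + k * std."""
    scores = np.array([frame_attention(cur, prev)
                       for prev, cur in zip(frames, frames[1:])])
    threshold = scores.mean() + k * scores.std()
    return [i + 1 for i, s in enumerate(scores) if s > threshold]

In this reading, the tanh squashing keeps the fused map bounded, and the threshold relative to the score distribution makes the number of extracted key frames adapt to how much the attention curve varies across the video, consistent with the adaptive extraction the abstract describes.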