Published in

IEEE Transactions on Multimedia, vol. 14, no. 1, pp. 187–198, 2012

DOI: 10.1109/tmm.2011.2169775

Bottom-up saliency detection model based on human visual sensitivity and amplitude spectrum

Journal article published in 2012 by Yuming Fang, Weisi Lin, Bu-Sung Lee, Chiew-Tong Lau, Zhenzhong Chen, and Chia-Wen Lin
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

With the wide application of saliency information in visual signal processing, many saliency detection methods have been proposed. However, some key characteristics of the human visual system (HVS) are still neglected in these saliency detection models. In this paper, we propose a new saliency detection model based on human visual sensitivity and the amplitude spectrum of the quaternion Fourier transform (QFT). We use the amplitude spectrum of the QFT to represent the color, intensity, and orientation distributions of image patches. The saliency value of each image patch is calculated from not only the differences between the QFT amplitude spectrum of this patch and those of the other patches in the whole image, but also the visual impact of these differences as determined by human visual sensitivity. The experimental results show that the proposed saliency detection model outperforms state-of-the-art detection models. In addition, we apply the proposed model to image retargeting and achieve better performance than conventional algorithms.
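The patch-based pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the QFT amplitude is approximated via the standard symplectic decomposition (two ordinary 2-D FFTs over complex planes built from the RGB channels), and the human-visual-sensitivity weighting is replaced by a simple Gaussian falloff over patch distance; the paper's actual channel construction and sensitivity model differ.

```python
import numpy as np

def qft_amplitude(patch_rgb):
    """Approximate QFT amplitude spectrum of an RGB patch.

    The quaternion image q = 0 + R*i + G*j + B*k is split into two
    complex planes (symplectic decomposition), each transformed with
    an ordinary 2-D FFT; the quaternion amplitude is the root of the
    summed squared moduli. Simplified stand-in for the paper's
    color/intensity/orientation channels.
    """
    r, g, b = patch_rgb[..., 0], patch_rgb[..., 1], patch_rgb[..., 2]
    f1 = np.fft.fft2(1j * r)       # scalar part (0) + i-part
    f2 = np.fft.fft2(g + 1j * b)   # j-part + k-part
    return np.sqrt(np.abs(f1) ** 2 + np.abs(f2) ** 2)

def patch_saliency(image, patch=16, sigma=2.0):
    """Per-patch saliency: each patch's amplitude-spectrum distance to
    every other patch, weighted by a Gaussian falloff in patch distance
    (a crude proxy for the paper's visual-sensitivity weighting)."""
    H, W = image.shape[0] // patch, image.shape[1] // patch
    amps, coords = [], []
    for y in range(H):
        for x in range(W):
            p = image[y * patch:(y + 1) * patch, x * patch:(x + 1) * patch]
            amps.append(qft_amplitude(p).ravel())
            coords.append((y, x))
    amps = np.asarray(amps)
    coords = np.asarray(coords, dtype=float)
    sal = np.zeros(len(amps))
    for i in range(len(amps)):
        diff = np.linalg.norm(amps - amps[i], axis=1)      # spectrum difference
        dist = np.linalg.norm(coords - coords[i], axis=1)  # spatial distance
        w = np.exp(-dist ** 2 / (2 * sigma ** 2))          # sensitivity weight
        sal[i] = np.sum(w * diff)
    return sal.reshape(H, W)
```

A uniform image yields near-zero saliency everywhere, while a patch whose spectrum differs from its surroundings scores high, which is the qualitative behavior the abstract describes.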