Published in

World Scientific Publishing, International Journal of Pattern Recognition and Artificial Intelligence, 38(01), 2024

DOI: 10.1142/s0218001423510229

Saliency and Depth-Aware Full Reference 360-Degree Image Quality Assessment

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

With the widespread adoption of virtual reality and 360-degree video, there is a pressing need for objective metrics that reliably assess quality in this immersive panoramic format. However, existing image quality assessment models developed for traditional fixed-viewpoint content do not fully account for the perceptual characteristics specific to 360-degree viewing. This paper proposes a full-reference quality assessment (FR-IQA) method for 360-degree images based on a multi-channel architecture. The method refines its estimate of distorted image quality using two easily obtained image features, saliency and depth, on which a convolutional neural network (CNN) is trained. Furthermore, the method predicts user viewing behavior within 360-degree images, which further benefits the multi-channel CNN architecture and enables weighted average pooling of the predicted FR-IQA scores. Performance is evaluated on publicly available databases; in both standard and cross-database evaluation experiments, the proposed multi-channel model outperforms other state-of-the-art methods. Moreover, an ablation study demonstrates good generalization ability and robustness.
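Since the full text is unavailable here, the following is only a speculative PyTorch sketch of the kind of architecture the abstract outlines: separate convolutional channels for the reference/distorted image pair, the saliency map, and the depth map, fused into a per-viewport quality score, followed by weighted average pooling. All module names, layer sizes, and the viewport-sampling scheme are assumptions for illustration, not the authors' actual design.

```python
# Hypothetical sketch of a multi-channel FR-IQA model; layer sizes and
# fusion scheme are assumptions, not the paper's actual design.
import torch
import torch.nn as nn


def conv_branch(in_ch: int) -> nn.Sequential:
    """Small convolutional feature extractor for one input channel group."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class MultiChannelFRIQA(nn.Module):
    """Separate branches for the reference/distorted pair, the saliency
    map, and the depth map, fused into a per-viewport quality score."""

    def __init__(self):
        super().__init__()
        self.image_branch = conv_branch(6)    # reference + distorted RGB
        self.saliency_branch = conv_branch(1)
        self.depth_branch = conv_branch(1)
        self.regressor = nn.Sequential(
            nn.Linear(64 * 3, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, ref, dist, saliency, depth):
        feats = torch.cat([
            self.image_branch(torch.cat([ref, dist], dim=1)),
            self.saliency_branch(saliency),
            self.depth_branch(depth),
        ], dim=1)
        return self.regressor(feats).squeeze(-1)  # one score per viewport


def pool_scores(scores: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Weighted average pooling of per-viewport scores; the weights could
    come from a predicted viewing-behavior (saliency) model."""
    weights = weights / weights.sum()
    return (scores * weights).sum()


if __name__ == "__main__":
    model = MultiChannelFRIQA()
    n = 8  # e.g. 8 viewports sampled from one 360-degree image
    scores = model(
        torch.rand(n, 3, 64, 64), torch.rand(n, 3, 64, 64),
        torch.rand(n, 1, 64, 64), torch.rand(n, 1, 64, 64),
    )
    print(pool_scores(scores, torch.rand(n)))  # single image-level score
```

The weighted pooling step reflects the abstract's claim that predicted viewing behavior drives the aggregation of per-viewport scores into one image-level score; how those weights are actually computed in the paper is not recoverable from the abstract alone.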