Published in

World Scientific Publishing, International Journal of Pattern Recognition and Artificial Intelligence, 37(13), 2023

DOI: 10.1142/s0218001423540174

A Two-Stage Three-Dimensional Attention Network for Lightweight Image Super-Resolution

Full text: Unavailable
This paper was not found in any repository, but could be made available legally by the author.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In recent years, single image super-resolution (SISR) methods based on convolutional neural networks (CNNs) have achieved satisfactory performance. However, the large model size and slow inference speed of these methods greatly limit their application scenarios. In this paper, we propose a two-stage three-dimensional attention network (ATTNet) for lightweight image super-resolution. First, we propose a spatial feature encoder-decoder (SFE-D) with a spatial attention mechanism. Next, we design a channel transposed attention module (CTAM) with a channel self-attention mechanism. Both modules perform fine feature extraction in the low-resolution stage. Finally, we propose a content-based pixel recombination module (CPRM) that reconstructs detailed content with a joint attention mechanism in the high-resolution stage. Experimental results show that, on average, the proposed method outperforms state-of-the-art lightweight SISR algorithms in both quantitative metrics and subjective visual quality.
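The abstract describes a channel self-attention mechanism (CTAM) applied in the low-resolution stage. As an illustration only, the sketch below shows a generic channel transposed attention block in PyTorch, in which attention is computed across feature channels rather than spatial positions; the class name, head count, and shapes are assumptions for demonstration and are not taken from the paper.

```python
# Hypothetical sketch of a channel transposed attention block.
# Assumption: CTAM-style channel self-attention builds a (C x C) attention map
# instead of attending over spatial positions. Names and hyperparameters are
# illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelTransposedAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        # learnable scaling of the attention logits, one scalar per head
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.project_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        # reshape to (batch, heads, channels_per_head, pixels)
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        # normalize along the pixel dimension, then attend over channels:
        # the attention map is (C/heads x C/heads), independent of image size
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.project_out(out)


if __name__ == "__main__":
    feat = torch.randn(1, 32, 48, 48)          # low-resolution feature map
    block = ChannelTransposedAttention(32)
    print(block(feat).shape)                   # torch.Size([1, 32, 48, 48])
```

Because the attention map has size proportional to the square of the channel count rather than the number of pixels, this kind of channel attention scales linearly with image resolution, which is consistent with the lightweight goal stated in the abstract.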