Published in

IEEE Transactions on Image Processing, pp. 1-1, 2016

DOI: 10.1109/tip.2016.2531905

Learning Iteration-wise Generalized Shrinkage-Thresholding Operators for Blind Deconvolution

Journal article published in 2016 by Wangmeng Zuo, Dongwei Ren, David Zhang, Shuhang Gu, and Lei Zhang
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Salient edge selection and time-varying regularization are two crucial techniques for guaranteeing the success of maximum a posteriori (MAP)-based blind deconvolution. However, existing approaches usually rely on carefully designed regularizers and handcrafted parameter tuning to obtain a satisfactory estimate of the blur kernel. Many regularizers exhibit structure-preserving smoothing capability but fail to enhance salient edges. In this paper, under the MAP framework, we propose iteration-wise ℓ_p-norm regularizers together with a data-driven strategy to address these issues. First, we extend the generalized shrinkage-thresholding (GST) operator for ℓ_p-norm minimization to negative p values, which can sharpen salient edges while suppressing trivial details. Then, the iteration-wise GST parameters are specified to allow dynamic salient edge selection and time-varying regularization. Finally, instead of handcrafted tuning, a principled discriminative learning approach is proposed to learn the iteration-wise GST operators from a training dataset. Furthermore, a multi-scale scheme is developed to improve the efficiency of the algorithm. Experimental results show that a negative p value is more effective in estimating the coarse shape of the blur kernel at the early stage, and that the learned GST operators generalize well to other datasets and real-world blurry images. Compared with the state-of-the-art methods, our method achieves better deblurring results in terms of both quantitative metrics and visual quality, and it is much faster than the state-of-the-art patch-based blind deconvolution method.
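
For readers unfamiliar with the GST operator referenced in the abstract, the sketch below is a minimal NumPy implementation of the standard generalized shrinkage-thresholding rule for the scalar problem min_x 0.5*(x - y)^2 + lam*|x|^p with 0 < p <= 1. It is an illustration only, not the authors' code: the paper's extension to negative p and the discriminatively learned iteration-wise parameters are not reproduced here, and the function name gst and its arguments are assumptions made for this sketch.

import numpy as np

def gst(y, lam, p, n_iters=10):
    # Elementwise generalized shrinkage-thresholding:
    # argmin_x 0.5*(x - y)**2 + lam*|x|**p, assuming 0 < p <= 1.
    y = np.asarray(y, dtype=float)
    # Threshold below which the minimizer is exactly zero.
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
          + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    x = np.zeros_like(y)
    mask = np.abs(y) > tau
    t = np.abs(y[mask])
    # Fixed-point iteration t <- |y| - lam*p*t**(p-1) on the above-threshold entries.
    for _ in range(n_iters):
        t = np.abs(y[mask]) - lam * p * t ** (p - 1.0)
    x[mask] = np.sign(y[mask]) * t
    return x

For example, gst(np.array([0.2, -1.5, 3.0]), lam=0.5, p=0.7) zeroes the smallest entry and shrinks the larger ones toward zero; with p = 1 the rule reduces to ordinary soft-thresholding.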