Discriminative Learning of Iteration-Wise Priors for Blind Deconvolution

Wangmeng Zuo, Dongwei Ren, Shuhang Gu, Liang Lin, Lei Zhang; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3232-3240


The maximum a posteriori (MAP)-based blind deconvolution framework generally involves two stages: blur kernel estimation and non-blind restoration. For blur kernel estimation, sharp edge prediction and carefully designed image priors are vital to the success of MAP. In this paper, we propose a blind deconvolution framework with iteration-specific priors for better blur kernel estimation. The family of hyper-Laplacian distributions $\left( \Pr(\mathbf{d}) \propto e^{-\|\mathbf{d}\|_p^p / \lambda} \right)$ is adopted to model iteration-wise priors on image gradients, where each iteration has its own model parameters $\{\lambda^{(t)}, p^{(t)}\}$. To avoid heavy parameter tuning, all iteration-wise model parameters are learned from a training set using our principled discriminative learning model, and can then be directly applied to other datasets and to real blurry images. Interestingly, with the generalized shrinkage/thresholding operator, negative values of $p$ $(p < 0)$ are allowable, and we find that they contribute more to estimating the coarse shape of the blur kernel. Experimental results on synthetic and real-world images demonstrate that our method achieves better deblurring results than existing gradient prior-based methods. Compared with the state-of-the-art patch prior-based method, our method is competitive in restoration quality but is much more efficient.
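The generalized shrinkage/thresholding (GST) operator mentioned above solves the scalar proximal problem $\min_x \tfrac{1}{2}(x-y)^2 + \lambda |x|^p$ that arises when applying a hyper-Laplacian gradient prior. A minimal sketch for the standard $0 < p \le 1$ range is below (the paper's contribution is to extend this operator to $p < 0$ with learned per-iteration $\{\lambda^{(t)}, p^{(t)}\}$, which this simplified version does not cover); the function name `gst` and the iteration count are illustrative choices, not from the paper.

```python
import math

def gst(y, lam, p, iters=10):
    """Generalized shrinkage/thresholding for the scalar problem
        min_x 0.5*(x - y)^2 + lam*|x|^p,   0 < p <= 1.
    A simplified sketch; the paper's learned iteration-wise variant
    also admits p < 0.
    """
    # Threshold below which the minimizer is exactly zero.
    # For p = 1 this reduces to tau = lam (classic soft thresholding).
    t = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
    tau = t + lam * p * t ** (p - 1.0)
    a = abs(y)
    if a <= tau:
        return 0.0
    # Fixed-point iterations on the stationarity condition
    #   x = |y| - lam * p * x^(p-1),  started at x = |y|.
    x = a
    for _ in range(iters):
        x = a - lam * p * x ** (p - 1.0)
    # Restore the sign of the input.
    return math.copysign(x, y)
```

In a deconvolution solver this operator would be applied element-wise to the gradient field at each half-quadratic splitting step; with $p = 1$ it recovers the familiar soft-thresholding rule $x = \operatorname{sign}(y)\max(|y|-\lambda, 0)$.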

Related Material

@InProceedings{Zuo_2015_CVPR,
  author    = {Zuo, Wangmeng and Ren, Dongwei and Gu, Shuhang and Lin, Liang and Zhang, Lei},
  title     = {Discriminative Learning of Iteration-Wise Priors for Blind Deconvolution},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2015}
}