(ProQuest: ... denotes non-US-ASCII text omitted.)
Academic Editor:Yung-Kuan Chan
Department of Image, Chung-Ang University, Seoul 156-756, Republic of Korea
Received 29 July 2014; Accepted 10 October 2014; 4 November 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Restoration of motion blurred images is a fundamental problem of image processing especially under a poor illumination condition, where a long exposure creates unwanted motion blur. A number of blind image deconvolution methods have been proposed to remove motion blur. In this context practical blind image deconvolution can be categorized into three main varieties: single image-based, multiple image-based, and hardware-aided approaches.
Single image-based blind deconvolution estimates the blur kernel in the form of a point-spread-function (PSF) based on a simple parametric model using a single input image [1, 2]. However, a simple parametric curve cannot successfully represent the motion PSF made by various types of real camera motions. Fergus et al. proposed a general motion PSF estimation method which uses a sophisticated variational Bayesian method based on the natural image prior [3], which was followed up by related research in [4-8]. Although these methods provide a generalized camera motion model, a manual process of tuning parameters and high computational load are their disadvantages.
The multiple image-based blind deconvolution removes motion blur by appropriately combining long- and short-exposure images under the assumption that both images are captured from the same scene at the same time [9-11]. If the simultaneous acquisition assumption does not hold, the multiple image-based approach fails to remove motion blur.
The hardware-aided approach uses additional optical devices or electronic systems to overcome the limitations of the multiple image-based approach [12-16]. In spite of acquiring more accurate, robust data to estimate the motion PSF, the hardware-aided method needs a complicated optical system such as a coded-exposure or an embedded inertial sensor. An efficient implementation method of a built-in inertial sensor was introduced by Sindelar and Sroubek for mobile imaging devices [16]. But the performance of motion deblurring is not good enough because of the sensor noise and the use of a simple restoration filter.
For fast motion deblurring, both PSF estimation and the corresponding image restoration should be fast and accurate. In this paper, an adaptive image deblurring method is presented by generating the motion trajectory in the probabilistic manner and performing image restoration based on the local statistics to solve common issues in the deconvolution process. The contribution of the proposed research is twofold: (i) a novel motion PSF estimation method is proposed by minimizing the motion trajectory error based on a priori probability distribution, and (ii) a noniterative adaptive image restoration algorithm is proposed based on the local statistics of image to reduce ringing artifacts and noise amplification. The proposed method can quickly estimate the motion PSF using an inertial sensor and a priori probability distribution. The proposed adaptive image restoration algorithm minimizes restoration artifacts resulting from inaccurately estimated PSF. Both theoretical justification based on the image degradation model incorporating the projected camera motion and experimental results demonstrate that the proposed method outperforms existing state-of-the-art deconvolution methods.
2. Image Degradation Model Using Projected Camera Motion
Long-exposure photography is generally degraded by motion blur. If an inertial sensor samples K different poses of the shaky camera during the exposure period, an object point (X,Y,Z) in the three-dimensional (3D) object space is projected onto K different positions (xk ,yk ) , k=1,...,K , in the two-dimensional (2D) image plane as shown in Figure 1. More specifically, the image point is related with the object point using the homogeneous vectors as [figure omitted; refer to PDF] where Πk represents the projection matrix of the k th camera pose. If the motion trajectory is generated in the space-invariant manner, K points in the image plane generate the point-spread-function (PSF) of the corresponding motion blur as [figure omitted; refer to PDF] Given the space-invariant PSF, the image degradation model of motion blur is given in the vector-matrix form as [figure omitted; refer to PDF] where g represents the motion blurred image, H is the degradation matrix, f is the ideal image without motion blur, and η is additive noise. Assuming that the image size is N×N , all g , f , and η are expressed by N2 ×1 lexicographically ordered vectors, and H is an N2 ×N2 block circulant matrix defined by the PSF. In this work, we analyze the motion trajectory using inertial sensors and then compute the projection matrices. To estimate the motion PSF, each of scene points is projected into the image plane according to the projection matrices.
Figure 1: Motion trajectory generation process.
[figure omitted; refer to PDF]
3. PSF Estimation Using Camera Motion Tracking
3.1. PSF Estimation of Motion Blur Based on the Projected Trajectory
In estimating the size and shape of a motion PSF, only the relative position of the camera is needed because the PSF is the sum of reflected intensities from the first position to the last one of the camera motion as described in (2). Each camera position is projected onto the image plane and can be expressed using a planar homography as [figure omitted; refer to PDF] where C represents the camera intrinsic matrix, R is the rotation matrix, d is the scene depth, t is the translation vector, and nv is the normal vector to the image plane. The relationship between the motion trajectory and camera translation is shown in Figure 2, where the motion trajectory Δmt in the image plane is computed as [figure omitted; refer to PDF] where lf and Δtc , respectively, denote a focal length and translation of the camera. If the scene depth is assumed to be much larger than the focal length, Δmt can be neglected. For this reason, the camera translation does not affect the motion PSF under the large scene depth, and (4) is simplified as [figure omitted; refer to PDF] The camera coordinate is assumed to be aligned to the world coordinate whose origin lies on the optical axis of the camera. In this case, camera matrix C is determined by the focal length lf as [figure omitted; refer to PDF] Using the small-angle approximation [17] and space-invariant motion blur, the rotation matrix is computed as [figure omitted; refer to PDF] where ωix and ωiy represent the i th angular velocities around x and y axes, respectively. Since lf tan(ω)[approximate]lf ω for a very small ω , the projection matrix in (6) can be expressed as [figure omitted; refer to PDF] In this work, we use gyro data to estimate angular velocities according to the camera motion as shown in Figure 3 and compute correspondingly the projected positions in the image plane. Under the ideal condition, the projected trajectory is equal to the PSF of the camera motion. However, the gyro data are noisy under real circumstances. More specifically, noisy gyro data results in erroneous matching between the projected position in the image plane and the real PSF sample. For robust estimation of PSF using noisy gyro data, we assume that a point on the projected trajectory has Gaussian distribution, and as a result the projected trajectory consists of sum of Gaussian distributions as [figure omitted; refer to PDF] where G represents a two-dimensional Gaussian distribution and KG is the normalization constant. As a result, the PSF of camera motion becomes the accumulation of the reweighted trajectory using Gaussian distribution as shown in Figure 4(a). Gaussian distribution is estimated by analyzing the gyro data of a fixed camera as shown in Figure 4(b).
Figure 2: Motion trajectory according to the camera translation.
[figure omitted; refer to PDF]
Figure 3: Gyro data and the projected trajectory.
[figure omitted; refer to PDF]
(a) The motion PSF estimation using the reweighted trajectory and (b) estimation of Gaussian distribution using the gyro data of a fixed camera shown in the rectangular box.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
In this paper, we use "Sensor Data Logger" proposed in [18] to acquire gyro data which are synchronized with blurred frames. The gyro data and the corresponding blurred frame are time stamped, and both opening and closing times of the shutters are recorded to analyze the delay. In this paper, the unknown delay is experimentally determined for the test device.
3.2. Spatially Adaptive Image Restoration Using Local Statistics
Given the estimated PSF, the motion deconvolution becomes a simple image restoration problem. In recent years, many image restoration methods have been proposed to remove various types of image degradation factors. Since image restoration is an ill-posed problem, the regularized solution often requires computationally expensive iterative optimization. To remove motion blur without undesired artifacts, a novel image restoration method is presented using local statistics of the image by minimizing the energy function defined as [figure omitted; refer to PDF] where · denotes the Euclidean norm, "[composite function] " is the element-wise multiplication operator, Wm is the spatially varying activity map, C is a highpass filter, λ1 and λ2 , respectively, are the horizontal and vertical regularization parameters, and D1 and D2 , respectively, are the horizontal and vertical derivative operators. If the estimated f has artifacts such as ringing or noise amplification, Di f has sharp transitions, and as a result Di f-Di Cg becomes large.
The solution of the minimization problem is obtained by solving the equation that makes the derivative of (11) become zero, such as [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
Since ringing artifacts appear near edges and boundaries, a spatially adaptive activity map is used to reduce the ringing artifacts while preserving edges. The proposed activity map is computed as [19] [figure omitted; refer to PDF] where σl2 represents the local variance in the neighborhood of (x,y) in the input image and pt is a tuning parameter that makes the activity map distribute as evenly as possible in [0,1] . In this work, pt =1500 was used for the empirically best result with 5×5 blocks for the local variance.
Since matrix T is block-circulant for a space-invariant motion PSF as shown in (13), the linear equation in (12) can be solved using the two-dimensional (2D) discrete Fourier transform (DFT). Let f~ , g~ , h~ , d~ , w~m , and c~ be the DFTs of the estimated image, observed image, PSF, derivative filters, activity map, and highpass filter, respectively; then the solution of the restoration problem is given as [figure omitted; refer to PDF] The finally restored image f^ is obtained by the inverse DFT of f~ .
4. Experimental Results
The proposed motion deblurring method is tested using indoor and outdoor images of size 1280×960 acquired by a smartphone with Android OS and a 2.26 GHz application processor (AP). The performance of restoration is evaluated using the no-reference image quality assessment method proposed in [20] and the CPU processing time in a personal computer equipped with 3.40 GHz CPU and 16 GB RAM. The proposed method is also compared with two types of state-of-the-art methods including the single image-based [3, 5, 7] and hardware-aided approaches [16]. The gyro data in the smartphone are measured during the exposure time.
Figure 5 shows restored results using different blind deconvolution methods. Although Cho's method [7] can remove motion without ringing artifact, it has unnatural discontinuities and intensity saturation due to the bilateral filtering as shown in Figure 5(c). On the other hand, the proposed method can successfully remove the motion blur without unnatural discontinuities and preserve edge regions. The proposed method also outperforms Sindelar's method [16] due to Gaussian distribution-based trajectory estimation and adaptive image restoration as shown in Figures 5(e) and 5(f).
Comparison of different image restoration methods: (a) input motion-blurred image, (b) Fergus's method [3], (c) Cho's method [7], (d) Shan's method [5], (e) Sindelar's method [16], and (f) the proposed method.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
(e) [figure omitted; refer to PDF]
(f) [figure omitted; refer to PDF]
Figure 6 shows results of quantitative analysis using five 1280×960 test images. Since the proposed method needs the gyro data, quantitative analysis uses a no-reference metric of Liu's method that estimates the quality of motion deblurring. A large value of Liu's measure implies the high-quality. The result of the proposed method is comparable to or better than other deblurring methods as shown in Figure 6.
Figure 6: Comparison of different methods using Liu's method [20].
[figure omitted; refer to PDF]
Table 1 shows processing times of five different methods. The proposed method is the fastest except Sindelar's method that uses simple Wiener filter. However the proposed method produces 26% higher deblurring measure than Sindelar's method at the cost of approximately twice longer processing time. Generally, accurate camera calibration and synchronization of gyro data are not easy tasks. The proposed motion deblurring method provides the solution for both accurate PSF estimation and image restoration using gyro data.
Table 1: Comparison of processing times of five different restoration algorithms (sec.).
Methods | Image 1 | Image 2 | Image 3 | Image 4 | Image 5 |
Fergus et al. [3] | 10023.9 | 7437.4 | 9624.3 | 13353.0 | 8282.1 |
Shan et al. [5] | 261.0 | 275.0 | 287.0 | 289.0 | 298.0 |
Cho and Lee [7] | 20.1 | 20.4 | 20.7 | 20.3 | 20.4 |
Sindelar and Sroubek [16] | 0.7 | 0.7 | 0.8 | 0.8 | 0.8 |
Proposed method | 1.9 | 1.7 | 1.7 | 1.7 | 1.7 |
5. Conclusion
We have presented a novel motion trajectory estimation method using an embedded inertial sensor and a spatially adaptive image restoration algorithm for motion deblurring. For robust estimation of the motion PSF in the presence of sensor noise, the proposed method accumulated point-spread-functions (PSFs) of all camera positions using the projected trajectory based on Gaussian distribution. Based on the estimated motion PSF, the proposed motion deblurring algorithm can restore the image without undesired artifacts and noise amplification. The computational structure of the proposed algorithm does not need iterative minimization but uses the discrete Fourier transform domain filtering including local statistics-based spatially adaptive filtering. Since the proposed method estimates the motion trajectory using the embedded gyro sensor and performs restoration in the Fourier domain, it is much faster than existing state-of-the-art methods. Experimental results proved the performance of the proposed method in the sense of both image quality and the processing time. The future work will include a motion trajectory estimation using sensors according to scene depth for further improving the restoration performance.
Acknowledgments
This research was supported by the Chung-Ang University Research Scholarship Grants in 2014 and by the Ministry of Science, ICT & Future Planning as Software Grand Challenge Project (14-824-09-003).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] D. Kundur, D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine , vol. 13, no. 3, pp. 43-64, 1996.
[2] T. F. Chan, C.-K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing , vol. 7, no. 3, pp. 370-375, 1998.
[3] R. Fergus, B. Singh, A. Hertzmann, S. Roweis, W. Freeman, "Removing camera shake from a single photograph," ACM Transactions on Graphics , vol. 25, no. 3, pp. 787-794, 2006.
[4] J. Jia, "Single image motion deblurring using transparency," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, Minneapolis, Minn, USA, June 2007.
[5] Q. Shan, J. Jia, A. Agarwala, "High-quality motion deblurring from a single image," ACM Transactions on Graphics , vol. 27, no. 3, article 73, 2008.
[6] A. Levin, Y. Weiss, F. Durand, W. T. Freeman, "Understanding and evaluating blind deconvolution algorithms," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 1964-1971, Miami, Fla, USA, June 2009.
[7] S. Cho, S. Lee, "Fast motion deblurring," ACM Transactions on Graphics , vol. 28, no. 5, pp. 145-145, 2009.
[8] L. Xu, J. Jia, "Two-phase kernel estimation for robust motion deblurring," in Proceedings of the 11th European conference on Computer Vision: Part I (ECCV '10), pp. 157-170, September 2010.
[9] M. Tico, M. Trimeche, M. Vehvilainen, "Motion blur identification based on differently exposed images," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 2021-2024, October 2006.
[10] L. Yuan, J. Sun, L. Quan, H.-Y. Shum, "Image deblurring with blurred/noisy image pairs," ACM Transactions on Graphics , vol. 26, no. 3, 2007.
[11] S. D. Babacan, J. Wang, R. Molina, A. K. Katsaggelos, "Bayesian blind deconvolution from differently exposed image pairs," IEEE Transactions on Image Processing , vol. 19, no. 11, pp. 2874-2888, 2010.
[12] M. Ben-Ezra, S. K. Nayar, "Motion deblurring using hybrid imaging," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 657-664, June 2003.
[13] Y.-W. Tai, H. Du, M. S. Brown, S. Lin, "Correction of spatially varying image and video motion blur using a hybrid camera," IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 32, no. 6, pp. 1012-1028, 2010.
[14] R. Raskar, A. Agrawal, J. Tumblin, "Coded exposure photography: motion deblurring using fluttered shutter," ACM Transactions on Graphics , vol. 25, no. 3, pp. 795-804, 2006.
[15] N. Joshi, S. B. Kang, C. L. Zitnick, R. Szeliski, "Image deblurring using inertial measurement sensors," ACM Transactions on Graphics , vol. 29, no. 4, article 30, 2010.
[16] O. Sindelar, F. Sroubek, "Image deblurring in smartphone devices using built-in inertial measurement sensors," Journal of Electronic Imaging , vol. 22, no. 1, 2013.
[17] R. Szeliski Computer Vision: Algorithms and Applications , Springer, 2010.
[18] C. Jia, B. Evans, "Probabilistic 3-D motion estimation for rolling shutter video rectification from visual and inertial measurements," in Proceedings of the IEEE 14th International Workshop on Multimedia Signal Processing (MMSP '12), pp. 203-208, September 2012.
[19] S. Kim, E. Lee, M. H. Hayes, J. Paik, "Multifocusing and depth estimation using a color shift model-based computational camera," IEEE Transactions on Image Processing , vol. 21, no. 9, pp. 4152-4166, 2012.
[20] Y. Liu, J. Wang, S. Cho, A. Finkelstein, S. Rusinkiewicz, "A no-reference metric for evaluating the quality of motion deblurring," ACM Transactions on Graphics , vol. 32, no. 6, article 175, 2013.
You have requested "on-the-fly" machine translation of selected content from our databases. This functionality is provided solely for your convenience and is in no way intended to replace human translation. Show full disclaimer
Neither ProQuest nor its licensors make any representations or warranties with respect to the translations. The translations are automatically generated "AS IS" and "AS AVAILABLE" and are not retained in our systems. PROQUEST AND ITS LICENSORS SPECIFICALLY DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES FOR AVAILABILITY, ACCURACY, TIMELINESS, COMPLETENESS, NON-INFRINGMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Your use of the translations is subject to all use restrictions contained in your Electronic Products License Agreement and by using the translation functionality you agree to forgo any and all claims against ProQuest or its licensors for your use of the translation functionality and any output derived there from. Hide full disclaimer
Copyright © 2014 Eunsung Lee et al. Eunsung Lee et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper presents an image deblurring algorithm to remove motion blur using analysis of motion trajectories and local statistics based on inertial sensors. The proposed method estimates a point-spread-function (PSF) of motion blur by accumulating reweighted projections of the trajectory. A motion blurred image is then adaptively restored using the estimated PSF and spatially varying activity map to reduce both restoration artifacts and noise amplification. Experimental results demonstrate that the proposed method outperforms existing PSF estimation-based motion deconvolution methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed in various imaging devices because of its efficient implementation without an iterative computational structure.
You have requested "on-the-fly" machine translation of selected content from our databases. This functionality is provided solely for your convenience and is in no way intended to replace human translation. Show full disclaimer
Neither ProQuest nor its licensors make any representations or warranties with respect to the translations. The translations are automatically generated "AS IS" and "AS AVAILABLE" and are not retained in our systems. PROQUEST AND ITS LICENSORS SPECIFICALLY DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES FOR AVAILABILITY, ACCURACY, TIMELINESS, COMPLETENESS, NON-INFRINGMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Your use of the translations is subject to all use restrictions contained in your Electronic Products License Agreement and by using the translation functionality you agree to forgo any and all claims against ProQuest or its licensors for your use of the translation functionality and any output derived there from. Hide full disclaimer





