Double Discrete Wavelet Transform (DDWT)
When applied to a blurred image, the DDWT sparsifies the image and the blur kernel simultaneously: a sparse signal blurred by a sparse kernel remains sparse. This simplifies both blur kernel estimation and image deblurring by providing direct access to the sharp image's wavelet coefficients. DDWT readily handles complex blur such as object motion and defocus, where the spatially varying blur size is determined by the object's speed or its distance from the camera.
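The simultaneous sparsification follows from the commutativity of convolution: filtering the blurred observation twice with a wavelet high-pass filter equals the wavelet transform of the sharp signal convolved with the wavelet transform of the kernel. A minimal 1D sketch, assuming a Haar high-pass filter and a box blur (names here are illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Haar high-pass filter stands in for the wavelet analysis filter.
d = np.array([1.0, -1.0]) / np.sqrt(2.0)
x = rng.normal(size=64)        # stand-in "sharp" signal
h = np.ones(5) / 5.0           # 5-tap box kernel (linear motion blur)

y = np.convolve(x, h)          # blurred observation
ddwt = np.convolve(d, np.convolve(d, y))   # wavelet filter applied twice

# Same coefficients, viewed as DWT of the sharp signal blurred by the
# *sparsified* kernel DWT(h) -- two spikes at the kernel's endpoints.
sparse_h = np.convolve(d, h)
alt = np.convolve(np.convolve(d, x), sparse_h)

assert np.allclose(ddwt, alt)
```

Because `d * d * x * h = (d * x) * (d * h)`, the DDWT coefficients of the blurred signal are exactly the DWT coefficients of the sharp signal convolved with a very sparse kernel.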
Object Motion Blur
While defocus and camera-shake blur have received considerable attention, object motion blur is by far the most difficult to detect because it is nonstationary. Object motion blur is, however, potentially useful for scene analysis because it provides temporal cues from a single image. We developed a technique to infer the direction and speed of object motion by detecting spatially varying motion blur in a single image.
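To illustrate why the sparsified kernel yields a speed cue, consider horizontal motion: the kernel is a box whose length equals the motion extent, and a Haar high-pass filter collapses it to two opposite-sign spikes whose separation is the blur length. A hedged sketch (the spike-detection threshold and setup are assumptions for illustration):

```python
import numpy as np

d = np.array([1.0, -1.0]) / np.sqrt(2.0)   # Haar high-pass filter
blur_len = 7
h = np.ones(blur_len) / blur_len           # box kernel for linear motion

sparse_h = np.convolve(d, h)               # sparsified kernel: two spikes
spikes = np.flatnonzero(np.abs(sparse_h) > 1e-12)
estimated_len = spikes[-1] - spikes[0]
print(estimated_len)                       # 7, the true blur length
```

The interior differences cancel (adjacent box taps are equal), so only the two endpoints survive; their separation recovers the blur extent, and hence a relative-speed estimate, directly from the sparsified kernel.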
Spatially varying defocus blur can also be used to infer scene depth. A typical defocus kernel is disc shaped, as determined by the shape of the aperture opening in the lens system. DDWT sparsifies this disc into a ring, a far sparser form that is easier to detect and to remove, and whose radius encodes the defocus amount.
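The disc-to-ring sparsification can be seen with a first-difference (Haar-like) filter: the filter responds only at the disc boundary. A minimal 2D sketch, with the radius and grid sizes chosen arbitrarily for illustration:

```python
import numpy as np

r = 8
yy, xx = np.mgrid[-12:13, -12:13]
disc = (xx**2 + yy**2 <= r**2).astype(float)
disc /= disc.sum()                 # normalized disc (defocus) kernel

gx = np.diff(disc, axis=1)         # horizontal Haar-like high-pass
ring = np.abs(gx) > 1e-12          # support collapses to the disc's rim

# The sparsified support is far smaller than the disc's interior.
print(int(ring.sum()), int(disc.astype(bool).sum()))
```

Only the transitions at the disc's left and right edges survive the difference filter, so the nonzero support traces the rim; its radius is a direct estimate of the local defocus, and hence of depth.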
DDWT coefficients represent the discrete wavelet transform (DWT) coefficients of the sharp image corrupted by a sparsified blur. As such, DDWT-based deblurring amounts to taking the inverse DWT of DDWT coefficients that have been disambiguated from the sparse blur kernel. We remove the blur without introducing ringing (an artifact that conventional methods suffer from) and without iterative steps (hence far faster than most deconvolution methods). Because of the direct access to the sharp image's DWT coefficients, we are able to recover unprecedented levels of image detail.
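In the 1D box-blur case, the sparsified kernel has only two spikes, so the sharp DWT coefficients can be recovered by a direct telescoping recursion, with no iterative deconvolution. A hedged sketch of this idea, assuming a Haar filter and noiseless data (the variable names and recursion are illustrative, not the paper's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.array([1.0, -1.0]) / np.sqrt(2.0)   # Haar high-pass filter
L = 5
h = np.ones(L) / L                         # box motion blur of length L
x = rng.normal(size=48)                    # stand-in sharp signal

wx = np.convolve(d, x)                     # sharp DWT coefficients (target)
ddwt = np.convolve(d, np.convolve(d, np.convolve(x, h)))  # observed DDWT

# DWT(h) has two spikes of amplitude +/- 1/(L*sqrt(2)) at offsets 0 and L,
# so ddwt[n] = a*wx[n] - a*wx[n-L]; solve for wx[n] by telescoping.
a = 1.0 / (L * np.sqrt(2.0))
wx_est = np.zeros(len(wx))
for n in range(len(wx)):
    prev = wx_est[n - L] if n >= L else 0.0
    wx_est[n] = ddwt[n] / a + prev

assert np.allclose(wx_est, wx)
```

Each coefficient is obtained in one pass, which is why this style of deblurring avoids both iteration and the ringing that regularized deconvolution can introduce; the remaining step is an inverse DWT of the recovered coefficients.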
|(2015): Fast Spatially Varying Object Motion Blur Estimation. In: IEEE International Conference on Image Processing (ICIP), 2015.|
|(2013): Blur Processing Using Double Discrete Wavelet Transform. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1091-1098, 2013.|