Removing spatially variant motion blur from a blurry image is a challenging problem, as image blur can be complicated and difficult to model accurately. Recent progress in deep neural networks suggests that kernel-free single-image deblurring is achievable, but questions about deblurring performance persist. To improve performance, we propose a deep convolutional neural network that restores a sharp image from a noisy/blurry image pair captured in quick succession. Two network structures, Deblur Long Short-Term Memory (DeblurLSTM) and DeblurMerger, are presented to fuse the pair of images in either a sequential or a parallel manner. To improve training, gradient loss, adversarial loss, and spectral normalization are leveraged. The training dataset, consisting of noisy/blurry image pairs and the corresponding ground-truth sharp images, is synthesized from the GOPRO benchmark dataset. We evaluate the trained networks on a variety of synthetic datasets and real image pairs. The results demonstrate that the proposed approach outperforms state-of-the-art methods both qualitatively and quantitatively. DeblurLSTM achieves the best deblurring performance, while DeblurMerger achieves nearly the same result with significantly less computation time.
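As a rough sketch of the fusion-and-training setup described above, the block below assumes a PyTorch implementation with a small two-stream encoder that fuses the noisy and blurry inputs in parallel (in the spirit of DeblurMerger), together with the three training ingredients named in the abstract: a gradient loss, an adversarial loss, and spectral normalization on the discriminator. All layer sizes, loss weights, and names (FusionNet, grad_loss, generator_loss) are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a parallel noisy/blurry fusion CNN plus the loss
# terms named in the abstract. Architecture and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNet(nn.Module):
    """Encodes the noisy and blurry images in parallel, then merges the features."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc_noisy = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.enc_blurry = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, noisy, blurry):
        feats = torch.cat([self.enc_noisy(noisy), self.enc_blurry(blurry)], dim=1)
        return self.dec(feats)  # restored sharp image

def grad_loss(pred, target):
    """L1 distance between horizontal/vertical image gradients (edge fidelity)."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (dx(pred) - dx(target)).abs().mean() + (dy(pred) - dy(target)).abs().mean()

# Discriminator with spectral normalization to stabilize adversarial training.
disc = nn.Sequential(
    nn.utils.spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    nn.utils.spectral_norm(nn.Conv2d(64, 1, 4, stride=2, padding=1)),
)

def generator_loss(pred, target, lambda_grad=0.1, lambda_adv=1e-3):
    """Content (L1) + gradient + adversarial terms; the weights are illustrative."""
    logits = disc(pred)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return F.l1_loss(pred, target) + lambda_grad * grad_loss(pred, target) + lambda_adv * adv
```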
Many factors lead to spatially varying blur kernels in a blurred image, such as camera shake, moving objects, and scene depth variation. Traditional camera-shake removal methods either ignore the influence of varying depth or of object motion in dynamic scenes, while methods not limited to removing camera shake typically make simplifying assumptions about the camera motion trajectory. We consider these factors in a unified framework, with the aid of an alternate-exposure capture strategy and simultaneously recorded inertial sensor readings. The inertial measurements relate the long-exposure blurred image to the preceding and succeeding short-exposure noisy images. This special exposure arrangement effectively addresses the problem inherent in reconstructing camera motion from inertial measurements. In addition, the noisy image pair bracketing the blurred image is used for motion detection and initial depth-map estimation, making the proposed method free of user interaction and additional expensive devices. In contrast to previous methods that parametrize the motion blur of the moving foreground layer and the static background layer independently, we exploit the fact that camera shake has a global influence and decompose the motion of the foreground layer, establishing a tighter constraint between the motions of the two layers. Given the motion and image data, we formulate a single energy model and minimize it with alternating optimization to estimate the spatially varying motion blur and the latent sharp image. Comparative experiments demonstrate that our method outperforms conventional camera-motion deblurring and object deblurring methods on both synthetic and real scenes.
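A minimal sketch of the single energy and the layer-motion coupling described above, in assumed notation that may differ from the authors' formulation: B is the blurred frame, M the foreground mask obtained from the noisy pair, L_fg and L_bg the latent foreground and background layers, K(θ) the spatially varying blur operator induced by a motion trajectory θ, and the foreground motion is the composition of the global camera shake θ_cam with the object motion θ_obj.

```latex
% Illustrative energy; symbols are assumptions, not the authors' exact notation.
\begin{aligned}
E(L,\theta_{\mathrm{cam}},\theta_{\mathrm{obj}}) ={}&
  \bigl\| M \odot \bigl(B - K(\theta_{\mathrm{cam}} \circ \theta_{\mathrm{obj}})\, L_{\mathrm{fg}}\bigr) \bigr\|_2^2
+ \bigl\| (1 - M) \odot \bigl(B - K(\theta_{\mathrm{cam}})\, L_{\mathrm{bg}}\bigr) \bigr\|_2^2 \\
&+ \lambda_L\, \Phi(L) + \lambda_\theta\, \Psi(\theta_{\mathrm{obj}})
\end{aligned}
```

Because θ_cam appears in both data terms, the two layers share the global camera-shake component, which is the tighter constraint referred to above; alternating optimization then cycles between updating the latent image with the motion fixed and updating the motion with the image fixed.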
This paper addresses the problem of removing spatially varying blur caused by camera motion, with the help of inertial measurements recorded during the exposure time. Using a projective motion blur model, the camera motion is viewed as a sequence of projective transformations on the image plane, each of which can be estimated from the corresponding inertial data sample. Unfortunately, measurement noise leads to temporally increasing drift in the estimated motion trajectory and can significantly degrade the quality of the recovered images. To address this issue, we capture a small sequence of images with different exposure settings along with the recorded inertial data. A special arrangement of exposure settings is designed to anchor the correct position of the camera trajectory, followed by a drift-correction step that makes use of the sharp image structures preserved in one of the captured images. The effectiveness of our approach is demonstrated by comparison experiments on both synthetic and real images.
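The projective motion blur model referenced above can be written, in assumed notation, as a weighted sum of homography-warped copies of the latent sharp image L:

```latex
% Illustrative form of the projective motion blur model.
B(\mathbf{x}) \;=\; \sum_{t=1}^{N} w_t\, L\!\left(H_t \mathbf{x}\right) \;+\; n(\mathbf{x}),
\qquad \sum_{t=1}^{N} w_t = 1
```

where each homography H_t is assembled from the inertial sample at time t and n denotes noise. Accumulated sensor noise perturbs the later H_t progressively more, which is the drift that the exposure arrangement and drift-correction step are designed to remove.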
The electronic rolling shutter mechanism found in many digital cameras can produce spatially varying blur kernels when the camera moves during exposure. Existing deblurring algorithms cannot remove blur in this case, since the blurred image typically does not satisfy the assumptions embedded in those algorithms. This paper addresses the problem of modeling and correcting non-uniform image blur caused by the rolling shutter effect. We introduce a new operator and a mask matrix into the projective motion blur model to describe the blurring process of each row of the image. Based on this modified geometric model, an objective function is formulated and optimized in an alternating scheme. In addition, noisy accelerometer data along the x and y directions is incorporated as a regularization term to constrain the solution. The effectiveness of the approach is demonstrated by experimental results on synthetic and real images.
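Under a rolling shutter, each image row is exposed over its own shifted time window, so the model above restricts the sum of warping operators to that window and composes it with a row-selection mask; a hedged sketch with assumed symbols:

```latex
% Illustrative row-wise rolling-shutter blur model.
B \;=\; \sum_{i} M_i \Bigl( \sum_{t \in \mathcal{T}_i} w_t\, P(H_t) \Bigr) L \;+\; n
```

where M_i is the mask matrix selecting row i, \(\mathcal{T}_i\) is the exposure interval of row i offset by the readout delay, and P(H_t) is the warping operator induced by homography H_t. In the objective, the accelerometer readings along the x and y directions constrain the estimated trajectory through the regularization term.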
Camera motion blur is a common problem in low-light imaging applications. It is difficult to apply image restoration techniques without an accurate blur kernel. Recently, inertial sensors have been successfully used to estimate the blur function. However, the effectiveness of these restoration algorithms has been limited by the lack of access to unprocessed raw image data obtained directly from the Bayer image sensor.
In this work, raw CFA image data is acquired in conjunction with 3-axis acceleration data using a custom-built imaging system. The raw image data records the redistribution of light but is affected by camera motion and the rolling shutter mechanism. Using the acceleration data, the spread of light to neighboring pixels can be determined. We propose a new approach that jointly performs deblurring and demosaicking of the raw image, adopting an edge-preserving sparse prior within a MAP framework. The improvements brought by our algorithm are demonstrated by processing data collected from the imaging system.
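A minimal sketch of a MAP objective of the kind described above, in assumed notation: y is the raw CFA measurement, D the CFA sampling (mosaicking) operator, A(θ) the spatially varying blur operator derived from the acceleration data and the rolling shutter timing, L the latent full-color image, and the prior is an edge-preserving sparse (hyper-Laplacian, α < 1) penalty on image gradients:

```latex
% Illustrative MAP formulation for joint deblurring and demosaicking.
\hat{L} \;=\; \arg\min_{L}\;
\frac{1}{2\sigma^2}\, \bigl\| y - D\, A(\theta)\, L \bigr\|_2^2
\;+\; \lambda \sum_{\mathbf{x}} \Bigl( \bigl|\nabla_x L(\mathbf{x})\bigr|^{\alpha} + \bigl|\nabla_y L(\mathbf{x})\bigr|^{\alpha} \Bigr)
```

Estimating L directly from the raw domain in this joint fashion avoids committing to demosaicking artifacts before deblurring, which is the motivation for operating on the CFA data.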