In this paper, we propose a learning method for blindly deblurring Gaussian-blurred images by exploiting edge cues via a deep multi-scale generative adversarial network, DeepEdgeGAN. We incorporate the edges of the blurred image, together with the blurred image itself, as the input to DeepEdgeGAN, providing a strong prior constraint for the restoration; this helps address the tendency of GAN-based methods to produce restored images whose gradients are overly smooth and not clear enough. Further, we introduce perceptual, edge, and scale losses to train DeepEdgeGAN. With the trained end-to-end model, we directly restore the latent sharp image from a blurred image, avoiding blur-kernel estimation. Qualitative and quantitative experiments demonstrate that the visual quality of the restored images improves significantly.
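As an illustration, here is a minimal PyTorch sketch of the two ideas the abstract names: stacking an edge map with the blurred image as the generator input, and adding an edge term to the training loss. The tiny `Generator`, the Sobel edge extractor, and the loss weights are assumptions for exposition, not the paper's actual multi-scale architecture; the perceptual (VGG-feature) and scale losses are omitted for brevity.

```python
# Hypothetical sketch of an edge-conditioned generator input and a
# combined loss; architecture and weights are illustrative assumptions,
# not the paper's exact configuration. Assumes grayscale (N,1,H,W) input.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(img):
    """Edge map of a grayscale batch (N,1,H,W) via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

class Generator(nn.Module):
    """Toy generator: input is the blurred image stacked with its edges."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, blurred):
        x = torch.cat([blurred, sobel_edges(blurred)], dim=1)
        return self.net(x)

def generator_loss(restored, sharp, d_fake, w_adv=1e-3, w_edge=1.0):
    """Content + adversarial + edge terms; a perceptual term would
    additionally compare VGG features of restored vs. sharp."""
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    content = F.l1_loss(restored, sharp)
    edge = F.l1_loss(sobel_edges(restored), sobel_edges(sharp))
    return content + w_adv * adv + w_edge * edge
```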
KEYWORDS: Image processing, Image restoration, Image analysis, Process modeling, Convolution, Machine learning, Fluctuations and noise, Information technology, Data processing
Image deblurring aims to estimate the blur kernel and then restore the latent image; it is usually divided into two stages, kernel estimation and image restoration. In kernel estimation, selecting a region that contains rich structure information improves the accuracy of the estimated kernel, yet good regions to deblur are usually chosen by experts or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it lies in a structure region, then sample the image with overlapping windows. Finally, the sampled region containing the most structure pixels is taken as the best region to deblur. Both qualitative and quantitative experiments show that the proposed method helps estimate the kernel accurately.
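A rough sketch of the selection procedure, assuming a simplified RTV in which the Gaussian weighting of the original relative-total-variation formulation is replaced by a uniform window; the window size, thresholds, patch size, and stride below are illustrative choices, not values from the paper.

```python
# Hedged sketch of RTV-based structure scoring and best-window selection.
import numpy as np
from scipy.ndimage import uniform_filter

def structure_mask(img, win=9, eps=1e-3, d_thresh=1.0, r_thresh=2.0):
    """Mark pixels as 'structure': strong gradients (not smooth) whose
    window sums stay coherent (not texture, where opposite gradients
    cancel in the inherent variation L)."""
    g = img.astype(np.float64)
    gx = np.gradient(g, axis=1)
    gy = np.gradient(g, axis=0)
    # D: windowed mean of gradient magnitudes (total variation)
    D = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)
    # L: magnitude of the windowed mean gradient (inherent variation)
    L = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))
    rtv = D / (L + eps)
    return (D > d_thresh) & (rtv < r_thresh)

def best_region(img, patch=128, stride=32, **kw):
    """Overlapping sampling: return the top-left corner of the patch
    containing the most structure pixels."""
    mask = structure_mask(img, **kw).astype(np.float64)
    H, W = mask.shape
    best, best_yx = -1.0, (0, 0)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            score = mask[y:y + patch, x:x + patch].sum()
            if score > best:
                best, best_yx = score, (y, x)
    return best_yx
```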
Optical tracking systems need to measure the shift of the target in real time so as to compensate for it. For extended targets, template-matching techniques are usually used to estimate the image shift, with the shift computed to subpixel accuracy by parabolic interpolation. In this paper, we propose a new method to estimate the shift accurately, building on geometric feature-point tracking. The method first extracts feature points from the reference image using the Harris detector, then tracks each feature point by correlating the small patch around it with the patches around the points detected in the other images. The subpixel feature-point position used to estimate the image shift is then determined from the modified Harris strength of the pixels around that point. Experimental results validate that the proposed method can accurately measure image shifts over large distances under noisy conditions, with a mean estimation error of less than 0.03 pixels. Moreover, the contrast of long-exposure images before and after shift compensation is compared to evaluate the algorithm within the optical tracking system.
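A hedged sketch of this pipeline using OpenCV, assuming grayscale uint8 frames. The detector settings, patch half-width, and search radius are assumptions, and a plain Harris-response centroid stands in for the paper's modified Harris strength weighting.

```python
# Illustrative sketch: Harris corners -> patch correlation -> subpixel
# refinement -> averaged shift. Parameters are assumptions.
import cv2
import numpy as np

def detect_points(gray, n=50):
    """Harris corners in the reference image, as integer (x, y)."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=n, qualityLevel=0.01,
                                  minDistance=10, useHarrisDetector=True,
                                  k=0.04)
    return pts.reshape(-1, 2).astype(int)

def subpixel(harris, x, y):
    """Refine a position with the centroid of the Harris response in
    its 3x3 neighborhood (stand-in for 'modified Harris strength')."""
    w = harris[y - 1:y + 2, x - 1:x + 2].clip(min=0) + 1e-12
    dy, dx = np.mgrid[-1:2, -1:2]
    return (x + (w * dx).sum() / w.sum(), y + (w * dy).sum() / w.sum())

def estimate_shift(ref, img, half=8, search=20):
    """Track each reference corner by normalized cross-correlation of
    the patch around it, refine both ends to subpixel, and average the
    per-point displacements."""
    h_ref = cv2.cornerHarris(ref, 2, 3, 0.04)
    h_img = cv2.cornerHarris(img, 2, 3, 0.04)
    H, W = ref.shape
    m = half + search + 1
    shifts = []
    for x, y in detect_points(ref):
        if not (m <= x < W - m and m <= y < H - m):
            continue  # too close to the border
        tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
        win = img[y - half - search:y + half + search + 1,
                  x - half - search:x + half + search + 1]
        res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (mx, my) = cv2.minMaxLoc(res)
        xi, yi = x - search + mx, y - search + my  # matched integer position
        rx, ry = subpixel(h_ref, x, y)
        tx, ty = subpixel(h_img, xi, yi)
        shifts.append((tx - rx, ty - ry))
    return np.mean(shifts, axis=0)  # (dx, dy) in pixels
```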
For segmenting dim point targets and extended targets against a low-resolution sky background, an automatic thresholding method based on spatial gradient information is presented. It can separate point targets and extended targets from the background even under strong noise, providing good segmentation results for point targets as well as accurate shape information for extended targets. The method first roughly marks pixels as background or target using the image gray gradient, then fits the gray values of the background pixels to obtain two thresholds with which the image is segmented according to pixel position, and finally applies a morphological filter to remove noise from the segmented image. Experiments show that, compared with Otsu's method on low-resolution images with an even background, our method works more accurately and robustly.
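A minimal sketch of the three stages, assuming the background gray values are summarized by a single global Gaussian fit whose mean ± k·σ supplies the two thresholds; the gradient threshold, k, and the structuring element are illustrative.

```python
# Hypothetical sketch of gradient-guided two-threshold segmentation
# with morphological cleanup; parameters are assumptions.
import numpy as np
from scipy import ndimage

def segment_targets(img, grad_thresh=5.0, k=3.0):
    g = img.astype(np.float64)
    # 1) rough labeling: low-gradient pixels are taken as background
    gy, gx = np.gradient(g)
    bg = np.hypot(gx, gy) < grad_thresh
    # 2) fit the background gray levels; pixels deviating by more than
    #    k standard deviations on either side are labeled as target
    mu, sigma = g[bg].mean(), g[bg].std()
    target = (g < mu - k * sigma) | (g > mu + k * sigma)
    # 3) morphological opening removes isolated noise responses
    return ndimage.binary_opening(target, structure=np.ones((3, 3)))
```

A faithful implementation of the described method would make the background fit vary with pixel position and choose a morphological filter that preserves single-pixel point targets; the global fit and 3x3 opening above are simplifications.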