Traditional global image registration algorithms are limited in principle: they cannot accurately register scenes with large depth of field or moving objects. Local registration based on dense optical flow is not constrained by a single transformation matrix and can therefore achieve better registration results. However, traditional dense optical flow algorithms are computationally expensive and difficult to run in real time, which limits their application. In recent years, many deep-learning-based dense optical flow algorithms (such as PWC-Net) have emerged; they surpass traditional optical flow algorithms on public datasets and can be estimated in real time. Accordingly, this paper proposes a registration pipeline that uses deep learning to predict dense optical flow, together with a self-built optical flow dataset for supervised training of the network. Using the same network, the registration results obtained with our dataset are better than those obtained with existing datasets.
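The registration step described above uses the predicted dense flow to resample the moving image onto the reference grid. A minimal sketch of that warping step, assuming the flow comes from a network such as PWC-Net (the function name and bilinear scheme here are illustrative, not from the paper):

```python
import numpy as np

def warp_with_flow(img, flow):
    """Warp a moving image toward the reference using a dense flow field.

    img:  (H, W) float array (the moving image)
    flow: (H, W, 2) array; flow[y, x] = (dx, dy) displacement, so
          reference pixel (x, y) samples img at (x + dx, y + dy).
    Uses bilinear interpolation; out-of-bounds samples are clamped.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.clip(y0 + 1, 0, H - 1)
    wx = sx - x0; wy = sy - y0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the flow is estimated per pixel, each region is warped independently, which is what lets this local approach handle large depth-of-field scenes and moving objects that defeat a single global transformation matrix.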
A generative adversarial network denoising algorithm combining three loss functions is proposed to avoid the loss of image detail during denoising. The mean square error loss makes the denoised results similar to the original images, the perceptual loss captures the semantic information of the image, and the adversarial loss makes the images more realistic. The algorithm uses a deep residual network, a densely connected convolutional network, and a wide, shallow network as interchangeable components in the replaceable module of the network. The results show that all three tested networks preserve more detail and achieve a better peak signal-to-noise ratio while removing image noise. Among them, the wide, shallow network, which uses fewer layers, larger convolution kernels, and more feature maps, achieves the best result.
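The three losses above are typically combined as a weighted sum for the generator. A minimal sketch, where `feat` stands in for a pretrained perceptual network (e.g. a VGG layer) and the weights are illustrative, not taken from the paper:

```python
import numpy as np

def combined_loss(denoised, clean, feat, d_score, w_perc=0.1, w_adv=1e-3):
    """Weighted sum of the three losses described above.

    denoised, clean: image arrays
    feat:    callable mapping an image to a feature map (stand-in for a
             pretrained perceptual network)
    d_score: discriminator probability in (0, 1) for the denoised image
    """
    mse = np.mean((denoised - clean) ** 2)              # pixel similarity
    perceptual = np.mean((feat(denoised) - feat(clean)) ** 2)  # semantics
    adversarial = -np.log(d_score + 1e-12)              # push d_score -> 1
    return mse + w_perc * perceptual + w_adv * adversarial
```

The MSE term alone tends to over-smooth; the perceptual and adversarial terms are what pull the denoised output back toward detailed, realistic textures.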
In this paper, a new algorithm for non-uniformity correction of infrared focal plane arrays based on a neural network and a bi-exponential filter is proposed. Owing to the edge-preserving property of the bi-exponential filter, the algorithm estimates the gain and bias coefficients more accurately at strong edges, thereby suppressing the ghosting effect. To suppress the blurring effect, motion detection is carried out before the correction coefficients are updated. A motion evaluation index based on the L1 norm of the temporal variation of the image and on image roughness is designed to improve the accuracy of motion detection. Moreover, an adaptive learning rate calculation method is proposed that makes the learning rate larger in smooth image regions and smaller in edge regions. This yields faster convergence in uniform regions of the image and reduces correction-coefficient estimation errors in edge regions. Several infrared image sequences are used to verify the performance of the proposed algorithm. The results indicate that the proposed method not only preserves image detail but also reduces non-uniformity, and it effectively suppresses the "ghosting" and "blurring" phenomena.
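The per-pixel gain/bias update with an edge-dependent learning rate can be sketched as a single LMS-style step. This is an illustrative reconstruction, not the paper's exact method: the edge-preserving target here is abstracted as a precomputed filtered frame, and the learning-rate scaling is a simple stand-in for the adaptive calculation described above.

```python
import numpy as np

def nuc_step(frame, gain, bias, target, base_lr=0.05):
    """One LMS-style update of per-pixel gain/bias coefficients.

    frame:  raw (H, W) frame from the focal plane array
    target: desired frame, e.g. the output of an edge-preserving
            bi-exponential filter applied to the corrected frame
    The learning rate is reduced where the local gradient is large
    (edge regions), so edges do not corrupt the coefficient estimates.
    """
    corrected = gain * frame + bias
    err = corrected - target
    gy, gx = np.gradient(target)
    edge = np.abs(gx) + np.abs(gy)
    lr = base_lr / (1.0 + edge)          # smaller step at edges
    gain -= lr * err * frame
    bias -= lr * err
    return gain, bias
```

In a full pipeline, this update would be skipped on frames that the motion evaluation index flags as static, which is what suppresses ghosting and blurring.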