Handheld thermal imaging devices can capture images in quick succession, each with a slightly different orientation, producing image series that can be combined into an improved image through multi-image deconvolution. Designing deconvolution algorithms that exploit all of the information in the series and reconstruct a field of view as large as the combined coverage of the collected images is challenging, because the series covers a possibly non-square area. In this paper, we present a multi-image deconvolution method that addresses this boundary-condition problem. First, we estimate the relative geometric transformations between the images to define a rectangular canvas that accommodates the full field of view covered by the series. Next, we formulate deconvolution as a regularized minimization problem with two terms: (i) the residue between the forward transformation applied to the candidate reconstruction and the measured images, and (ii) a regularization term that incorporates image priors. To accommodate the non-square coverage of the combined images, which produces boundary artifacts when the forward model is applied during iterative minimization, we recast the problem so that the original scene is masked, thereby mitigating the effect of unknown image values beyond the image boundaries. We characterize our method on both synthetic and experimental images and observe visual and quantitative improvements near the boundaries, where distortions are attenuated.
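The masked formulation described above can be illustrated with a small numerical sketch. The code below is not the authors' implementation: it assumes a toy forward model (a circular integer shift standing in for the geometric transformation, followed by a Gaussian blur), a binary coverage mask, and simple Tikhonov regularization, and it minimizes the masked objective by gradient descent. All names (`forward`, `adjoint`, `deconvolve`, `lam`) are hypothetical.

```python
# Minimal sketch of masked multi-image deconvolution (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def forward(x, shift, sigma=1.5):
    """Toy forward model: circular integer shift followed by a Gaussian blur."""
    return gaussian_filter(np.roll(x, shift, axis=(0, 1)), sigma, mode="wrap")

def adjoint(r, shift, sigma=1.5):
    """Adjoint of the toy forward model (the blur is self-adjoint in wrap mode)."""
    return np.roll(gaussian_filter(r, sigma, mode="wrap"),
                   (-shift[0], -shift[1]), axis=(0, 1))

def deconvolve(images, shifts, mask, lam=0.05, step=0.4, n_iter=200):
    """Minimize sum_i ||mask*(forward(x, s_i) - y_i)||^2 + lam*||x||^2."""
    x = np.mean(images, axis=0)            # initial estimate on the common canvas
    for _ in range(n_iter):
        grad = 2.0 * lam * x               # gradient of the Tikhonov regularizer
        for y, s in zip(images, shifts):
            residue = mask * (forward(x, s) - y)   # masked data residue:
            grad += 2.0 * adjoint(residue, s)      # pixels outside coverage are ignored
        x -= step * grad / len(images)
    return x

# Tiny synthetic demo: the mask excludes a border of "unknown" pixels.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifts = [(0, 0), (3, -2), (-2, 4)]
mask = np.zeros_like(scene)
mask[4:-4, 4:-4] = 1.0
observations = [forward(scene, s) + 0.01 * rng.standard_normal(scene.shape) for s in shifts]
estimate = deconvolve(observations, shifts, mask)
```

Because the residue is multiplied by the mask before the adjoint is applied, pixels outside the measured coverage contribute nothing to the update, which is the mechanism the abstract credits with suppressing boundary artifacts.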
Many applications rely on thermal imagers to complement or replace visible-light sensors in difficult imaging conditions. Recent advances in machine learning have opened the possibility of analyzing or enhancing such images, yet these methods require large annotated databases. Training approaches that leverage data augmentation via simulated and synthetically generated images therefore offer promising prospects. Here, we report on a method that uses generative adversarial networks (GANs) to synthesize images of a complementary contrast. Starting from a dual-modality dataset of co-registered visible and thermal images, we trained a GAN to generate synthetic thermal images from visible images and vice versa. Our results show that the procedure yields sharp synthesized images that might be used to augment dual-modality datasets or assist in visual interpretation, yet it remains subject to the limitations imposed by the independence of contrast between the thermal and visible modalities.
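As a rough illustration of how such a visible-to-thermal translation network could be trained, the sketch below shows one adversarial update in the style of a conditional GAN (pix2pix-like), which is an assumption on my part: the abstract does not specify the architecture or losses. The tiny networks, the L1 weight `lambda_l1`, and the function `train_step` are all illustrative placeholders.

```python
# Hedged sketch of a conditional GAN training step for visible -> thermal translation.
import torch
import torch.nn as nn

# Tiny stand-in networks; a real model would typically use a U-Net generator
# and a PatchGAN discriminator.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),      # 3-channel visible -> 1-channel thermal
)
discriminator = nn.Sequential(
    nn.Conv2d(3 + 1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1),                 # patch-wise real/fake logits
)

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(visible, thermal, lambda_l1=100.0):
    """One adversarial update on a co-registered (visible, thermal) batch."""
    fake_thermal = generator(visible)

    # Discriminator: distinguish real pairs from synthesized pairs.
    real_logits = discriminator(torch.cat([visible, thermal], dim=1))
    fake_logits = discriminator(torch.cat([visible, fake_thermal.detach()], dim=1))
    d_loss = adv_loss(real_logits, torch.ones_like(real_logits)) + \
             adv_loss(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the measured thermal image.
    fake_logits = discriminator(torch.cat([visible, fake_thermal], dim=1))
    g_loss = adv_loss(fake_logits, torch.ones_like(fake_logits)) + \
             lambda_l1 * l1_loss(fake_thermal, thermal)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Smoke test on random tensors standing in for co-registered image pairs.
vis = torch.rand(2, 3, 64, 64)
thr = torch.rand(2, 1, 64, 64)
print(train_step(vis, thr))
```

Training the reverse (thermal to visible) direction would use the same recipe with the channel counts swapped; the contrast-independence limitation noted in the abstract is not removed by any choice of architecture, since the two modalities do not always carry the same information.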