The prime limitation of optical sensors is their need for an external source of illumination while capturing the scene. This prevents them from recognizing objects in extreme conditions, such as insufficient illumination or severe weather (e.g., under fog or smoke). Thermal imaging sensors have been introduced to circumvent this deficiency; they acquire the image from the thermal radiation emitted by the objects themselves. Technological advancement in thermal imaging enables the visualization of objects beyond the visible range, which promotes its use in many principal applications, such as military, medical, and agricultural domains. However, from a hardware point of view, the cost of a thermal camera is prohibitively higher than that of an equivalent optical sensor. This has led to the use of software-driven approaches, called super-resolution (SR), to enhance the resolution of given thermal images. We propose a deep neural network architecture, referred to as "ThermISRnet," as an extension of our earlier winning architecture in the Perception Beyond the Visible Spectrum (PBVS) thermal SR challenge. We use a progressive upscaling strategy with asymmetrical residual learning in the network, which is computationally efficient for different upscaling factors such as ×2, ×3, and ×4. The proposed architecture consists of different modules for low- and high-frequency feature extraction along with upsampling blocks. The effectiveness of the proposed ThermISRnet architecture is verified by evaluating it on different datasets. The obtained results indicate superior performance as compared to other state-of-the-art SR methods.
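The abstract describes the architecture only at a high level. As a concrete illustration of progressive upscaling with residual learning, the following is a minimal PyTorch sketch; the class names, channel widths, block counts per stage, and single-band thermal input are all assumptions for illustration, not the published ThermISRnet design. The asymmetry is approximated here by giving later, higher-resolution stages fewer residual blocks.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class ProgressiveSRSketch(nn.Module):
    """Progressive-upscaling SR sketch (hypothetical, not the paper's exact
    design): features are refined by residual blocks, then upsampled in x2
    stages via PixelShuffle until the target scale is reached (x4 = two x2
    stages). Fewer blocks in later stages mimics asymmetrical learning."""
    def __init__(self, channels=64, blocks_per_stage=(8, 4)):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)  # single-band thermal input (assumption)
        self.stages = nn.ModuleList([
            nn.Sequential(
                *[ResidualBlock(channels) for _ in range(n_blocks)],
                nn.Conv2d(channels, channels * 4, 3, padding=1),
                nn.PixelShuffle(2),  # x2 spatial upscaling per stage
            )
            for n_blocks in blocks_per_stage
        ])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        for stage in self.stages:
            feat = stage(feat)
        return self.tail(feat)

# Example: a 64x64 low-resolution thermal patch upscaled x4 to 256x256.
lr = torch.randn(1, 1, 64, 64)
sr = ProgressiveSRSketch()(lr)
print(sr.shape)  # torch.Size([1, 1, 256, 256])
```

One advantage of this staged design, consistent with the abstract's efficiency claim, is that intermediate ×2 outputs can be reused: a ×4 model simply chains two ×2 stages, while a ×3 factor would swap in a single stage with a PixelShuffle(3) upsampler.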
We present an approach called "DepthFuseNet" for the fusion of thermal and visible images using convolutional neural networks (CNNs). A thermal image captures the radiated energy of the sensed objects and hence can easily distinguish objects from their background. A visible image (i.e., an image acquired within the visible wavelength range of the electromagnetic spectrum), by contrast, provides more visual context of the objects at high spatial resolution. Owing to this complementary nature of thermal and visible images, combining the two to obtain more meaningful information than either source alone has long been of interest to the community. In the DepthFuseNet method, features are extracted from the given source images using a CNN architecture and integrated using different fusion strategies. The auto-weighted sum fusion strategy performs better than the other existing strategies. To reduce the complexity of the architecture, we use depthwise convolution in the network. The experimental evaluation demonstrates that the proposed method preserves salient features from the source images, and hence it performs better than other state-of-the-art fusion methods in terms of both qualitative and quantitative assessments.
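To make the two ideas named in the abstract concrete, here is a minimal PyTorch sketch of depthwise (separable) convolution and of one plausible reading of an auto-weighted sum fusion, in which per-branch weights are derived from mean feature activity and normalized with a softmax. Everything below, including the shared two-branch encoder and all names, is an illustrative assumption rather than the published DepthFuseNet design.

```python
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch):
    """Depthwise separable conv: per-channel 3x3 filtering followed by a
    1x1 pointwise mix; much cheaper than a full 3x3 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1),                          # pointwise
        nn.ReLU(inplace=True),
    )

class FusionSketch(nn.Module):
    """Hypothetical two-branch fusion network: a shared encoder extracts
    features from each modality, and the fused map is a weighted sum whose
    weights come from each branch's mean activation strength (one possible
    interpretation of 'auto-weighted')."""
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Sequential(
            depthwise_separable(1, channels),
            depthwise_separable(channels, channels),
        )
        self.decode = nn.Sequential(
            depthwise_separable(channels, channels),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, thermal, visible):
        f_t = self.encode(thermal)
        f_v = self.encode(visible)
        # Per-image activity score per branch -> softmax -> fusion weights.
        activity = torch.stack([f_t.abs().mean(dim=(1, 2, 3)),
                                f_v.abs().mean(dim=(1, 2, 3))], dim=1)
        w = torch.softmax(activity, dim=1)          # shape (B, 2)
        w_t = w[:, 0].view(-1, 1, 1, 1)
        w_v = w[:, 1].view(-1, 1, 1, 1)
        fused = w_t * f_t + w_v * f_v               # auto-weighted sum
        return self.decode(fused)

# Example: fuse a registered thermal/visible pair of 128x128 images.
t = torch.randn(1, 1, 128, 128)
v = torch.randn(1, 1, 128, 128)
print(FusionSketch()(t, v).shape)  # torch.Size([1, 1, 128, 128])
```

The appeal of such an activity-driven weighting, and presumably of the auto-weighted strategy the abstract favors, is that the fusion adapts per image: whichever modality carries stronger features in a given scene contributes more to the fused output, with no hand-tuned blending constant.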