We present an approach called “DepthFuseNet” for the fusion of thermal and visible images using convolutional neural networks (CNNs). A thermal image captures the energy radiated by sensed objects and can therefore easily distinguish objects from their background. A visible image (i.e., an image acquired within the visible wavelength range of the electromagnetic spectrum), in contrast, provides richer visual context of the objects at high spatial resolution. Owing to this complementary nature of thermal and visible images, combining the two to obtain more meaningful information than either source provides alone has long been of interest to the community. In the DepthFuseNet method, features are extracted from the source images using a CNN architecture and integrated using different fusion strategies, among which the auto-weighted sum strategy outperforms the existing alternatives. To reduce the complexity of the architecture, we use depthwise convolution in the network. The experimental evaluation demonstrates that the proposed method preserves the salient features of the source images and hence outperforms other state-of-the-art fusion methods in both qualitative and quantitative assessments.
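To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two ideas the abstract names: depthwise convolution for a lightweight feature extractor, and an auto-weighted sum for fusing the two feature streams. This is not the authors' implementation; the encoder layout and the softmax-of-activity weighting are illustrative assumptions inferred only from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseEncoder(nn.Module):
    """Lightweight feature extractor built around a depthwise convolution
    (hypothetical layout; the paper's architecture may differ)."""
    def __init__(self, in_channels=1, features=16):
        super().__init__()
        self.pointwise_in = nn.Conv2d(in_channels, features, kernel_size=1)
        # Depthwise convolution: groups == channels, so each channel gets its
        # own spatial filter, reducing parameters versus a standard conv.
        self.depthwise = nn.Conv2d(features, features, kernel_size=3,
                                   padding=1, groups=features)
        self.pointwise_out = nn.Conv2d(features, features, kernel_size=1)

    def forward(self, x):
        x = F.relu(self.pointwise_in(x))
        x = F.relu(self.depthwise(x))
        return F.relu(self.pointwise_out(x))

def auto_weighted_sum(feat_a, feat_b):
    """Fuse two feature maps with weights derived from the features themselves.

    Here the per-pixel weight of each source is the softmax of its mean
    absolute activation, so the more active source dominates at each
    location. This is one plausible reading of an "auto-weighted sum";
    the paper's exact weighting scheme is not specified in the abstract.
    """
    act_a = feat_a.abs().mean(dim=1, keepdim=True)  # activity of thermal features
    act_b = feat_b.abs().mean(dim=1, keepdim=True)  # activity of visible features
    weights = torch.softmax(torch.cat([act_a, act_b], dim=1), dim=1)
    w_a, w_b = weights[:, 0:1], weights[:, 1:2]
    return w_a * feat_a + w_b * feat_b

# Usage: extract features from each source, then fuse them; a decoder
# (omitted here) would map the fused features back to a single image.
encoder = DepthwiseEncoder()
thermal = torch.rand(1, 1, 256, 256)   # dummy single-channel thermal image
visible = torch.rand(1, 1, 256, 256)   # dummy single-channel visible image
fused_features = auto_weighted_sum(encoder(thermal), encoder(visible))
```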
Keywords: image fusion, computer programming, thermography, optical engineering, image processing, convolution, thermal modeling