In recent years, domestic infrared focal plane array detectors have made comprehensive progress, but image crosstalk remains a major unresolved issue. Research on infrared image crosstalk correction has mostly focused on detector fabrication, and methods that correct crosstalk through image processing are relatively few. We address the crosstalk phenomenon commonly observed in domestic infrared focal plane array detectors, analyze its generation mechanism at the readout-circuit level, and establish an RC-system crosstalk model. On this foundation, we propose an image crosstalk correction method for infrared focal plane array detectors based on the RC model, which corrects crosstalk images directly through image processing. First, we simulate the degradation and restoration of a step function using the proposed correction principles, thereby validating the effectiveness of the method conceptually. We then apply the method to actually collected images of rod targets of three different sizes; both qualitative and quantitative analyses demonstrate the effectiveness of our image crosstalk correction method. The study shows that this method provides a simple and practical image processing solution to the crosstalk problem in domestic analog infrared focal plane array detectors.
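The abstract does not give the RC model's equations, but the step-function degradation-and-restoration experiment it describes can be illustrated with a minimal sketch, assuming the crosstalk behaves as a first-order recursive low-pass filter along the readout direction with a hypothetical coefficient `alpha`; the correction is then the exact inversion of that recursion.

```python
import numpy as np

def rc_degrade(x, alpha):
    # Hypothetical first-order RC response along the readout direction:
    #   y[n] = alpha * y[n-1] + (1 - alpha) * x[n]
    y = np.empty(len(x), dtype=float)
    prev = 0.0
    for n, v in enumerate(x):
        prev = alpha * prev + (1.0 - alpha) * v
        y[n] = prev
    return y

def rc_restore(y, alpha):
    # Invert the recursion: x[n] = (y[n] - alpha * y[n-1]) / (1 - alpha)
    x = np.empty(len(y), dtype=float)
    prev = 0.0
    for n, v in enumerate(y):
        x[n] = (v - alpha * prev) / (1.0 - alpha)
        prev = v
    return x

# Ideal step function, as in the conceptual validation described above
step = np.concatenate([np.zeros(16), np.ones(16)])
blurred = rc_degrade(step, alpha=0.6)      # crosstalk-degraded signal
restored = rc_restore(blurred, alpha=0.6)  # corrected signal
```

Because the restoration is an exact algebraic inversion of the degradation recursion, the restored step matches the original up to floating-point precision; the paper's actual model and coefficient estimation may differ.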
Grayscale mapping of infrared images is an important research direction in infrared image visualization. The mapping method directly determines key visualization indicators such as detail preservation and the overall perception of the original infrared image, and can be considered the foundation for subsequent detail enhancement. Although current mainstream grayscale mapping methods for infrared images achieve good results, there is still room for improvement in preserving image details and enhancing image contrast. In this paper, we propose a grayscale mapping method for infrared images based on generative adversarial networks. First, our discriminator adopts a global-local structure, which allows the network to consider both global and local losses, effectively improving image quality in local regions of the mapped image. Second, we introduce a perceptual loss into the loss function, which encourages the generated image to share features with the target image as closely as possible. We conducted subjective and objective evaluations of the mapping results of our method and eight mainstream methods; the evaluations show that our method is superior in preserving image details and enhancing image contrast. A further comparison with a parameter-free tone mapping operator using a generative adversarial network (TMO-Net) indicates that our method avoids problems such as target-edge blur and artifacts, yielding mapped images of higher visual quality.
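The combination of global and local adversarial terms with a perceptual term can be sketched as follows. This is a toy illustration, not the paper's implementation: the feature extractor `phi` here is a stand-in (local image gradients) for whatever pretrained CNN features the authors use, and the weighting `lam` is a hypothetical choice.

```python
import numpy as np

def phi(img):
    # Stand-in feature extractor; a real perceptual loss would use
    # features from a pretrained CNN. Here: local gradients as a toy
    # "feature map" so the example is self-contained.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.concatenate([gx.ravel(), gy.ravel()])

def perceptual_loss(generated, target):
    # L1 distance between feature representations of the two images
    return float(np.mean(np.abs(phi(generated) - phi(target))))

def total_generator_loss(generated, target, adv_global, adv_local, lam=10.0):
    # Global + local adversarial terms (scalars from the two discriminator
    # branches) plus a weighted perceptual term; lam is hypothetical.
    return adv_global + adv_local + lam * perceptual_loss(generated, target)
```

The design intent mirrors the abstract: the local adversarial branch penalizes patch-level implausibility that a single global score would average away, while the perceptual term ties the generated image's features to the target's.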
Leakage of volatile organic compound (VOC) gas is one of the main sources of air pollution and poses a great threat to health and safety. The optical gas imaging (OGI) technique uses a mid-wave infrared camera to visualize VOC gas and helps observers detect leakage. In this paper, we propose a novel method that uses deep learning and convolutional neural networks to detect VOC gas leakage from a single-frame mid-wave infrared image. The proposed method consists of three components: a color-transformation pre-processing unit, feature extraction networks, and single-stage object detection sub-networks. Location-aware deformable convolution, which adjusts its sampling grid to fit the ever-changing shape of VOC gas plumes, is employed for better feature extraction. In addition, a new loss function called the leakage center loss is introduced to estimate where a leak originates; it forces the network to pay more attention to the leakage center, where gas density is higher than in the dissipated regions. The proposed method is evaluated on a self-collected dataset in which thousands of gas images were captured and annotated. Experimental results show that location-aware deformable convolution contributes an improvement of about 7% mAP, while the leakage center loss contributes about 4% mAP. Our method achieves 81% mAP, outperforming existing general-purpose object detection methods. With a simplified network architecture, the proposed method can also be implemented on embedded systems for handheld VOC leakage detection devices.
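The idea of weighting the loss toward the leakage center can be sketched as below. This is a hedged illustration, not the paper's formulation: it assumes a Gaussian weight map peaked at the annotated leakage center (with a hypothetical spread `sigma`) applied to a dense prediction map, so that errors near the denser source region cost more than errors in dissipated regions.

```python
import numpy as np

def leakage_center_weight(h, w, center, sigma):
    # Gaussian weight map peaking at the annotated leakage center
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def leakage_center_loss(pred, target, center, sigma=4.0):
    # Weighted L2 between predicted and target density maps; the weight
    # map concentrates the penalty around the leakage source.
    w = leakage_center_weight(*pred.shape, center=center, sigma=sigma)
    return float(np.sum(w * (pred - target) ** 2) / np.sum(w))
```

An identical prediction error thus incurs a larger loss when it falls at the source than at the plume's dissipated fringe, which is the attention-focusing behavior the abstract describes.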