UAV thermal infrared remote sensing imagery allows higher-resolution land surface temperature (LST) to be acquired, but temperature drift during thermal camera data acquisition reduces the reliability of the data. This drift cannot be accurately removed by the camera's own automatic calibration or by a fixed calibration function. In addition, when a thermal camera acquires data at low altitude, the transmission of thermal radiation is still affected by the atmosphere between the surface and the flight altitude, so the data must be atmospherically corrected to characterize the actual LST. In this paper, the errors caused by temperature drift are removed during data processing by feature matching and linear fitting, yielding more accurate brightness-temperature mosaics. In the retrieval step, the LST is obtained using synchronously measured atmospheric temperature and humidity profiles, based on the principle of thermal radiative transfer and combined with the surface emissivity, giving high-accuracy LST. The feasibility of the algorithm is verified using continuously measured in situ LST. The results show that, based on the synchronous atmospheric temperature and humidity profiles, the atmospheric influence can be effectively eliminated and the retrieved LST has high accuracy. The experimental results indicate that the proposed method is a feasible way to obtain high-precision LST from UAV thermal infrared remote sensing images.
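A hedged sketch of the drift-correction idea described above: brightness temperatures at tie points matched between overlapping frames are used to fit a per-frame linear correction against a reference frame. The feature detector, thresholds, and function names below are illustrative choices, not taken from the paper.

```python
import cv2
import numpy as np

def to_uint8(bt):
    """Normalize a brightness-temperature array to 8 bit for feature matching."""
    lo, hi = np.percentile(bt, (1, 99))
    return np.clip((bt - lo) / (hi - lo + 1e-6) * 255, 0, 255).astype(np.uint8)

def linear_drift_correction(bt_ref, bt_frame):
    """Return (a, b) so that a * bt_frame + b matches bt_ref at matched tie points."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(to_uint8(bt_ref), None)
    k2, d2 = orb.detectAndCompute(to_uint8(bt_frame), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    # Sample brightness temperature at each matched keypoint pair.
    t_ref = np.array([bt_ref[int(k1[m.queryIdx].pt[1]), int(k1[m.queryIdx].pt[0])]
                      for m in matches])
    t_cur = np.array([bt_frame[int(k2[m.trainIdx].pt[1]), int(k2[m.trainIdx].pt[0])]
                      for m in matches])

    a, b = np.polyfit(t_cur, t_ref, deg=1)   # least-squares linear fit
    return a, b

# usage: bt_corrected = a * bt_frame + b, applied before mosaicking
```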
Natural camouflage is a crucial defense mechanism that species rely on for survival. The most common strategies in natural camouflage are background matching, environmental texture mimicry, and disruptive coloration. The Camouflaged Object Segmentation (COS) task focuses on segmenting camouflaged objects that blend seamlessly into the surrounding background, and it poses a greater challenge than the Salient Object Segmentation (SOS) task. Currently, most camouflaged-target segmentation models accomplish the task by enhancing the texture and contour features of the target, fusing hierarchical image features, or incorporating frequency-domain information of the target. However, these models are built on a single framework that relies on image context aggregation to segment camouflaged objects, overlooking the rich and diverse texture features of targets in both the spatial and frequency domains. To address this issue, we introduce frequency-component information as an auxiliary enhancement cue for images and design a dual-stream encoder that processes and fuses texture features from both RGB images and frequency-perspective images, refining the multi-level features used for camouflaged-target texture and contour segmentation. Specifically, we propose a frequency-aware aggregation module that fuses multi-scale features of target textures from a frequency perspective and consists of three offline discrete cosine transform modules at different scales and an image fusion module. The dual-stream encoder explores global and local texture representations of targets in both the spatial and frequency domains, with each layer of the spatial-domain encoder obtaining frequency-domain features as masks from the corresponding layer of the frequency-domain encoder. Additionally, a learnable transposed-convolution decoder is used to enhance contour capture capability. Experimental results demonstrate that our model achieves state-of-the-art segmentation accuracy.
In response to the requirements for small, low-power, high-precision target recognition and ranging systems on platforms such as unmanned vehicles and drones, a multi-camera array bionic compound eye system and its accompanying target recognition and ranging algorithm have been studied. In this paper, we design a multi-camera array compound eye by combining lenses of multiple focal lengths with a CMOS array. The compound eye system has a compact size and low power consumption. Integrated with an RV1126 core processing board for image processing, it can perform target recognition and ranging independently, without an external computer, and it provides real-time streaming of video and target distance information. Furthermore, this paper introduces a multi-focal-length ranging algorithm based on an improved YOLOv5. By enhancing the small-object recognition capability of YOLOv5 and effectively utilizing the multi-scale target information obtained from the multiple focal-length lenses, the system achieves improved ranging accuracy, especially for distant small targets. The proposed multi-camera array compound eye system has dimensions of 80 mm × 80 mm × 80 mm, a weight of 460 g, a power consumption of 6.5 W, and a single-frame ranging time of 60 ms. Experimental results demonstrate that it successfully measures the distance to human targets within the range of 30-120 m, with an average ranging error of less than 1%. This bionic compound eye system holds significant potential for a wide range of applications, including emergency obstacle avoidance for drones, munition-borne reconnaissance, and navigation for underwater unmanned submersibles.
In conventional infrared-polarization fusion methods based on the HSV color space, low color contrast and information loss or artifacts are persistent problems that impair the observation and processing of the fusion results. This paper analyzes the cause of the low color contrast in the conventional method and proposes an improved method. We find that improvements can be made to Channel S and Channel V separately, enhancing details in the regions of interest while maintaining image precision. For Channel S, saliency extraction and morphological filtering are applied to obtain the salient features of the intensity image and the degree-of-polarization image while suppressing redundant background information; detail enhancement in Channel S is then achieved by fusing the common and unique features of the extracted salient images. For Channel V, fusing the common and unique features of the salient degree-of-polarization image and the intensity image enhances the details in that channel. Experimental results show that the proposed method achieves better color contrast and fewer artifacts than conventional fusion methods.
Underwater optical imaging has important application value, but it is also challenging. Traditional underwater imaging often suffers from uneven illumination, blurred texture details, and low contrast. In this paper we propose an underwater active polarization imaging algorithm based on low-rank sparse decomposition to address these problems. Following the principle of underwater polarization imaging, the algorithm first enhances the target information in the acquired polarization images. Then, exploiting the low-rank characteristics of backscatter images in the scattered light field, the background information and target information are separated from the captured images by low-rank sparse decomposition, so that a high-quality image can be recovered from turbid water. Experiments at different turbidity levels demonstrate that the proposed algorithm improves image contrast, preserves image details, and removes background scattering at the same time. Moreover, the proposed method can effectively recover multiple targets and significantly improve imaging quality, providing a new approach to clear underwater polarization imaging and detection.
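A minimal sketch of low-rank plus sparse decomposition (robust PCA via an inexact augmented Lagrange multiplier iteration), in which the backscatter component is treated as low-rank and the target component as sparse. The penalty parameters and stopping rule below are illustrative, not the paper's exact settings.

```python
import numpy as np

def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose image matrix D into low-rank L (backscatter) + sparse S (target)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    Y = np.zeros_like(D)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0)

    for _ in range(max_iter):
        # Singular-value thresholding step for the low-rank part.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt
        # Soft-thresholding step for the sparse (target) part.
        S = soft(D - L + Y / mu, lam / mu)
        R = D - L - S
        Y += mu * R
        if np.linalg.norm(R) / (np.linalg.norm(D) + 1e-12) < tol:
            break
    return L, S
```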
Imaging scenes or detecting objects hidden from a camera's view has extensive applications. Recent studies have demonstrated impressive results with time-of-flight-based non-line-of-sight (NLOS) imaging systems, but several limitations remain. Measurements are captured by time-resolved detectors that scan sampling spots on a diffuse wall: using many sampling points slows data acquisition, while reconstruction quality cannot be ensured with few sampling points. Here, we propose a method, transient interpolation of the original sampled data, to resolve the trade-off between data-collection time and imaging quality. We calculate the flight time of photons, simulate photon-count data with 15×15 sampling points, and multiply the number of measurements with our method to obtain high-quality reconstructions in a shorter time. The simulated experiments show that, compared with the original data, the transient-multiplied data achieve high-quality reconstruction and, in theory, shorten the measurement-capture time. Furthermore, our method is robust across different algorithms and can potentially be used in real-time imaging.
Radiation thermometry is a non-contact temperature measurement technique with important applications in quantitative remote sensing, industrial thermal monitoring, biomedical engineering, and the military field. The infrared radiation of an object is directly proportional to its emissivity, which is therefore a key parameter affecting radiation thermometry. To obtain the spectral emissivity of an object, this paper proposes a measurement method based on the radiation measured at multiple temperatures. Starting from Planck's law, an expression for the spectral emissivity is derived theoretically from the relationship between spectral emissivity, contact temperature, and radiation. Simulations based on this derivation are carried out for three samples over the 8-14 μm waveband, assuming the spectral emissivity does not change with temperature. Two algorithms are used to avoid the singular values that arise in direct calculation. With the constrained linear least-squares method, the average relative errors of the three samples are 7.0%, 7.2%, and 6.2%, and the maximum relative errors are 22.1%, 18.9%, and 15.0%. With the improved constrained linear least-squares method, the average relative errors are 2.2%, 1.1%, and 3.0%, and the maximum relative errors are 6.7%, 3.2%, and 4.2%. The simulation results verify the feasibility of retrieving spectral emissivity from multi-temperature radiation and show that the improved constrained linear least-squares method yields smaller average relative errors.
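A hedged sketch of per-wavelength emissivity retrieval from radiance measured at several known contact temperatures, using bounded (constrained) linear least squares. The radiative model L = eps*B(T) + (1 - eps)*L_env used here is a common simplification and may differ from the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

C1 = 1.191042e8   # W um^4 m^-2 sr^-1, first radiation constant (radiance form)
C2 = 1.438777e4   # um K, second radiation constant

def planck(lam_um, T):
    """Blackbody spectral radiance; wavelength in micrometres, T in kelvin."""
    return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

def emissivity(lam_um, temps, L_meas, L_env=0.0):
    """L_meas: array (n_temps,) of measured radiance at one wavelength."""
    B = planck(lam_um, np.asarray(temps, float))
    A = (B - L_env).reshape(-1, 1)              # design matrix, single unknown eps
    b = np.asarray(L_meas, float) - L_env
    res = lsq_linear(A, b, bounds=(0.0, 1.0))   # constrain 0 <= eps <= 1
    return res.x[0]

# e.g. eps_10um = emissivity(10.0, temps=[310, 330, 350, 370], L_meas=measured)
```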
The time-of-flight (TOF) camera has recently received significant attention due to its small size, low cost, and low power consumption, and it can be widely used in fields such as automatic navigation and machine vision. A TOF camera can compute 3D information of targets at dozens of frames per second. However, accuracy remains poor in the presence of various unavoidable disturbances; in particular, imaging distance and object reflectivity are significant factors. In this study, the depth imaging conditions, including ambient light, detection distance, and object reflectivity, are analyzed theoretically using differential entropy. Because many coupled factors disturb the imaging accuracy simultaneously, we propose a supervised learning method, an entropy-based k-nearest neighbor model built on differential entropy. Experiments show that this method can significantly improve the accuracy of the depth data obtained by a TOF camera.
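A simplified illustration of correcting TOF depth with a supervised k-nearest-neighbour model. The paper weights neighbours with a differential-entropy-based criterion; here a plain distance-weighted kNN regressor stands in for that scheme, and the feature and file names are purely illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training data: features measured alongside ground-truth depth (assumed files).
# Columns: raw TOF depth, return amplitude, ambient-light level.
X_train = np.load("tof_features.npy")        # shape (n_samples, 3)
d_true = np.load("tof_true_depth.npy")       # shape (n_samples,)

# Learn the depth error as a function of the imaging conditions.
err = d_true - X_train[:, 0]
knn = KNeighborsRegressor(n_neighbors=15, weights="distance").fit(X_train, err)

# Correct new measurements: the predicted error is added back to the raw depth.
X_new = np.load("tof_features_new.npy")
depth_corrected = X_new[:, 0] + knn.predict(X_new)
```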
Based on a large body of research on ICCD/ICMOS devices, this paper reviews and summarizes the performance evaluation of ICCD/ICMOS and the main features of such products. First, starting from the definition of ICCD/ICMOS, the development and application background of this kind of digital low-light-level (LLL) detector is described. Then, relevant research results are summarized in 12 aspects, including system resolution, signal-to-noise ratio, static imaging quality, system modulation transfer function, photoelectric response uniformity, quality factor, moire fringes, light-energy coupling efficiency, vibration characteristics, photoelectron gain, dynamic range, and spectral response. This can serve as a basis for the inspection and evaluation of such products and also deepen the understanding of these devices, providing guidance for promoting their development and progress.
Salient object detection (SOD) has become an active research direction with extensive applications in computer vision. Although integrating RGB and thermal infrared (RGB-T) data has proven effective in adverse environments, RGB-T SOD methods have difficulty highlighting salient objects completely when the objects cross the image boundary. To address this problem, this paper proposes an effective RGB-T SOD algorithm based on multi-spectral co-connectivity (MSCC) and collaborative graph ranking. Specifically, we introduce a multi-spectral weighted color distance to construct an improved undirected weighted graph and compute the MSCC-based saliency map. Simultaneously, the MSCC-based background probability map is calculated and used in the subsequent selection of true background seeds. We then apply collaborative graph learning (CGL) and compute the CGL-based saliency map in a two-stage ranking framework. Finally, the two saliency maps are integrated by multiplication or averaging to enhance the final saliency result. Experimental comparisons on five quantitative evaluation indicators between the proposed algorithm and nine state-of-the-art methods on the RGB-thermal datasets VT821 and VT1000 demonstrate the robustness and superiority of the proposed work.
The compound eye of insects is an ideal miniaturized, multi-aperture, large-field-of-view optical system. It also has intelligent detection capabilities, including high sensitivity for detecting moving targets and high resolution for light intensity, wavelength (color), or polarization. In this paper, a compact visible-band bionic compound eye system based on a micro-surface fiber faceplate is studied. It uses nine micro-lens groups with specific divergent lines of sight and a micro-surface fiber faceplate directly coupled to a large-area (5120×5120 pixel) CMOS camera. The nine micro-lens groups project their images onto the micro-surfaces of the fiber faceplate, implementing a sub-aperture imaging system with nine partially (~50%) overlapping fields of view (FOVs) and real-time acquisition and output. The compound eye system features a large (stitched) FOV, super-resolution imaging within the overlapping FOVs, short-distance stereo vision, and intelligent high-sensitivity detection of moving targets. With polarizers or filters added, the system can also implement full-polarization or multispectral imaging in a practical configuration. The compound eye system has broad application prospects in emergency obstacle avoidance, missile-borne reconnaissance, short-range fuzing, and navigation of underwater unmanned submersibles. At present, a compound eye system with an 80° FOV has been developed, and a real-time large-FOV image stitching method has been applied. In the system initialization stage, image calibration and non-uniformity correction are performed as pre-processing. In the real-time large-FOV stitching stage, CUDA parallel acceleration is used; single-frame stitching takes less than 30 ms, which meets the real-time processing requirement.
The small video camera of a video life detector, which can enter narrow spaces that rescuers can hardly reach, is used to obtain color video of people trapped in the ruins after disasters such as earthquakes. Because of covering rubble and dust, it is difficult to detect the positions of trapped persons effectively and judge their physical condition from the video, which reduces search efficiency during the golden 72-hour rescue window. In this paper, a small dual-band (LWIR/VIS) video camera with a common optical path is designed. The dual-band camera contains a micro color video camera for scene details and a miniaturized thermal imaging component, whose detector is a micro uncooled infrared focal plane array (IRFPA), for capturing the location and vitality of trapped people. The field of view (FOV) of the color video camera is partially matched with the FOV of the IRFPA component through a hot mirror that serves as the visible/infrared beam splitter; in other words, the dual-band imaging system is designed with a common optical path. A real-time dual-band video fusion algorithm is then implemented on a DSP image processing platform. As a result, people and other hot targets in the scene are highlighted in the fused video, providing a basis for target detection and decision-making during rescue.
KEYWORDS: Signal to noise ratio, Imaging systems, Signal detection, Modulation transfer functions, Interference (communication), Signal processing, Target detection, Sensors, Image processing
Signal-to-noise ratio (SNR) and MTF are important indices for evaluating the performance of optical systems. However, whether they are used alone or jointly, they cannot intuitively describe the overall performance of the system. Therefore, an index reflecting comprehensive system performance is proposed: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for atmospheric attenuation, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model based on MRP is given.
A method of calculating 1D polarimetric imaging results from 3D polarimetric intensity measurements is introduced, and the influence of detector noise on the calculated results is studied. Noise analysis is carried out mainly for the commonly used Pickering imaging method, Fessenkov's imaging method, and the modified Pickering imaging method, with additive noise and shot noise as the real noise sources. The analysis and simulation results show that, if the analyzer angles are evenly distributed over the half-circle, the additive-noise variance of the calculated image is related only to the channel number N of the polarization imaging system, while the shot-noise variance of the calculated image is related to the Stokes vector, the calculated angle, the channel number N, and the polarization imaging modality. If the analyzer angles are not evenly distributed over the half-circle, each case must be analyzed concretely. On the whole, the modified Pickering method is the recommended imaging modality because it suppresses the noise of the calculated image, owing to its larger number of channels, and is not affected by the incident Stokes vector or the calculated angle.
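A sketch of the underlying estimation step: recovering the linear Stokes parameters from intensity images taken behind an analyzer at N angles spread over the half-circle, via a least-squares pseudoinverse. The four-angle example and noise handling are illustrative only.

```python
import numpy as np

def stokes_from_intensities(intensities, angles_rad):
    """intensities: array (N, H, W); angles_rad: analyzer angles, shape (N,)."""
    th = np.asarray(angles_rad)
    # Malus-law measurement model: I(theta) = 0.5*(S0 + S1*cos2theta + S2*sin2theta)
    A = 0.5 * np.stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)], axis=1)
    I = intensities.reshape(len(th), -1)           # (N, H*W)
    S = np.linalg.pinv(A) @ I                      # (3, H*W), least-squares fit
    S0, S1, S2 = S.reshape(3, *intensities.shape[1:])
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)   # degree of linear polarization
    aop = 0.5 * np.arctan2(S2, S1)                          # angle of polarization
    return S0, S1, S2, dolp, aop

# e.g. a 4-channel system: angles = np.deg2rad([0, 45, 90, 135])
```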
The measurement accuracy of the rotation and slope of terrain is a critical factor for the performance of some scene-simulation application systems. As the main illuminant observed outdoors, the sun provides a rich source of information about the scene. In this paper, we analyze the relationship between the coordinates of the sun in photographs and the zenith and azimuth angles of the camera. By fitting a model of the predicted sun position to the pinhole camera model, we show how to measure the rotation and slope of terrain from a photograph containing the sun. We test our method on a sequence of photographs with known camera parameters and obtain deviations of less than 1.7° for the rotation angle and 2.2° for the slope angle of the terrain. This measurement method using a photograph containing the sun can be useful for a variety of practical applications such as navigation, time measurement, and camera calibration.
KEYWORDS: Infrared radiation, Ray tracing, Infrared imaging, Monte Carlo methods, Bidirectional reflectance transmission function, Scene simulation, 3D modeling, Infrared technology, Ions, Sensors
Infrared image generation technology is widely used in infrared imaging system performance evaluation, battlefield environment simulation, and military personnel training, which require a physically accurate and efficient method for infrared scene simulation. A parallel multiple-path tracing method based on OptiX was proposed to address this need, which not only increases computational efficiency compared with serial ray tracing on a CPU but also produces relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed, and a multiple-path tracing method based on OptiX was developed accordingly. Furthermore, Monte Carlo integration was employed to solve the radiative transfer equation, with importance sampling applied to accelerate the convergence of the integral. After that, the framework of the simulation platform and its sensor-effects simulation diagram were given. Finally, the results showed that the method could generate relatively accurate radiance images if a precise importance sampling method was available.
Temperature calculation is a prerequisite for infrared radiation simulation, which is widely applied in the military field. However, conventional sequential temperature computation approaches are computationally inefficient and inadequate with regard to thermal details. A method of parallel temperature calculation based on OptiX is proposed to solve this problem. First, a brief introduction to OptiX is given, and its abilities to implement parallel temperature calculation and infrared scene simulation are demonstrated. As the foundation of the temperature calculation, the one-dimensional thermal conduction equation and boundary conditions of the natural scene are then described. After that the basic equations are generalized to the case of multilayer materials. Finally, the relationship between the geometric models and relevant material properties is articulated to facilitate the temperature calculation procedure. The results show that the developed platform is feasible for parallel temperature calculation, and qualitative analysis is preliminarily presented to substantiate the rationality of the simulation result.
The leakage of toxic or hazardous gases not only pollutes the environment but also threatens people's lives and property. Many countries attach great importance to rapid and effective gas-leak detection technology and instrument development. However, existing gas-leak imaging detection systems are generally limited to narrow-band cooled focal-plane imaging in the medium-wavelength infrared (MWIR) or long-wavelength infrared (LWIR), which makes it difficult to detect the common kinds of leaking gases; moreover, because a costly cooled focal plane array is used, wider application is severely limited. To address this issue, a wide-band gas-leak IR imaging detection system using an uncooled focal plane array (UFPA) detector is proposed, composed of a wide-band IR optical lens, sub-band filters with a switching device, a wide-band UFPA detector, and video processing and system control circuitry. A wide-band (3-12 μm) UFPA detector is obtained by replacing the protection window and optimizing the structural parameters of the detector. A large-relative-aperture (F# = 0.75), wide-band (3-12 μm) multispectral IR lens is developed using a focus-compensation method that accounts for the thickness of the narrow-band filters. The gas-leak IR image quality and detection sensitivity are improved by IR image non-uniformity correction (NUC) and digital detail enhancement (DDE) technologies. The system takes full advantage of the wide-band (MWIR and LWIR) response of the UFPA detector and digital image processing to provide gas-leak video that is easy for the human eye to observe. Many kinds of gases invisible to the naked eye can be sensitively detected and visualized. The designed system has many advantages, such as scanning a wide area at once, locating the leak source quickly, and visualizing the gas plume intuitively. Simulation experiments show that gas IR imaging detection has great advantages and wide promotion potential compared with traditional techniques such as point-contact or contactless line detection.
The imaging polarimeter is one of the primary tools for state-of-polarization analysis, and it enhances the information available in a variety of applications. The foundations of polarimetry are discussed. Two classes of typical imaging polarimeters are reviewed: division-of-time polarimeters and simultaneous-measurement polarimeters. Each class is subdivided into several kinds of polarimeters, and their characteristics are discussed. Our current work on imaging polarimeters is presented at the end.
A detector-MTF-based micro-scanning image reconstruction (dMTF-MSIR) algorithm was presented to reduce image blur due to the spatial integration degradation effect (SIDE) of the focal plane detector. Firstly, a high-resolution oversampled image was generated from four successive frames of the micro-scanning image sequence using the inter-frame difference oversampling reconstruction (IFDOR) algorithm, with the required inherent inter-frame offsets obtained by calibration. Secondly, a Wiener filter was built based on the SIDE model characterizing the image blur due to spatial integration of the radiation intensity distribution over the sensor cell surface. Finally, a high-resolution reconstructed image was generated by processing the oversampled image with the Wiener filter to reduce the SIDE blur. Simulation results showed that, if the spatial sampling frequency of the focal plane detector is fixed and the micro-scanning images are noise-free, the loss of reconstructed image detail increases with the duty cycle of the detector. However, the influence of the duty cycle is gradually exceeded by that of image noise as the noise level increases, in which case suppressing image noise should be given priority. Furthermore, the performance of the presented method is restricted by aliasing, so the imaging light path should include an optical low-pass filter whose cutoff frequency is less than or equal to twice the Nyquist sampling frequency of the detector. The experiment based on infrared images of an actual scene showed that images reconstructed by the presented method have higher contrast and sharpness than those of the IFDOR algorithm.
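A minimal sketch of the Wiener deblurring step: the spatial-integration MTF of a detector with active width w = duty_cycle * pitch is modelled as a separable sinc, and a Wiener filter with a scalar noise-to-signal ratio is applied to the oversampled (micro-scanned) image. The pitch, duty cycle, and NSR values are illustrative and do not reproduce the paper's exact SIDE model.

```python
import numpy as np

def wiener_deblur(oversampled, pitch, duty_cycle, nsr=1e-2):
    """oversampled: 2x2 micro-scanned image, sample spacing = pitch / 2."""
    H, W = oversampled.shape
    w = duty_cycle * pitch                              # active detector width
    fy = np.fft.fftfreq(H, d=pitch / 2.0)
    fx = np.fft.fftfreq(W, d=pitch / 2.0)
    # Separable detector MTF from spatial integration over the active area.
    mtf = np.abs(np.outer(np.sinc(w * fy), np.sinc(w * fx)))
    wiener = mtf / (mtf**2 + nsr)                       # Wiener filter (real-valued MTF)
    F = np.fft.fft2(oversampled)
    return np.real(np.fft.ifft2(F * wiener))

# e.g. restored = wiener_deblur(img_oversampled, pitch=30e-6, duty_cycle=0.8)
```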
Range-gated technology has been a hot research field in recent years because of its effectiveness in eliminating backscattering. As a result, it can enhance the contrast between a target and its background and extend the working distance of the imaging system. An underwater imaging system is required to image under low-light-level conditions as well as to eliminate the backscattering effect, which means the receiver must provide a high-speed external trigger function, high resolution, high sensitivity, low noise, and a wide gain dynamic range. For an intensifier, the noise characteristics directly restrict the observation performance and range of the imaging system. Background noise may decrease image contrast and sharpness, and may even mask the signal, making it impossible to recognize the target. It is therefore important to investigate the noise characteristics of intensifiers.

SNR is an important parameter reflecting the noise features of a system. Using an underwater laser range-gated imaging prediction model and linear SNR system theory, the gated-imaging noise performance of the super second-generation and Generation III intensifiers currently on the market was analyzed theoretically. Based on the active-laser underwater range-gated imaging model, the effect of gated intensifiers on the system and the relationship between system SNR and MTF were studied. Through theoretical and simulation analysis of the image intensifier background noise and SNR, the different influences of super second-generation and Generation III ICCDs on system SNR were obtained. A formula for the range-gated system SNR was put forward, and the different effects of the two kinds of ICCDs on the system were compared. A detailed theoretical analysis was carried out with MATLAB simulations. This work lays a theoretical foundation for further eliminating the backscattering effect, improving image SNR, and designing and manufacturing higher-performance underwater range-gated imaging systems.
Compared with traditional infrared imaging, an infrared polarization imaging system can detect and identify man-made or camouflaged targets more efficiently by using the difference in degree of polarization (DoP) between the target and the background. The scene radiation is first attenuated by the atmosphere along the path and then modulated by the polarizer and the optical system. Because of atmospheric effects (absorption, emission, scattering, etc.), the radiation intensity finally received by the sensor changes, which affects detection and identification. In this paper, the composition characteristics of atmospheric particles were discussed in detail, and the propagation of the signal was described by analyzing the scattering interactions between atmospheric particles and photons. Through free-path sampling, selection of the radius of the colliding particles, sampling of the scattering and azimuth angles, and particle collision and extinction judgment, a Monte Carlo model of polarized-light propagation in the atmosphere was established using the Stokes/Mueller formalism and the meridian-plane method. Two quantities used for target recognition in the atmosphere, the radiation intensity and the DoP, were then simulated, and the relationships between the received radiation intensity, the DoP, and the distance were developed. The comparison showed that the DoP performs better than intensity measurements on the whole. However, there is a maximum distance within which a polarization imaging system using short wavelengths can exploit this advantage; beyond this distance the polarization-imaging advantage disappears. Polarized light with longer wavelengths maintained its state of polarization better after propagation through the atmosphere.
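A supporting sketch of the Stokes/Mueller bookkeeping mentioned above: rotating a Stokes vector into a new reference (meridian) plane and computing the degree of polarization used as the detection metric. These are standard formulas; the full Monte Carlo sampling of free paths and scattering angles is not reproduced here.

```python
import numpy as np

def rotate_stokes(S, phi):
    """Rotate Stokes vector S = [I, Q, U, V] by angle phi about the propagation axis."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    R = np.array([[1, 0, 0, 0],
                  [0, c, s, 0],
                  [0, -s, c, 0],
                  [0, 0, 0, 1]])
    return R @ S

def degree_of_polarization(S):
    I, Q, U, V = S
    return np.sqrt(Q**2 + U**2 + V**2) / max(I, 1e-12)
```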
Infrared polarization imaging is a new detection technique that can distinguish a target from a cluttered background through the target's infrared polarization characteristics. Compared with traditional infrared imaging, infrared polarization imaging has clear advantages in target identification. Based on the fact that the infrared polarization image highlights target contours and enhances scene detail, an infrared polarization fusion method based on the wavelet packet transform is proposed. A new color reconstruction method based on the wavelet packet transform is also presented by analyzing the correspondence between the HSV color space and the radiation and polarization images. The innovation is that the fused image, obtained by the wavelet packet transform, is assigned to channel V of the HSV space in place of the radiation image. The color reconstruction result is evaluated objectively with a detail-based evaluation function, the frequency-band contrast index. The results show that the proposed method enhances detail information and is helpful for the study of color fusion.
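A hedged sketch of the fusion idea: detail coefficients of the intensity and degree-of-polarization images are merged by a max-absolute rule and the result replaces channel V of the HSV representation. A single-level DWT is used here as a simple stand-in for the paper's wavelet packet transform, and the H and S assignments follow the common AoP/DoP mapping, which is an assumption.

```python
import numpy as np
import pywt
from skimage.color import hsv2rgb

def dwt_fuse(a, b, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(b, wavelet)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs detail rule
    fused = ((cA1 + cA2) / 2.0, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)

def color_reconstruct(intensity, dolp, aop):
    """intensity, dolp scaled to [0, 1]; aop in radians."""
    v = dwt_fuse(intensity, dolp)
    v = np.clip(v, 0, 1)[:intensity.shape[0], :intensity.shape[1]]
    h = (aop % np.pi) / np.pi        # hue from angle of polarization (assumed mapping)
    s = np.clip(dolp, 0, 1)          # saturation from degree of polarization
    return hsv2rgb(np.dstack([h, s, v]))
```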
Fibroblasts are the main cells of loose connective tissue and differentiate from mesenchymal cells during embryonic development. They exhibit highly reproducible growth kinetics and healing dynamics in the scratch-wound assay, and the cell height can be used to characterize this behavior. To measure the height of these cells, we construct an interferometric measurement system. The interference pattern must first be phase-unwrapped, and many unwrapping methods are under study. In this paper we seek a method suitable for processing the interference patterns of living cells and explain how to apply it and why it is fit for unwrapping the phase of cells. The paper has three main parts. First, we designed an interference system to acquire the interference pattern, using a multiphase interference microscope to measure the cell height. Second, a method based on Goldstein's branch-cut algorithm was used to guide the phase unwrapping; this method is highly efficient, guiding the unwrapping path with branch cuts so as to avoid the residues as much as possible. For comparison, we also applied other methods, such as quality-guided path-following phase unwrapping and Costantini phase unwrapping. Finally, we analyzed the resulting three-dimensional models of the cell surface topography. Because of the various noise sources in the experiment, none of these unwrapping methods can eliminate all residues and noise, but compared with the other results, Goldstein's branch-cut method performs best and gives the smoothest topography of the living cells.
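An illustrative pipeline, assuming a four-step phase-shifting acquisition: the wrapped phase is computed with the standard four-frame formula, unwrapped, and converted to optical height. skimage's unwrap_phase (a residue-avoiding, quality-guided style algorithm) stands in here for the Goldstein branch-cut implementation discussed above; the wavelength and refractive-index contrast are assumed values.

```python
import numpy as np
from skimage.restoration import unwrap_phase

def cell_height(I0, I90, I180, I270, wavelength_um=0.633, dn=0.03):
    """Four intensity frames with 0/90/180/270 degree phase shifts."""
    wrapped = np.arctan2(I270 - I90, I0 - I180)        # four-step phase formula
    phase = unwrap_phase(wrapped)                      # remove 2*pi ambiguities
    # Optical path difference -> height, for a cell of index contrast dn
    # relative to the surrounding medium (transmission geometry assumed).
    return phase * wavelength_um / (2.0 * np.pi * dn)
```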
Images from underwater imaging systems are normally degraded by the intervening water medium. The imaging instrument records not only the signal of interest, i.e., the radiance diffusely reflected from the underwater target, but also the radiance scattered into the field of view by water molecules and particulates. To improve system performance, a range-gated underwater imaging system is used to enhance image quality and visibility in turbid conditions. Range-gated imaging uses time discrimination to improve the signal-to-backscattering-noise ratio by rejecting light backscattered from the medium. The range-gated underwater imaging system basically consists of a pulsed laser, control and synchronization logic, and a high-speed gated camera. Because the laser is a highly coherent light source, speckle noise resulting from the random constructive and destructive interference of scattered light appears in the images obtained from the system. This random granular speckle noise makes image processing considerably more difficult. The causes of speckle formation are therefore discussed, and several objects of different materials are imaged under a standard light source and under laser illumination for a comparative analysis of speckle noise. A multidirectional morphological filtering algorithm for reducing speckle noise is then proposed, exploiting the multi-resolution analysis and computational efficiency of mathematical morphology. To evaluate the method objectively, the equivalent number of looks and the speckle index are introduced. The experimental results demonstrate that the adopted approach not only reduces the speckle noise of the image effectively but also preserves feature detail efficiently.
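A simple sketch of multidirectional morphological speckle filtering: grayscale opening followed by closing with short line structuring elements at four orientations, averaged over directions. The element length and the averaging rule are illustrative choices, not the paper's exact design.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def line_footprints(length=7):
    """Structuring elements at 0, 45, 90 and 135 degrees."""
    h = np.ones((1, length), bool)                     # horizontal
    v = np.ones((length, 1), bool)                     # vertical
    d1 = np.eye(length, dtype=bool)                    # 45 degrees
    d2 = np.fliplr(d1)                                 # 135 degrees
    return [h, v, d1, d2]

def multidirectional_filter(img, length=7):
    out = []
    for fp in line_footprints(length):
        # Opening removes bright speckle, closing removes dark speckle,
        # while the line element preserves edges along its own direction.
        out.append(grey_closing(grey_opening(img, footprint=fp), footprint=fp))
    return np.mean(out, axis=0)
```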
The range-gated underwater laser imaging technique can eliminate backscattering noise effectively. However, the images from underwater imaging systems are still seriously degraded by the intervening water medium, and the speckle noise is especially severe because the system is based on intensified gated imaging technology. Well-known causes of underwater image degradation include turbidity, particulate matter in the water column, and the interaction between light and the medium as light travels through water. Consequently, designing restoration algorithms from full image-formation models is more complex in water than in air, because it is hard to obtain the model parameters related to water properties, e.g., the attenuation and scattering coefficients. To improve the quality of the low signal-to-noise-ratio images obtained with a range-gated laser imaging system, an enhancement algorithm is proposed. Its main purpose is to filter out unwanted noise while retaining the desired signal. The algorithm is based on the least-squares-error principle, fitting discrete image data with continuous piecewise curves. To simplify the fitting, the interval of each row and column is subdivided into several subintervals, and a curve is fitted to the image data within each subinterval. To merge two adjacent lines, a weighting technique with a linear weighting factor is applied. A series of experiments was carried out to study the effects of the algorithm, and the resulting signal-to-noise ratios show that the proposed algorithm can produce high-quality enhanced images.
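A minimal sketch of the row-wise piecewise least-squares smoothing described above: each row is split into subintervals, a low-order polynomial is fitted to each, and adjacent rows are merged with a linear weighting factor. The subinterval length, polynomial order, and weight are illustrative values.

```python
import numpy as np

def fit_row(row, seg_len=64, order=2):
    x = np.arange(len(row), dtype=float)
    fitted = np.empty(len(row), dtype=float)
    for start in range(0, len(row), seg_len):
        sl = slice(start, min(start + seg_len, len(row)))
        coeffs = np.polyfit(x[sl], row[sl], order)     # least-squares fit per subinterval
        fitted[sl] = np.polyval(coeffs, x[sl])
    return fitted

def piecewise_lsq_denoise(img, seg_len=64, order=2, alpha=0.5):
    rows = np.array([fit_row(r, seg_len, order) for r in img.astype(float)])
    merged = rows.copy()
    # Linear weighting between each row and its neighbour to suppress seams.
    merged[1:] = alpha * rows[1:] + (1.0 - alpha) * rows[:-1]
    return merged
```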
The range-gated underwater laser imaging technique can eliminate most of the backscattering and absorption noise effectively. Its range is 4 to 6 times that of a conventional camera with floodlights in strongly scattering waters, which makes it a useful technique in oceanic research, deep-sea exploration, underwater remote control, and robotic work. However, because of laser pulse stretching, the image obtained with a range-gated underwater imaging system is obviously nonuniformly illuminated, with a brighter center and darker edges. The low contrast and washed-out gray appearance of the image also make processing difficult. To adjust the lightness of the nonuniformly illuminated range-gated image, the water-induced degradation is treated as illumination variation, and the retinal-cortex (Retinex) theory based on color constancy is introduced. A frame-integration algorithm must be applied first to suppress system noise, since the system is based on intensified gated imaging technology, and gray-level stretching then ensures an appropriate output range. Among Retinex models, the McCann and McCann-Frankle models are particularly effective, so we compare the two and improve the latter by taking into account the eye's exponential response to illumination. To evaluate the methods objectively, the uniformity of signal strength is used. The experimental results demonstrate that the adopted approaches are all effective and can enhance image contrast, and that the improved McCann-Frankle model gives the most satisfying visual effect.
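A sketch of a Retinex-style correction for the nonuniform illumination: frame integration, a single-scale center/surround Retinex (used here as a simpler stand-in for the McCann and McCann-Frankle iterations), and a gray-level stretch. The surround scale and stretch percentiles are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frame_integrate(frames):
    """Average several gated frames to suppress intensifier noise."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def retinex_enhance(frames, sigma=40.0, p_lo=1, p_hi=99):
    img = frame_integrate(frames) + 1.0
    # Center/surround Retinex: reflectance ~ log(image) - log(estimated illumination).
    illum = gaussian_filter(img, sigma)
    r = np.log(img) - np.log(illum + 1e-6)
    # Gray-level stretch to the display range.
    lo, hi = np.percentile(r, (p_lo, p_hi))
    return np.clip((r - lo) / (hi - lo + 1e-9), 0, 1)
```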
Numerous segmented-mirror errors consisting of piston and tip-tilt exist when a space-based large-aperture segmented optical system is deployed. These errors cause the segment images to depart from the field of view. A suitable scanning function must therefore be adopted to control the actuators that rotate each segment, so that the segment images are brought into the field of view and placed at the ideal positions. In this paper, scanning functions such as the screw type, rose type, and helianthus (sunflower) type are analyzed and discussed, and a principle for selecting the optimal scanning function, based on capturing the images at the fastest speed, is put forward. After capture, each outer segment must be brought back into alignment with the central segment. Since the central and outer segments with surface errors have different figures, a new way to control the alignment accuracy is presented, which can effectively reduce the adverse effects of mirror surface and position errors. As an example, a simulation experiment is carried out to study the characteristics of different scanning functions and the effects of mirror surface and position errors on alignment accuracy. In the simulation, the scales of the piston and tip-tilt errors and the ideal positions of the segments are given, the capture and alignment process is realized with the extended optical design software ZEMAX, and the optimal scanning function and the achievable alignment accuracy are determined.
KEYWORDS: Image intensifiers, Luminescence, Light sources, Night vision, Imaging systems, Luminous efficiency, Visibility, Imaging devices, System integration, Process control
Luminance gain is one of the parameters used to evaluate the light-amplification performance of an image intensifier. In the traditional methodology for testing the luminance gain of an image intensifier, the result is obtained only by measuring the incident and emergent radiation, without considering spectral matching. In this paper, an expression for the luminance gain is presented that takes into account the spectral visibility function and the spectral characteristics of the incident and emergent radiation, and a corrective factor is derived in detail for the practical measurement configuration. The adjusted testing methodology for intensifier luminance gain is studied and implemented in a digitally integrated system. Results after a long running period are compared with those of routine measurements, and the factors that affect measurement accuracy are analyzed. The results show that the adjusted test methodology has high measurement precision and stability. The methodology can also be used to evaluate the corresponding performance of similar imaging devices after adjusting the illumination source unit.
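A hedged numerical sketch of the spectral-correction idea: a radiometrically measured gain is converted to a luminance gain with a factor built from the photopic visibility function V(lambda) and the relative spectra of the incident source and the phosphor emission. The array names and the form of the factor are assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np

def luminance_gain_correction(lam, src_rel, phos_rel, V):
    """Corrective factor turning a radiometric gain into a luminance gain.
    lam: wavelength grid (nm); src_rel / phos_rel: relative spectra of the incident
    source and the phosphor-screen emission; V: photopic visibility function."""
    lum_frac = lambda spec: np.trapz(spec * V, lam) / np.trapz(spec, lam)
    # Fraction of each spectrum that is photopically effective.
    return lum_frac(phos_rel) / lum_frac(src_rel)

# corrected luminance gain ~= measured radiometric gain * luminance_gain_correction(...)
```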
KEYWORDS: Imaging systems, Laser imaging, Pulsed laser operation, Laser systems engineering, CCD cameras, Cameras, Control systems, Power supplies, High power lasers, Backscatter
Range-gated laser imaging can eliminate backscattering noise and has a range 4 to 6 times that of a conventional camera with floodlights in strongly scattering waters, which makes it a useful technique in oceanic research, deep-sea exploration, underwater remote control, and robotic work. The characteristics of range-gated underwater laser imaging are analyzed in this paper, and the basic requirements for the gated ICCD are presented. Accordingly, a gated Gen II+ image intensifier and a progressive-scan CCD were assembled to meet these requirements, forming what is called the gated intensified CCD camera. Combined with a small programmable high-voltage power supply, a video image acquisition and control system, and a high-power lamp-pumped blue-green Nd:YAG pulsed laser, the experimental system was developed. Experiments were carried out in pipes filled with saturated salt water and in large pools. When tested with a USAF 1951 resolution target, the detection range extended to about 30 meters.
For failure analysis of semiconductor devices and ICs, we propose a novel digital thermal microscope based on an uncooled focal plane array (UFPA) detector. We give the operating principle, the system construction, and the mathematical model of the noise-equivalent temperature difference (NETD). Based on this model, measures are taken to increase the system's temperature resolution. Furthermore, we propose an adaptive nonuniformity correction algorithm for the UFPA. The software for the thermal microscope is developed in Visual C++. Results of real thermal imaging experiments show that the digital thermal microscope is successfully designed and achieves good performance, and it can thus become an effective means of failure analysis. This method is a novel and unique contribution to the field of semiconductor device and IC failure analysis.
Numerous segmented-mirror errors consisting of piston and tip-tilt exist when a space-based large-aperture segmented optical system is deployed. These errors cause the segment images to depart from the field of view. A suitable scanning function must therefore be adopted to control the actuators that rotate each segment, so that the segment images are brought into the field of view and placed at the ideal positions. The key to capturing the segment images is selecting an optimal scanning function. This paper puts forward a principle for selecting the optimal scanning function based on capturing the images at the fastest speed. Scanning functions such as the screw type, rose type, and helianthus (sunflower) type have their own merits and drawbacks, and they are analyzed and discussed here. As an example, a simulation experiment is carried out to study the effects of different scanning functions on a three-mirror anastigmat system whose primary mirror consists of six segments. In the simulation, the scales of the piston and tip-tilt errors and the ideal positions of the segments are given, the three scanning functions are used to realize the capture process with the extended optical design software ZEMAX, the relationships between the scanning functions and the optical system are analyzed, and the optimal scanning function is determined.
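An illustrative generation of a screw-type (Archimedean spiral) scan trajectory for the segment tip/tilt actuators: the spiral radius grows linearly with angle so that successive rings are spaced by the capture field of view, covering the error range without gaps. The step sizes and ranges below are example values, not the paper's parameters.

```python
import numpy as np

def archimedean_scan(max_tilt_arcsec, ring_spacing_arcsec, step_arcsec):
    """Return arrays of (tip, tilt) commands spiralling out to max_tilt_arcsec."""
    b = ring_spacing_arcsec / (2.0 * np.pi)     # radial growth per radian
    theta, tips, tilts = 0.0, [], []
    while b * theta <= max_tilt_arcsec:
        r = b * theta
        tips.append(r * np.cos(theta))
        tilts.append(r * np.sin(theta))
        # Advance so the arc length between points is roughly one step.
        theta += step_arcsec / max(r, step_arcsec)
    return np.array(tips), np.array(tilts)

# e.g. tip, tilt = archimedean_scan(max_tilt_arcsec=60, ring_spacing_arcsec=5, step_arcsec=2)
```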
The image intensifier and the intensified CCD (ICCD) are critical components in night vision technology. Specifications such as luminance uniformity, fixed-pattern noise, resolution, and modulation transfer function (MTF) can be used to evaluate the performance of such components. A digitally integrated test system for performance evaluation of image intensifiers and ICCDs is described in this paper. With some essential accessories, the system can test 11 specifications of image intensifiers (generations 1, 2, and 3) and ICCDs. The operating principle, structure, and test results of the system are presented in detail. The system has been running at North Night Vision Technology Co. for about 10 months; results after this long running period are given, and the factors that affect measurement accuracy are analyzed. The results show that the digitally integrated system has high measurement precision and stability.
KEYWORDS: Thermography, Video, Temperature metrology, Image analysis, Databases, Local area networks, Computer programming, Human-machine interfaces, Power supplies, Interfaces
A real-time 16-bit digital video grabber based on the USB 2.0 protocol, together with thermal temperature-measurement and dynamic-analysis software, is presented. An uncooled focal plane array thermal imaging module with a 35 μm square pixel pitch and 16-bit digital video output was selected. Equipped with a suitable lens, power supply, and monitor, the module can be integrated into a thermal imaging system for observation, recognition, tracking, and thermal image detection. The Cypress CY7C68013 USB protocol chip is used. The main development work on the video grabber concerns data transfer and logic control between the imaging module and the CY7C68013 using CPLD devices, and the programming of Windows drivers based on WinDriver. The measurement and dynamic-analysis software provides not only traditional false-color coding and point/line/area temperature analysis but also several new functions: (1) automatically marking and monitoring any area whose temperature exceeds a set threshold, and plotting its temperature histogram curve against time; (2) monitoring and plotting the temperature variation over time at manually set points or areas; (3) a local-area-network database for convenient sharing and management of data, into which statistics forms, curve plots, and single-frame or sequence images can be entered manually or according to preset conditions such as the file-saving interval, with different functions available according to the user's network access authority.
Nonuniformity correction (NUC) is a critical task for achieving higher performance in modern infrared imaging systems. The striped fixed-pattern noise produced by scanning-type infrared imaging systems can hardly be removed cleanly by many scene-based nonuniformity correction methods that work effectively for staring focal plane arrays (FPAs). We propose an improved nonuniformity correction algorithm that corrects the aggregate nonuniformity of infrared line scanners (IRLS) in two steps. The novel contribution of our approach is the integration of a local constant statistics (LCS) constraint with neural networks. First, the nonuniformity due to the readout electronics is corrected by treating every row of pixels as one channel and normalizing the channel outputs so that each channel produces pixels with the same mean and standard deviation as the median of the local channel statistics. Second, since for an IRLS every row is generated by push-brooming one detector of the line sensor, each detector is assigned one neuron whose weight and offset serve as correction parameters, updated column by column recursively in the least-mean-square sense. A one-dimensional median filter is used to produce the desired output of the linear neural network, and several optimization strategies are added to increase the robustness of the learning process. Applications to both simulated and real infrared images demonstrate that the algorithm is self-adaptive and able to complete NUC with only one frame. If the nonuniformity is not severe, the first step alone can achieve a good correction; combining the two steps achieves a higher correction level and removes the stripe pattern noise cleanly.
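A compact sketch of the two-step correction described above. Step 1 equalizes each row (readout channel) to the median mean/std of its local neighbourhood; step 2 updates a per-detector gain/offset pair column by column with an LMS rule, using a 1-D median filter of the corrected column as the desired output. Window sizes and the learning rate are illustrative choices, and the paper's additional optimization strategies are omitted.

```python
import numpy as np
from scipy.ndimage import median_filter

def channel_normalize(img, half_window=5):
    out = np.empty_like(img, dtype=float)
    means = img.mean(axis=1)
    stds = img.std(axis=1) + 1e-6
    for r in range(img.shape[0]):
        sl = slice(max(0, r - half_window), min(img.shape[0], r + half_window + 1))
        m_ref, s_ref = np.median(means[sl]), np.median(stds[sl])   # local constant statistics
        out[r] = (img[r] - means[r]) / stds[r] * s_ref + m_ref
    return out

def lms_nuc(img, lr=1e-3, kernel=5):
    scale = float(np.abs(img).max()) + 1e-12
    img = np.asarray(img, float) / scale       # keep the LMS step size well behaved
    rows, cols = img.shape
    gain = np.ones(rows)
    offset = np.zeros(rows)
    corrected = np.empty_like(img)
    for c in range(cols):                      # recursive column-by-column update
        x = img[:, c]
        y = gain * x + offset
        d = median_filter(y, size=kernel)      # desired output from a 1-D median filter
        e = d - y
        gain += lr * e * x
        offset += lr * e
        corrected[:, c] = gain * x + offset
    return corrected * scale

# corrected = lms_nuc(channel_normalize(raw_frame))
```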
KEYWORDS: Image restoration, 3D image processing, Signal to noise ratio, 3D image restoration, Microscopes, Image resolution, Image processing, Super resolution, Deconvolution, Biological research
A new method is proposed for 3D (three-dimensional) image restoration in wide-field microscopy based on the MPMAP algorithm (a Poisson-MAP super-resolution image restoration algorithm with a Markov constraint), exploiting the 3D features of the microscopic image. The neighborhood of the Markov random field in the MPMAP algorithm is extended to 3D, and the regularization parameter α of the constraint term is simplified. As a result, restoration of 3D wide-field microscope images is achieved and a better restoration result is obtained. When noisy images are restored with different values of α, different attainable resolutions and signal-to-noise ratios (SNRs) result in the restored image. Experimental results show that it is necessary to select an appropriate value of α and to trade off resolution against SNR in the restored image in order to obtain the best restoration result.
The optical microscope is a common tool in histology and microanatomy, but it is difficult for a traditional optical microscope to analyze a three-dimensional specimen. Optical sectioning can overcome this three-dimensional effect: optical slices are a series of two-dimensional images obtained by moving the specimen along the axis perpendicular to the focal plane. Each image is then processed with deblurring algorithms to weaken or remove the out-of-focus information contributed by the neighboring slices. In this paper, an optical microscope imaging system is analyzed, and the relations of the defocus error, optical transfer function (OTF), and point spread function (PSF) to the specimen-space defocus distance Δz are derived. Imaging of a thick specimen with this system is simulated, and the simulation is consistent with the real imaging process of the system. The results are useful for the study of deblurring algorithms.
Based on radiative transfer theory, a model of the backscattered light and signal light for an atmospheric range-gated imaging system has been developed. The model gives the time-dependent irradiance of the backscattered light and signal light on the photocathode as the illuminating laser pulse propagates through the atmosphere, and it can be used to predict and optimize the parameters of range-gated imaging systems. Examples with typical system parameters under fog conditions are computed with this model. The results lead to several conclusions. First, the photocathode irradiance of the object image can be higher than that of the background even if the optical signal power is lower than the optical backscattering power. Second, increasing the peak power of the illumination laser does not improve the image contrast between an object and its background. Finally, image contrast can be improved by reducing the laser pulse width while keeping the average laser power unchanged, by reducing the sensor's field of view, or by increasing the separation between transmitter and receiver.
A three-band integrated infrared radiometer was developed for field measurement. The composition and working principle of the radiometer are described in this paper. Detailed calibration methods for extended sources and point sources are analyzed, and the corresponding measurement results are given. Using the test data, the experimental results and instrument performance are analyzed in detail with respect to the equivalent temperature of the internal reference blackbody, linearity with target temperature, and the measurement errors for point and extended sources. To lessen the influence of calibration on the measurements, the issues requiring attention in the actual calibration of extended and point sources are summarized, and satisfactory results are obtained.
The pyroelectric uncooled focal plane array (FPA) thermal imager has the advantages of low cost, small size, and high responsivity, and it can work at room temperature, so it has progressed greatly in recent years. As a matched technique, the modulating chopper has become one of the key technologies in uncooled FPA thermal imaging systems. At present, the Archimedes-spiral chopper is mostly used: as it rotates, the chopper blade sweeps across the detector's pixel array, exposing the pixels sequentially. This paper simulates the shape of this kind of chopper, analyzes the exposure time of every pixel of the detector, and analyzes the exposure sequence of the whole pixel array. The analysis shows that the parameters of the Archimedes spiral, the detector's thermal time constant, the detector's geometric dimensions, and the relative position of the detector with respect to the spiral blade are important system parameters that affect the chopper's exposure efficiency and uniformity. The chopper's parameters should therefore be designed according to the practical requirements to achieve an appropriate chopper structure.
KEYWORDS: Contrast sensitivity, Colorimetry, Composites, Color vision, Modulation transfer functions, Spatial frequencies, Human vision and color perception, Visual system, Visual process modeling, Image fusion
There are many experimental results concerning chromatic contrast detection (based on changes in chromatic distribution over space or time), but compared with luminance contrast detection, there has been no theory describing chromatic contrast detection and thus no mathematical model for the isoluminant contrast sensitivity function. A review of the literature suggests two reasons for this: (1) the isoluminant contrast sensitivity function has been described in the same way as the isochromatic contrast sensitivity function, which is unsuited to expressing chromatic changes; and (2) color vision is so complicated that further study is needed. Based on a detailed analysis of previous work, this paper uses fundamental principles of colorimetry to present a new theory of color contrast detection and shows that the isoluminant contrast sensitivity function can be described by two aspects of chromatic change: dominant wavelength and colorimetric purity. This theory explains well why the red-green and yellow-blue contrast sensitivity functions have similar characteristics.
A dual-band uncooled focal plane array (FPA) thermal imaging system uses an Archimedes-spiral chopper and a matched dual-band filter to acquire two single-band IR images within one imaging system. Traditional methods of obtaining two-band images need two single-band thermal imagers; this system needs only one detector and one optical imaging system, so the structure is smaller and the cost is reduced. This paper studies the dual-band filter and the acquisition of the dual-band images, and it also investigates a dual-band temperature-measurement algorithm with which the two band images can be fused into a temperature image.
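One classical way to turn two band images into a temperature image is ratio (two-colour) pyrometry under the Wien approximation with a graybody assumption; it is shown here only as an illustration of dual-band temperature retrieval and is not necessarily the paper's algorithm. The effective band wavelengths and emissivities are assumed inputs.

```python
import numpy as np

C2 = 1.438777e4   # um K, second radiation constant

def ratio_temperature(L1, L2, lam1_um, lam2_um, eps1=1.0, eps2=1.0):
    """L1, L2: band radiance images at effective wavelengths lam1_um, lam2_um (um)."""
    ratio = np.clip(L1 / np.maximum(L2, 1e-12), 1e-6, 1e6)
    # Wien approximation: L ~ eps * c1 * lam^-5 * exp(-c2 / (lam * T))
    num = C2 * (1.0 / lam2_um - 1.0 / lam1_um)
    den = np.log(ratio) + 5.0 * np.log(lam1_um / lam2_um) + np.log(eps2 / eps1)
    return num / den     # temperature image in kelvin
```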
A new band model is established in this paper. CO2, CO, H2O, and CH4 are the main infrared-absorbing gases in the atmosphere. With the new band model, the infrared spectral absorption coefficients of these gases can be expressed as a simple function that can be used directly in calculations. Over a given wavelength range, the mean absorptance and transmittance of the gases were calculated with the new band model; compared with experimental data, the error was within a reasonable precision range.
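A generic numerical sketch of computing the mean transmittance and absorptance of a gas over a spectral interval from a spectral absorption coefficient k(nu) and absorber amount u (Beer-Lambert averaging). The specific analytic band model proposed in the paper is not reproduced; k(nu) is an assumed tabulated input.

```python
import numpy as np

def mean_band_transmittance(nu, k_nu, u):
    """nu: wavenumber grid (cm^-1); k_nu: absorption coefficient; u: absorber amount."""
    tau = np.exp(-k_nu * u)                         # monochromatic Beer-Lambert law
    t_mean = np.trapz(tau, nu) / (nu[-1] - nu[0])   # spectral average over the band
    return t_mean, 1.0 - t_mean                      # (mean transmittance, mean absorptance)
```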