Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662301 (2008) https://doi.org/10.1117/12.795331
This PDF file contains the front matter associated with SPIE Proceedings Volume 6623, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662302 (2008) https://doi.org/10.1117/12.791266
This paper presents a robust algorithm that relies only on the information contained within the captured images to construct massive composite mosaics from close-range, high-resolution originals, such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a set of corners and then match corresponding corners using both intensity correlation and spatial correlation. Next, we estimate the eight-parameter projective transformation matrix with a genetic algorithm. Finally, image fusion using a weighted blending function together with intensity compensation produces an effective seamless mosaic image.
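As a rough sketch of the corner-extraction step, the Harris response can be computed from the image's structure tensor. The snippet below is a minimal NumPy illustration; the box window, the constant k = 0.04, and the toy test image are illustrative assumptions, not values from the paper.

```python
import numpy as np

def harris_response(img, k=0.04, r=2):
    # Structure tensor from central-difference gradients, smoothed with a
    # box window (a Gaussian window is more usual; this keeps it short)
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        p = np.pad(a, r, mode='edge')
        acc = np.zeros(a.shape, dtype=float)
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                acc += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return acc / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr ** 2      # corners yield large positive responses

# A white square on black: the strongest responses sit at its four corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

In a full mosaicking pipeline, local maxima of R above a threshold would become the corner candidates passed to the correlation-based matcher.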
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662303 (2008) https://doi.org/10.1117/12.791267
A novel two-channel, single-output joint transform correlator system in the Mach-Zehnder configuration, using an encoding technique based on the HSV color space for color pattern recognition, is introduced. In this structure the large zero-order term can be removed directly by the Stokes relations in a single step. With the image encoding technique, the liquid crystal spatial light modulators can be made smaller through an interlaced rearrangement of the hue and saturation color components. Furthermore, the use of Lagrange multipliers to synthesize the reference image so as to reduce correlation sidelobes is also studied. Numerical results are presented to verify the performance of the proposed system.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662304 (2008) https://doi.org/10.1117/12.791268
There has been much recent interest in mobile systems for augmented reality. However, existing visual tagging solutions
are not robust at the low resolutions typical of current camera phones or at the low solid angles needed for
"across-the-room" reality augmentation. In this paper, we propose a new 2D barcode symbology that uses multiple colors
in order to address these challenges. We present preliminary results, showing the detection of example barcodes in this
scheme over a range of angles.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662305 (2008) https://doi.org/10.1117/12.791269
High-speed photography is a major means of collecting data on human body movement. It enables the automatic identification of joints, which is of great significance to the research, treatment, and rehabilitation of injuries, the analysis of sport techniques, and ergonomics. Exploiting the fact that the distance between adjacent joints remains constant during planar motion, together with the laws of human joint movement (such as the range of articular anatomy and kinematic features), a new approach is introduced to threshold images of joints filmed by a high-speed camera and to automatically identify and trace the joint points (marked with labels at the joints). Based on the closure of the marking points, automatic identification is achieved through thresholding. Given the filming frequency and the laws of human segment movement, once the marking points have been initialized they can be tracked automatically through the progressive sequence of images. Kinematic analysis of the test results, together with data from a three-dimensional force platform and the constraint that a body segment rotates only about its proximal joint in the absence of binding forces under conservative forces, confirms that the approach is valid.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662306 (2008) https://doi.org/10.1117/12.791270
Tracking vehicles is an important and challenging problem in video-based Intelligent Transportation Systems and has been broadly investigated in the past. A robust, real-time method for tracking vehicles is presented in this paper. The proposed algorithm includes two stages, vehicle detection and vehicle tracking, of which detection is the key step. The tracking concept is built upon a vehicle-segmentation method. Based on the segmented vehicle shape, a prediction method using a Kalman filter is proposed. By assuming that the vehicle moves with constant acceleration from the current frame to the next, the Kalman filter model is used to track and predict the trace of a vehicle. The model can be used in real traffic environments and can track multiple targets over a large area, so it is practical for vehicle tracking. The proposed method has been tested on a number of monocular traffic-image sequences, and the experimental results show that the algorithm is robust and meets real-time requirements.
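A constant-acceleration Kalman tracker of the kind described above can be sketched as follows. The state layout, time step, and noise covariances are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

dt = 1.0  # frame interval (assumed)
# State: [x, y, vx, vy, ax, ay] under a constant-acceleration motion model
F = np.eye(6)
for i in range(2):
    F[i, i + 2] = dt
    F[i, i + 4] = 0.5 * dt * dt
    F[i + 2, i + 4] = dt
H = np.zeros((2, 6))
H[0, 0] = H[1, 1] = 1.0      # only the segmented centroid position is observed
Q = 0.01 * np.eye(6)         # process noise covariance (assumed)
Rn = 0.1 * np.eye(2)         # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict with the constant-acceleration model, then correct with z
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Track a synthetic vehicle accelerating along x and cruising along y
x, P = np.zeros(6), np.eye(6)
for t in range(1, 30):
    z = np.array([0.1 * t * t, 2.0 * t])   # true centroid at frame t
    x, P = kalman_step(x, P, z)
```

The predicted position from `F @ x` is what a tracker would use to gate the search region for the next frame's segmentation.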
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662307 (2008) https://doi.org/10.1117/12.791271
The space-borne remote sensor is a modern, high-performance astronautical detection tool that is not limited by territorial boundaries. It has a strong object-finding capability and is widely used in military and civil fields. The image quality of a space remote sensor is largely determined by its optical design, manufacture, system alignment, and so on. On the other hand, the space environment, such as vibration and shaking of the satellite and the solar wind, heavily influences image quality. These factors often cause jitter of the optical axis, which greatly degrades the image. To reach the diffraction-limited resolution of the optical design, an image tracking system must be used in the space remote sensor. On the basis of previous studies, a complete and systematic analysis is conducted of the effect of jitter on the image quality of a push-broom camera. After comparing the merits and demerits of space-based image tracking technologies, a star-image tracking system is recommended for its fast detection speed. A simulation test bed was designed, and experiments show that the precision of the centroid location algorithm reaches 0.1 CCD pixels in theory and 0.3 CCD pixels on the test bed.
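The centroid location step can be illustrated with an intensity-weighted centroid, which routinely reaches sub-pixel precision on a clean star spot. The Gaussian test spot and its parameters below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def centroid(img):
    # Intensity-weighted centroid: first moments of the spot image
    img = img.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic star image: a Gaussian spot centred at (10.3, 14.7)
ys, xs = np.mgrid[0:25, 0:25]
spot = np.exp(-((ys - 10.3) ** 2 + (xs - 14.7) ** 2) / (2 * 1.5 ** 2))
cy, cx = centroid(spot)
```

On real detector data, noise, background level, and pixel-response non-uniformity degrade this ideal figure, which is consistent with the gap between the theoretical 0.1-pixel and measured 0.3-pixel precision reported above.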
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662308 (2008) https://doi.org/10.1117/12.791272
A particle-pair of filaments is obtained by using liquid electrodes in a dielectric barrier discharge system. It travels in the direction of the larger filament and rebounds at the boundary of the discharge area. By processing and analyzing the recorded pictures and video, the traveling velocity of the particle-pair is calculated to be about 1.2 cm/s. Moreover, the interparticle distance of the pair changes periodically with a period of about 0.5 s.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662309 (2008) https://doi.org/10.1117/12.791273
Mutual occlusion is an essential attribute of an augmented reality system: it convinces the user that the virtual objects truly exist in the real world. Traditional optical see-through displays cannot correctly present mutual occlusion between real and virtual environments, since the synthetic objects always appear as translucent ghosts floating in front of the real scene. This paper presents a novel optical see-through HMD, along with feasible methods to realize mutual occlusion on it. An LCD panel is introduced into our display to provide the occlusion. Experimental results show that these methods, based on the novel display, can integrate a virtual object into a real scene seamlessly.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230A (2008) https://doi.org/10.1117/12.791274
Previous studies indicate that parallel computing for hyperspectral remote sensing synthetic image generation is quite feasible. However, owing to the limited computing power of a single cluster, only a three-band, 1000×1000-pixel image can be generated in a reasonable time, even with a 40-50 node parallel computing cluster. In this paper, we discuss the use of Grid computing, in which the so-called eScience or cyberinfrastructure integrates distributed computing resources to act as a single virtual computer with enormous computational power and storage. The technique presented here demonstrates the feasibility of a Grid-Enabled Monte Carlo Hyperspectral Synthetic Image Remote Sensing Model (GRID-MCHSIM) for future coastal water quality remote sensing algorithm development and for the detection of bottom features and targets in water.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230B (2008) https://doi.org/10.1117/12.791275
To obtain a color night vision image, we propose a color transfer algorithm in YUV color space based on the color transfer algorithm in lαβ color space proposed by Reinhard. By rendering the simple statistics (means and standard deviations) of the target image onto the source image, the color appearance of the target image is transferred to the source. A 2D chromatic histogram (UV histogram), which helps to find an appropriate target image, is established. Finally, we illustrate several examples of color transfer to multi-band images fused with the MIT fusion scheme: after a color fused image is obtained, the color transfer is executed to render the color information of the target image onto the fused image. The final image has a day-like color appearance. Besides, the algorithm requires fewer operations than its lαβ counterpart because of lower transform complexity, and it can run in real time on digital signal processors without color space conversion between RGB and YUV.
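The core of Reinhard-style statistics transfer is a per-channel shift and scale; a minimal sketch follows. The random arrays stand in for actual YUV channels of fused night imagery and a day-like target, which are assumptions for illustration.

```python
import numpy as np

def transfer_stats(source, target):
    # Per-channel statistics transfer: shift and scale each source channel
    # so its mean and standard deviation match the target's
    out = np.empty(source.shape, dtype=float)
    for c in range(source.shape[2]):
        s = source[..., c].astype(float)
        t = target[..., c].astype(float)
        s_std = s.std() if s.std() > 1e-12 else 1.0
        out[..., c] = (s - s.mean()) / s_std * t.std() + t.mean()
    return out

rng = np.random.default_rng(0)
src = rng.normal(100.0, 10.0, (8, 8, 3))   # stand-in for a fused night image (YUV)
tgt = rng.normal(128.0, 30.0, (8, 8, 3))   # stand-in for a day-like target image
res = transfer_stats(src, tgt)
```

Because the operation is channel-wise and linear, it maps naturally onto a DSP pipeline, which is the efficiency argument made above.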
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230C (2008) https://doi.org/10.1117/12.791276
The Computed-Tomography Imaging Interferometer (CTII) is a novel imaging spectrometer that combines the advantages of the conventional Fourier Transform Imaging Spectrometer (FTIS) and the ordinary Computed-Tomography Imaging Spectrometer (CTIS). CTII obtains multi-angle projection interferograms by rotating a Dove prism placed in the collimated beam. Image reconstruction is carried out with a computed-tomography algorithm based on the Radon transform. In experiments, however, images reconstructed from the raw projection-interferogram sequences are badly distorted. Investigating this problem, we found that as the Dove prism rotates, its rotation center does not coincide with the optical axis of the CTII, so the raw projection-interferogram sequences deviate slightly from the ideal ones. Since these deviations follow a definite law, it is possible to rectify the raw sequences. In this paper, two methods are proposed: a Linear rectification method and a Cosine rectification method. The Linear rectification method uses image processing to obtain the dither value at each rotation angle and rectifies the raw images accordingly; the Cosine rectification method assumes that the dither follows a cosine curve, as detailed in the paper. Finally, the reconstructed images are presented, and the results show that both rectification methods are feasible and effective.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230D (2008) https://doi.org/10.1117/12.791277
This paper presents a novel method for automatically segmenting and detecting targets in complex environments using improved unit-linking pulse coupled neural networks (ULPCNN) combined with contour tracking. On the one hand, the typical ULPCNN model is improved with linear modulation, linear attenuation of the dynamic threshold, and an attenuation parameter matrix Δ, making it more suitable for segmenting and detecting targets in complex environments. On the other hand, we determine the number of iterations and obtain the optimal segmentation result using contour tracking based on the maximum line contour point. To verify its efficiency, various simulations were conducted on different images acquired from real scenes. Experimental results show that, compared with conventional approaches, the proposed method overcomes the drawbacks of PCNN and obtains good results for segmenting and detecting targets against complex backgrounds.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230E (2008) https://doi.org/10.1117/12.791278
Multispectral imaging combines spatial imaging and spectral detection, obtaining the spectral and image information of an object at the same time. Based on this concept, a method using a multispectral camera system is proposed to identify plant diseases. In this paper, a multispectral camera consisting of a monochrome CCD camera and 16 narrow-band filters is used as the image capture device. In the experiment, multispectral images of the Macbeth 24 color patches are captured under incandescent illumination. The 64 spectral reflectances of each color patch from 400 to 700 nm are calculated by spline interpolation, and the color of the object is reproduced from the estimated spectral reflectance. The reproduced colors are compared against color signals measured with an X-Rite PULSE spectrophotometer; the average and maximum ΔE*ab are 9.23 and 12.81, respectively. This confirms that the multispectral system achieves color reproduction of plant diseases from narrow-band multispectral images.
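The resampling from 16 filter measurements to 64 wavelengths can be sketched as below. The evenly spaced filter centres and the measured responses are assumptions, and `np.interp` (linear) stands in for the spline interpolation used in the paper.

```python
import numpy as np

# 16 narrow-band filter centres, assumed evenly spaced over 400-700 nm
filter_nm = np.linspace(400.0, 700.0, 16)
# Hypothetical measured responses for one colour patch
measured = np.exp(-((filter_nm - 550.0) / 60.0) ** 2)

# Resample to 64 wavelengths; linear interpolation as a stand-in for
# the paper's spline interpolation
grid_nm = np.linspace(400.0, 700.0, 64)
reflectance = np.interp(grid_nm, filter_nm, measured)
```

The 64-sample reflectance curve would then be integrated against colour-matching functions and the illuminant spectrum to reproduce the patch colour.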
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230F (2008) https://doi.org/10.1117/12.791279
The key to image-based auto-focusing is the selection of a focus measure that reflects image definition. Such measures are usually derived on the premise that the compared images capture the same scene. For a remote-sensing camera working in linear-CCD push-broom imaging mode this premise does not hold, because the scene changes from moment to moment, which complicates the selection of the focus measure. To evaluate image definition, a focus measure based on blur estimation is proposed for rough adjustment: it estimates the focused position from only two lens positions, greatly reducing auto-focusing time. A second evaluation function based on edge sharpness is developed to find the best imaging position within this narrow range. Simulations show that combining the two measures yields both rapid response and high accuracy.
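The paper's edge-sharpness function is not detailed here; a common choice in this family is gradient energy, which falls monotonically with defocus. The sketch below (a box blur standing in for defocus, both assumptions) illustrates that behaviour.

```python
import numpy as np

def sharpness(img):
    # Gradient-energy focus measure: sharper edges give larger gradient energy
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def box_blur(img):
    # 3x3 box blur as a crude stand-in for defocus
    p = np.pad(img.astype(float), 1, mode='edge')
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / 9.0

img = np.zeros((32, 32))
img[:, 16:] = 1.0                        # a sharp vertical edge
s_sharp = sharpness(img)
s_blur = sharpness(box_blur(box_blur(img)))
```

Fine adjustment then amounts to stepping the lens and keeping the position that maximizes this measure within the narrow range found by the rough stage.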
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230G (2008) https://doi.org/10.1117/12.791280
In this paper, we introduce information hiding techniques into the remote sensing field, describe their characteristics and requirements and how they differ from general information hiding, and show that general image hiding algorithms are not suited to remote sensing images. Since secret annotation is often associated with a remote sensing image, we propose a new secret spatial information hiding technique that embeds the secret spatial annotation into the related remote sensing image. We also propose a wavelet-domain information hiding algorithm adapted to the features of remote sensing images, based on a DWT embedding strategy and characteristics of the human visual system (HVS). Experimental results show that the proposed technique and algorithm not only offer good transparency, strength, large information capacity, and correct extraction of the secret, but also strong robustness against JPEG lossy compression and added noise. Furthermore, the technique has no influence on the applied value of the remote sensing image and does not need the original image when extracting the secret spatial information, i.e., it is a blind algorithm.
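The flavour of blind wavelet-domain embedding can be sketched with quantization-index modulation on Haar detail coefficients. This is a generic illustration, not the paper's algorithm: the one-level row-wise Haar transform, the step size, and the test data are all assumptions.

```python
import numpy as np

STEP = 8.0   # quantization step (assumed): larger is more robust, less transparent

def embed(img, bits):
    # One-level Haar along rows (for brevity); hide bits in the detail band
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    flat = d.ravel().copy()
    for i, b in enumerate(bits):
        # Quantization-index modulation: snap the coefficient to a multiple
        # of STEP (bit 0) or offset it by STEP/2 (bit 1)
        flat[i] = np.round(flat[i] / STEP) * STEP + (STEP / 2 if b else 0.0)
    d = flat.reshape(d.shape)
    out = np.empty(img.shape, dtype=float)
    out[:, 0::2], out[:, 1::2] = a + d, a - d
    return out

def extract(img, n):
    # Blind extraction: only the watermarked image is needed
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    flat = d.ravel()
    return [int(abs(flat[i] / STEP - np.round(flat[i] / STEP)) > 0.25) for i in range(n)]

rng = np.random.default_rng(3)
cover = rng.uniform(0.0, 255.0, (16, 16))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(cover, bits)
recovered = extract(marked, len(bits))
```

The quantization step trades robustness against transparency, mirroring the HVS-guided embedding-strength choice described above.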
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230H (2008) https://doi.org/10.1117/12.791281
Cavitation flow images are widely used to study cavitation phenomena. A cavitation flow image program is developed that provides many custom image processing functions beyond the general ones and overcomes the restrictions of commercial software. With this program, the outline, boundary, gray level, and area of the flow images around a super-cavitating hydrofoil in the cavitation zone can be extracted successfully. Moreover, the evolution period of cavitation can be estimated from the image gray level, so that the cavity configuration and its transformation can be studied quantitatively. The reliability and speed of cavitation image processing are thus improved, helping to clarify the mechanism of cavitation flow.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230I (2008) https://doi.org/10.1117/12.791403
An additive wavelet fusion method based on local wavelet energy for fusing multi-focus images is presented in this paper. First, a multi-resolution decomposition of the source images is obtained by wavelet transform. Second, the corresponding sub-band images are fused with different rules: the low-frequency components are fused by averaging, while the high-frequency components are fused by an additive wavelet method based on local energy. Finally, the fused image is obtained by the inverse wavelet transform. Entropy and spatial frequency are used to evaluate the fusion results. The new method is tested on two sets of multi-focus images, and the experimental results show that it achieves better fusion performance and is effective, applicable, and adaptive for different image data. The influence of the wavelet-coefficient block size and of two different wavelet transforms (Mallat's and the 'à trous' wavelet) on the fusion results is also discussed, which is of value for further research and experiments in this field.
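The decompose/fuse/reconstruct pattern can be sketched with a one-level Haar transform and a maximum-energy rule. This is a simplified stand-in for the paper's method: a per-coefficient energy comparison is used instead of a windowed local energy, and the Haar wavelet replaces Mallat's or the 'à trous' transform.

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar decomposition (image sides must be even)
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    LL, LH = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL, HH = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    c1, c2 = haar2d(img1), haar2d(img2)
    # Low frequencies: average; high frequencies: keep the coefficient with
    # the larger energy (per coefficient here; the paper uses local windows)
    bands = [(c1[0] + c2[0]) / 2.0]
    for b1, b2 in zip(c1[1:], c2[1:]):
        bands.append(np.where(b1 ** 2 >= b2 ** 2, b1, b2))
    return ihaar2d(*bands)

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, (8, 8))
same = fuse(img, img)     # fusing an image with itself must reproduce it
```

For genuine multi-focus pairs, the energy rule keeps the in-focus detail from whichever source image is sharper at each location.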
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230J (2008) https://doi.org/10.1117/12.791404
The atmospheric turbulence severely limits the angular resolution of ground based telescopes. When using Adaptive
Optics (AO) compensation, the wavefront sensor data permit the estimation of the residual PSF. Yet, this estimation is
imperfect, and a deconvolution is required to reach the diffraction limit. A joint deconvolution method based on the power spectral density (PSD) for AO images is presented. It is derived within a Bayesian framework in the context of imaging through turbulence with adaptive optics. This method uses a noise model that accounts for photonic and detector noises.
It incorporates a positivity constraint and some a priori knowledge of the object (an estimate of its local mean and a
model for its power spectral density). Finally, it reckons with an imperfect knowledge of the point spread function (PSF)
by estimating the PSF jointly with the object under soft constraints rather than blindly. These constraints are designed to
embody our knowledge of the PSF. Deconvolution results are presented for both simulated and experimental data.
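A heavily simplified, non-joint stand-in for this PSD-regularized approach is the classical Wiener filter, which likewise balances the PSF spectrum against a noise-to-signal term. The PSF, SNR value, and test object below are assumptions for illustration.

```python
import numpy as np

def wiener_deconv(blurred, psf, snr=1e4):
    # Classical Wiener filter: conj(H) / (|H|^2 + 1/SNR) in the Fourier domain
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# Simulate: a square object blurred by a centred Gaussian PSF
obj = np.zeros((32, 32))
obj[12:20, 12:20] = 1.0
ys, xs = np.mgrid[-16:16, -16:16]
psf = np.exp(-(ys ** 2 + xs ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(obj)))
restored = wiener_deconv(blurred, psf)
```

The paper's method goes further: the 1/SNR term becomes ratio-of-PSD regularization, positivity is enforced, and the PSF itself is estimated under soft constraints rather than assumed known.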
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230K (2008) https://doi.org/10.1117/12.791405
We present a new operator, the normalized negative Laplacian of Gaussian (NNLoG), to model the centre-surround mechanism of biological vision, and prove mathematically that it is scale invariant. A computational scheme for the selective detection of intensity spots is proposed. To detect spots of a specific size, the algorithm uses a single NNLoG of the appropriate size; to detect spots of unspecified size, it uses a set of NNLoG operators with equally spaced sizes, so that the location and size of spots can be determined simultaneously. This paper also investigates tracking a target as a single spot and tracking a rigid body carrying many spots, using a Kalman filter and a particle filter, respectively, as the probabilistic frameworks. The robustness and effectiveness of the proposed method are demonstrated on both synthetic images and real sequences.
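The scale-selection behaviour such an operator relies on can be sketched with a scale-normalized negative LoG. The exact NNLoG definition in the paper may differ; the 3σ kernel truncation and the test blob below are assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def nnlog_kernel(sigma):
    # Scale-normalized negative LoG: proportional to -sigma^2 times the
    # Laplacian of a normalized Gaussian, truncated at 3*sigma
    size = int(6 * sigma) | 1
    r = size // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    q = (ys ** 2 + xs ** 2) / (2.0 * sigma ** 2)
    k = (1.0 - q) * np.exp(-q) / sigma ** 2
    return k - k.mean()          # zero mean: no response on flat regions

def response(img, sigma):
    # Circular convolution via FFT with the kernel centred at the origin
    k = nnlog_kernel(sigma)
    pad = np.zeros(img.shape, dtype=float)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.real(ifft2(fft2(img) * fft2(pad)))

# A Gaussian blob of scale 2: the response across sigmas peaks near that scale
ys, xs = np.mgrid[0:33, 0:33]
blob = np.exp(-((ys - 16) ** 2 + (xs - 16) ** 2) / (2 * 2.0 ** 2))
scores = [response(blob, s).max() for s in (1.0, 2.0, 4.0)]
```

Searching for maxima jointly over position and scale is what lets a filter bank of equally spaced sizes report a spot's location and size at once.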
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230L (2008) https://doi.org/10.1117/12.791406
In this paper, we process images of different patterns with the Fast Fourier Transform (FFT) to investigate the spatial development of patterns in a dielectric barrier discharge system. A bifurcation scenario from a hexagonal pattern to a square pattern is observed under a circular boundary as the driving voltage increases. The spatial characteristics of the hexagonal and square patterns are studied by analyzing their spatial Fourier spectra. In addition, the transition from hexagons to squares, and the further development of a square pattern with a dislocation defect, are also investigated through their Fourier spectra.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230M (2008) https://doi.org/10.1117/12.791407
The diffraction efficiency of a volume grating written and read out in a Ce:KNSBN crystal with orthogonally polarized beams from a 532 nm solid-state laser is studied experimentally; it exhibits a loop versus the variation of the fringe modulation. Compared with extraordinary polarization, the diffraction efficiency is improved by 20% with mutually orthogonal polarized waves when the angle between the incidence plane and the polarization direction of the pump beam equals 30°. The properties of two-wave-coupling edge enhancement under different intensity ratios of the reference beam to the object beam are investigated experimentally using a setup for real-time writing and reading of Fourier-transform holograms with the Ce:KNSBN crystal as the recording medium. The image edge enhancement is found to be strongly influenced by this intensity ratio: there is no edge enhancement at a ratio of 50:1, whereas at a ratio of 3:1 the high-frequency components of the object beam are greatly enhanced and the low-frequency components are obviously weakened. As the intensity ratio decreases further, the edge-enhancement effect remains obvious even when the ratio reverses.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230N (2008) https://doi.org/10.1117/12.791411
In this paper, a new algorithm is proposed for edge detection. A nonlinear reaction-diffusion equation is employed to extract the image edges. The mechanism of the new algorithm is based on the local dynamics of the reaction-diffusion system. Three dynamical regimes, excitable, Turing/Hopf instability, and bistable dynamics, can be obtained depending on the control parameters. In the bistable regime the system is able to detect image edges exactly. Good results are obtained on a number of standard test problems. Compared with conventional methods, the new algorithm achieves higher accuracy and edge continuity for the image, and the edge detection process is insensitive to noise.
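The bistable mechanism can be sketched in a few lines of NumPy: intensities are driven toward the two stable states of a cubic reaction term while diffusion couples neighbours, and edges appear where adjacent pixels settle into different basins. This is an illustrative toy model, not the authors' equation; the parameters `a`, `D`, `dt` and the edge threshold are arbitrary choices.

```python
import numpy as np

def bistable_rd_edges(img, a=0.5, D=0.2, dt=0.1, steps=200):
    """Evolve u_t = D*lap(u) + u(1-u)(u-a): pixels fall into the stable
    states u=0 or u=1, and edges form between the two basins."""
    u = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize to [0, 1]
    for _ in range(steps):
        # 5-point Laplacian with periodic boundaries
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = np.clip(u + dt * (D * lap + u * (1 - u) * (u - a)), 0.0, 1.0)
    gy, gx = np.gradient(u)
    return np.hypot(gx, gy) > 0.25   # edge map from the settled field

# toy input: bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = bistable_rd_edges(img)
```

Because both basins are stable, uniform regions stay flat regardless of noise, which is the intuition behind the claimed noise insensitivity.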
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230O (2008) https://doi.org/10.1117/12.791414
Three-dimensional laser imaging is applied widely in both military and civilian fields. There are two main approaches: one is based on APD arrays, and the other on the streak tube; the latter is the relatively mature technology for producing high-resolution 3D laser radar images. In both approaches, forming the intensity image and the range image is the foundation of three-dimensional laser imaging. This paper presents a method for three-dimensional laser imaging that uses a multiple-slit streak tube to obtain clear, accurate intensity and range images. The multiple-slit streak tube imaging lidar (MS-STIL) approach uses several slits instead of the usual single slit, providing a number of capabilities beyond conventional laser radar systems. Based on an analysis of the multiple-slit streak tube's imaging theory, an algorithm for recovering the intensity and range images is developed and applied to simulated streak-tube images, and simulations of the intensity and range images are carried out.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230P (2008) https://doi.org/10.1117/12.791415
An adaptive operator that can be used generally in SAR/CCD image fusion algorithms is proposed. The operator is derived from the cross-entropy of the two source images and can be embedded in popular image fusion algorithms to make them adaptive. For traditional pyramid-based fusion, the operator optimizes the fusion rules by acting at each decomposition level; for wavelet-based fusion, it makes the algorithm adaptive by adjusting the wavelet coefficients used in fusing the low-frequency and high-frequency sub-images. Simulation experiments compare each original algorithm with its adaptive-operator counterpart. The results show that the fusion algorithms improved by the adaptive operator deliver stable fusion quality and strong adaptability.
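As a rough illustration of deriving a fusion weight from cross-entropy (the paper's operator acts inside the pyramid and wavelet decompositions; this pixel-domain weighting rule is an invented demonstration only):

```python
import numpy as np

def cross_entropy(img_a, img_b, bins=64):
    """Cross-entropy H(p, q) = -sum(p * log q) between the gray-level
    histograms of two images with values in [0, 1]."""
    eps = 1e-12
    p, _ = np.histogram(img_a, bins=bins, range=(0, 1))
    q, _ = np.histogram(img_b, bins=bins, range=(0, 1))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(-np.sum(p * np.log(q)))

def adaptive_weighted_fusion(img_a, img_b):
    """Symmetric weight from the two directed cross-entropies;
    identical sources get equal weight (illustrative rule)."""
    ca = cross_entropy(img_a, img_b)
    cb = cross_entropy(img_b, img_a)
    wa = cb / (ca + cb)
    return wa * img_a + (1 - wa) * img_b
```

In the paper's setting an analogous weight would modulate the coefficient-selection rules per level or subband rather than blend whole images.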
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230Q (2008) https://doi.org/10.1117/12.791416
We present a robust geometric active contour model to track targets in video sequences captured from mobile cameras. The target's contour is tracked on each frame of the sequence using both region and boundary information. The region information is formulated by minimizing the Bhattacharyya coefficient between the color histogram of the reference target and that of the background. For the boundary information, a gradient vector flow field is used to attract the contour from either side of the target's boundary. The contour evolution is implemented with the level set method. For each incoming frame, template matching is performed before the curve evolution to locate the region of interest. The robustness and effectiveness of the proposed algorithm are demonstrated on real sequences.
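The region term rests on the Bhattacharyya coefficient between two color histograms, which is straightforward to compute; a minimal sketch:

```python
import numpy as np

def bhattacharyya(hist_p, hist_q):
    """Bhattacharyya coefficient between two histograms after
    normalization: BC = sum_i sqrt(p_i * q_i). It is 1 for identical
    distributions and 0 for distributions with disjoint support."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

Minimizing this coefficient between the target histogram and the background histogram drives the contour to enclose a region that looks as unlike the background as possible.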
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230R (2008) https://doi.org/10.1117/12.791417
An improved corner detection algorithm based on the SUSAN principle is proposed. Because the SUSAN operator has difficulty distinguishing corners from certain special points on digital image edges, a double template is constructed: candidate corners are first extracted with the SUSAN operator, and their accurate locations are then decided with a 5×5 template. Meanwhile, an adaptive selection of the gray threshold t, based on the local gray-level dispersion around each pixel, is proposed. Experimental results show that the improved algorithm further raises the accuracy of corner detection and is better suited to digital image processing applications.
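The SUSAN stage of such a detector can be sketched as follows: for each pixel (the nucleus), count the neighbours in a circular mask whose intensity is similar (the USAN area); a small USAN indicates a corner. The threshold `t`, mask radius, and geometric threshold below are illustrative defaults, and the paper's 5×5 refinement template is omitted.

```python
import numpy as np

def susan_response(img, t=0.1, radius=3):
    """USAN-based corner response max(g - usan, 0), where usan counts
    mask pixels within t of the nucleus intensity."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    circle = ys**2 + xs**2 <= radius**2
    offsets = list(zip(ys[circle], xs[circle]))
    pad = np.pad(img, radius, mode='edge')
    usan = np.zeros((h, w))
    for dy, dx in offsets:
        shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        usan += np.abs(shifted - img) < t
    g = 0.5 * len(offsets)            # geometric threshold for corners
    return np.maximum(g - usan, 0.0)  # positive only where USAN is small
```

A corner sees roughly a quarter of the mask as similar, an edge about half, and a flat region the whole mask, so thresholding at half the mask area singles out corners.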
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230S (2008) https://doi.org/10.1117/12.791419
The phase-height mapping algorithm is the key technology of phase measurement profilometry (PMP). Because of the lens distortion of the projector, the phase-height mapping is not a simple linear transform and the mapping procedure becomes complex. A method is introduced to simplify the phase-height mapping, as follows: (1) Two sets of mutually perpendicular sinusoidal gratings are projected onto the calibration target in turn. (2) The position of each mark on the calibration target is estimated using standard image processing techniques. (3) The distortion coefficients of the projector are estimated from the phases at the target marks and their positions, according to the camera model. (4) An ideal phase distribution for projection is designed. (5) According to the camera distortion model, the ideal phase distribution is distorted and transformed to obtain the distorted phase distribution in the image plane, from which phase-shifting sinusoidal fringes with distorted phase are generated. Because projection is the inverse of imaging, these pre-distorted fringes are distorted back during projection, so the ideal sinusoidal fringes are what reach the scene. The phase-height mapping of the PMP system can then be expressed in its ideal form and the mapping procedure is simplified. A practical PMP measurement system was constructed, and calibration of the system yielded distortion coefficients k1 = -6.989×10^-2 and p1 = 5.957×10^-3. After the distorted sinusoidal fringes were generated and projected for calibration, the coefficients were estimated again as k1 = -7.882×10^-3 and p1 = -3.777×10^-3. The experimental results show that the projector distortion is greatly reduced after grating correction.
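The calibration above reports one radial coefficient k1 and one tangential coefficient p1. Assuming the standard Brown-Conrady model with the second tangential coefficient set to zero (an assumption here, since the paper reports only these two values), the forward distortion of normalized image coordinates is:

```python
def distort(x, y, k1, p1):
    """First-order radial (k1) plus tangential (p1) distortion of
    normalized image coordinates (Brown-Conrady model with p2 = 0)."""
    r2 = x * x + y * y
    xd = x * (1 + k1 * r2) + 2 * p1 * x * y
    yd = y * (1 + k1 * r2) + p1 * (r2 + 2 * y * y)
    return xd, yd

# coefficients reported before grating correction
k1, p1 = -6.989e-2, 5.957e-3
```

The displacement grows with r², so the corners of the projected field are affected far more than the center, which is why pre-distorting the fringes matters mainly toward the field edges.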
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230T (2008) https://doi.org/10.1117/12.791420
To address the practical demands of gun-bore flaw image acquisition, such as accurate optical design, complex algorithms, and precise technical requirements, this paper presents the design framework of a 3D image acquisition and processing system based on multi-baseline stereo imaging. The system mainly comprises a computer, an electrical control box, a stepping motor, and a CCD camera, and it performs image acquisition, stereo matching, 3D information reconstruction, and post-processing. Theoretical analysis and experimental results show that the images acquired by this system are precise and that it can effectively resolve the matching ambiguity caused by uniform or repeated textures. At the same time, the system offers faster measurement speed and higher measurement precision.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230U (2008) https://doi.org/10.1117/12.791421
Image semantic understanding is the key technique for bridging the "semantic gap". This paper first analyzes the current state of image semantic understanding research, including methods of image semantic representation and image semantic extraction; it then discusses the applications of image semantic understanding, and finally outlines the trends in the field.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230V (2008) https://doi.org/10.1117/12.791422
Image smoothing and super-resolution are realized using a reaction-diffusion model, a typical partial differential equation. The new method is based on the theory of self-organization. Near the Turing instability in the bistable region, the spatial diffusion process enables image smoothing and determines the smoothing effect. Compared with the average filter and the median filter, the reaction-diffusion model is found to smooth images more effectively. Super-resolution can also be achieved by the reaction-diffusion model within a suitable region of the control parameters.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230W (2008) https://doi.org/10.1117/12.791423
Image fusion based on the Brovey transform (BT) and the wavelet transform (WT) is developed to merge SPOT-5 images. The main objective of this research was to study the effects of BT and WT on the information capacity of panchromatic and multispectral images. The results show that the spatial resolution of images merged by BT and WT is higher than that of the original SPOT-5 images, and both transform techniques merge the features of the panchromatic and multispectral images well. However, the hue of the WT-merged image differs markedly from that of the original image, indicating obvious color distortion, whereas the hue of the BT-merged image is approximately the same as the original's, with no color distortion. Furthermore, the discussion of information capacity considers quality in terms of hue and definition, and quantity in terms of entropy, average gradient, and spectral authenticity. Experimental results show that images merged by BT have higher spatial resolution and better spectral features than the original SPOT-5 imagery; images merged by WT also have higher spatial resolution but lose some spectral information. Therefore, BT is very efficient and highly accurate for merging SPOT-5 images.
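The Brovey transform itself is a one-line ratio operation: each (upsampled) multispectral band is scaled by the panchromatic band divided by the sum of the multispectral bands, injecting the panchromatic spatial detail while leaving the inter-band ratios, and hence the hue, unchanged. A minimal sketch:

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey transform pan-sharpening.
    ms:  (bands, H, W) multispectral array resampled to the PAN grid
    pan: (H, W) panchromatic band
    Each band becomes ms_i * pan / sum_j(ms_j), so band ratios are kept."""
    total = ms.sum(axis=0) + eps   # eps guards against division by zero
    return ms * (pan / total)[None, :, :]
```

The hue preservation noted above follows directly from the formula: every band is multiplied by the same factor at each pixel, so ratios between bands are invariant.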
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230X (2008) https://doi.org/10.1117/12.791424
Exploiting the correlation of remotely sensed multispectral images (RSMI) in the spectral and spatial domains, an effective lossy YCrCb+IWT compression algorithm is proposed. The algorithm combines a YCrCb transform with an integer wavelet transform (IWT), so that spectral and spatial redundancy are removed respectively. The importance of each subband is determined from its energy; each subband is then quantized with an adaptive threshold according to its importance, and fixed bit-plane coding and run-length encoding are applied separately to the quantized data of each subband and to the significance map. To ensure good reconstructed image quality, the luminance component Y is compressed with little distortion; to obtain a higher compression ratio, the chrominance components Cr and Cb are compressed with larger distortion. Simulation experiments indicate that the algorithm achieves good compression performance, with an average CR ≥ 7 and an average PSNR ≥ 33 dB, for RSMI of varying content and texture. In addition, the algorithm requires little storage and is easy to implement in hardware, making it suitable for space-borne applications.
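Integer wavelet transforms of this kind are typically implemented by lifting. A sketch of one level of the reversible LeGall 5/3 transform along a 1-D signal of even length (periodic extension for brevity; the YCrCb spectral step is omitted):

```python
import numpy as np

def cdf53_forward(x):
    """One level of the LeGall 5/3 integer lifting wavelet:
    predict (detail) then update (approximation), exactly invertible."""
    s = x[0::2].astype(int)                  # even samples
    d = x[1::2].astype(int)                  # odd samples
    d -= (s + np.roll(s, -1)) // 2           # predict step -> details
    s += (d + np.roll(d, 1) + 2) // 4        # update step  -> approximation
    return s, d

def cdf53_inverse(s, d):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    s = s - (d + np.roll(d, 1) + 2) // 4
    d = d + (s + np.roll(s, -1)) // 2
    x = np.empty(s.size + d.size, dtype=int)
    x[0::2], x[1::2] = s, d
    return x
```

Because every lifting step adds an integer function of the other channel, the transform maps integers to integers and is perfectly reversible, the property that makes IWT attractive for space-borne compression hardware.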
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230Y (2008) https://doi.org/10.1117/12.791425
Image stabilization can be used in a variety of situations, including tracking systems on unstable platforms. To realize this aim, several problems must be solved, including feature point matching, the influence of moving objects, the presence of outliers, and how to select reliable feature points on the target. In this article, a new image stabilization algorithm for digital image tracking is proposed that resolves these problems. Experimental results show that the method can realize object tracking based on image stabilization and demonstrate its value in practical applications.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66230Z (2008) https://doi.org/10.1117/12.791426
High-resolution remote sensing images are the most important information sources for the 3D digital city. General remote sensing by satellite and aircraft can capture the geographic information of large areas, but has limitations in acquiring detailed information. In this paper, we design and implement a new remote sensing approach based on a UAV (Unmanned Air Vehicle), a new remote sensing platform navigated by a telecontrol device. The system we developed, named UAV-II, integrates RS and UAV technology and comprises the RS device, the RS platform, and the tele-control system. The RS device acquires photographs; three CCD digital cameras are installed on UAV-II to obtain wide-angle images. The platform, the carrier of the RS device, is made mainly of glass-fibre-reinforced plastic. The tele-control system keeps the flight state and all devices working normally. From tests of UAV images acquired under various shooting conditions, we conclude that a 45° shooting angle at a height of 300 m yields the most abundant information. After image acquisition, geometry and texture information can be extracted from the UAV images by photogrammetry. Unlike traditional photogrammetry, which requires stereo pairs, we developed a monoscopic photogrammetry method that needs only a single photo. Using all the information acquired from the UAV images, 3D models of the city can be reconstructed.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662310 (2008) https://doi.org/10.1117/12.791427
A 3D measurement system for solder paste was established. The system extracts the height and other parameters of the solder paste to realize quality control for Surface Mount Technology (SMT), using 3D laser measurement. The calibration is divided into two steps: the internal parameters of the CCD camera are obtained by Tsai's RAC method, and the laser-plane parameters are calibrated with a special multi-arris block. Scanning then yields the final 3D profile. Experimental results on the production line show that the system is easy to operate, performs well, and achieves a repeatability of ±1 µm.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662311 (2008) https://doi.org/10.1117/12.791428
High-precision measurement of the position and orientation of a remote object is a hot issue in vision inspection, being very important in aviation, precision measurement, and related fields. The position and orientation of an object at a distance of 5 m can be measured by near-infrared monocular vision based on vision measurement principles, using image feature extraction and data optimization. After analyzing existing monocular vision methods and their features, a new monocular vision method is presented to obtain the position and orientation of the target. To reduce ambient-light interference and increase the contrast between the target and the background, near-infrared light is used as the light source. For automatic camera calibration, a new calibration target based on feature circles is designed. A set of image processing algorithms, proved to be efficient, is presented as well. Experimental results show that the repeatability of the angles is better than 8" and the repeatability of the displacement is better than 0.02 mm. This monocular vision measurement method has already been used in a wheel alignment system and will find broader application.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662312 (2008) https://doi.org/10.1117/12.791429
Within a Bayesian framework, Brady proposed the adaptive texture approach for more accurate texture description and applied this model to texture segmentation with a neighbourhood-based algorithm. In this paper, the efficiency of the texture model in Brady's segmentation method is investigated. Segmentation experiments on Brodatz texture mosaics and a remote sensing image show that the good segmentation performance is mainly due to the neighbourhood-based algorithm rather than Brady's texture description model. Moreover, this probabilistic model is applied to texture classification with a MAP method. To improve the correct classification rate on the image bank, a method combining the best adaptive texture description of each class is proposed, raising the rate markedly from 91% to 95%.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662313 (2008) https://doi.org/10.1117/12.791430
Compared with unimodal wavelet packet subbands, multimodal subbands have strong texture discriminatory power. The existence of multimodal subbands in the dual-tree complex wavelet packet transform is proved. Similar to the multimodal subbands in the real wavelet packet transform, there are shift-modal subbands in the complex transform that capture the periodicities running through texture images. Furthermore, the stability of the multimodal subbands in the real transform is investigated through a classification experiment. It is concluded that, for textures with small and very regular periodicities, stable multimodal subbands can be obtained.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662314 (2008) https://doi.org/10.1117/12.791431
A novel approach is proposed to extract tree crowns from remote sensing images. The method is based on the Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler; an improved data term is developed to describe the tree crown, and a jump-and-diffusion sampling strategy is employed to optimize the energy function. Similar or better extraction results are achieved with great efficiency, and no pre-segmentation is needed. The method is verified on remote sensing images.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662315 (2008) https://doi.org/10.1117/12.791495
A hyperspectral imaging spectrometer supplies hundreds of narrow spectral bands with high spatial and spectral resolution, and the resulting data volume is huge, so efficient compression algorithms are necessary. Given the characteristics of hyperspectral images, spatial and spectral decorrelation is required before compression. This paper first presents the characteristics of hyperspectral images and then surveys research on hyperspectral image decorrelation, covering techniques based on prediction, on vector quantization, and on transform coding. Finally, future developments are discussed.
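The simplest of the prediction-based decorrelation techniques surveyed can be sketched as least-squares prediction of each band from its predecessor; when adjacent bands are strongly correlated, the residuals carry far less energy than the raw bands. This is a generic illustration, not any specific published coder:

```python
import numpy as np

def spectral_prediction_residuals(cube):
    """Inter-band decorrelation by prediction: band k is encoded as its
    residual against a least-squares scaled copy of band k-1.
    cube: (bands, H, W). Returns the reference band plus residuals."""
    out = [cube[0]]                            # first band kept as-is
    for k in range(1, cube.shape[0]):
        prev, cur = cube[k - 1].ravel(), cube[k].ravel()
        a = float(prev @ cur) / float(prev @ prev + 1e-12)  # LS gain
        out.append(cube[k] - a * cube[k - 1])  # low-energy residual
    return np.stack(out)
```

The residual cube compresses far better than the original because the spectral redundancy has been moved into the (cheaply stored) gains.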
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662316 (2008) https://doi.org/10.1117/12.791496
In this paper a new deconvolution algorithm is presented for images contaminated by periodic stripes. Inspired by the 2-D power spectrum distribution of periodic stripes in the frequency domain, we construct a novel regularized inverse filter that suppresses the amplification of striping noise in the Fourier inversion step and removes most of it; mirror-wavelet denoising then removes the remaining colored noise. In simulations with striped images, this algorithm outperforms traditional mirror-wavelet-based deconvolution in both visual quality and SNR, at the expense of only a slightly heavier computational load. The same regularized-inverse-filter idea can also be used to improve other deconvolution algorithms, such as wavelet-packet and Wiener-filter methods, when they are applied to images stained by periodic stripes.
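The role of regularization in the inversion step can be illustrated with a generic Tikhonov-style regularized inverse filter (this is not the paper's stripe-adaptive filter, whose regularizer follows the 2-D spectrum of the stripes):

```python
import numpy as np

def regularized_inverse_deconv(blurred, psf, lam=1e-2):
    """Frequency-domain deconvolution F = H* G / (|H|^2 + lam).
    lam > 0 bounds the gain where |H| is small, which is what keeps
    noise (e.g. striping) from being amplified by the inversion.
    psf: same shape as the image, centred."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H)**2 + lam)))
```

With lam = 0 this reduces to the plain inverse filter, which explodes wherever H has near-zeros; the stripe-aware version replaces the constant lam with a frequency-dependent penalty concentrated on the stripe spectrum.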
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662317 (2008) https://doi.org/10.1117/12.791497
Video-based vehicle detection is one of the most valuable techniques for Intelligent Transportation Systems (ITS). The most widely used video-based vehicle detection technique is background subtraction, whose key problem is how to extract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution is proposed for vehicle detection, resolving the problems caused by sudden camera perturbation, sudden or gradual illumination change, and the sleeping-person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
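The baseline that zone-based schemes build on is a selectively updated running-average background model; a minimal sketch (the paper's Zone-Distribution logic is not reproduced here):

```python
import numpy as np

class BackgroundSubtractor:
    """Running-average background: B <- (1-a)B + a*I, updated only on
    pixels classified as background so stopped vehicles are not absorbed."""

    def __init__(self, first_frame, alpha=0.05, thresh=0.15):
        self.bg = first_frame.astype(float)
        self.alpha = alpha
        self.thresh = thresh

    def apply(self, frame):
        frame = frame.astype(float)
        fg = np.abs(frame - self.bg) > self.thresh   # foreground mask
        # selective update: leave the model untouched under foreground
        self.bg = np.where(
            fg, self.bg, (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

Selective updating is exactly what creates the sleeping-person problem (a stopped object never enters the model), which is one of the failure modes the zone-based scheme above is designed to fix.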
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662318 (2008) https://doi.org/10.1117/12.791498
To address growing web intrusion problems, we propose a new intrusion detection method. Statistical learning theory (SLT) is introduced to intrusion detection, and a method based on the support vector machine (SVM) is presented. The theory of SVM is introduced first; then, for data preprocessing, we propose a method of reducing the dimension of the raw data sets and a method of converting categorical features to numerical values. Because network data sets are variable, sparse, and high-dimensional, we adopt the Sequential Minimal Optimization (SMO) algorithm, which is designed for large-scale problems. Test results on the DARPA data set show that the method is effective and efficient.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662319 (2008) https://doi.org/10.1117/12.791499
A novel approach is proposed for obtaining high-resolution images free of optical aberrations by perturbing the optical
wavefront phase and applying digital image processing. An optical random phase mask, whose phase-spectrum fluctuation
follows a Kolmogorov distribution, is placed between the exit pupil and the image plane of the optical system to blur the
aberrated image, yielding what is termed the intermediate image. The intermediate image acquired by a digital detector is
restored with a blind deconvolution algorithm based on maximum-likelihood estimation. The effect of optical aberrations
on the restored image and the superresolution performance of the method were explored. As a demonstration of the utility
of this method, the primary aberrations of the optical system are applied, and aberration-free images obtained by computer
simulation and by experiment are shown. The results suggest that the present method is well suited to improving the
imaging quality of an optical system and can partly remove the effect of diffraction on the restored image.
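In its non-blind form, the maximum-likelihood restoration step is the Richardson-Lucy iteration. The sketch below assumes a known PSF and circular boundaries; the paper's blind variant would alternate the same update between the image estimate and the PSF estimate:

```python
import numpy as np

def conv_circ(a, kern):
    """Circular 2-D convolution via FFT, odd-sized kernel centred at the origin."""
    pad = np.zeros_like(a)
    kh, kw = kern.shape
    pad[:kh, :kw] = kern
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(pad)))

def richardson_lucy(blurred, psf, n_iter=30):
    """Maximum-likelihood (Poisson) deconvolution with a KNOWN psf; the
    blind variant in the paper alternates this update between image and PSF."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv_circ(est, psf), 1e-12)
        est = est * conv_circ(ratio, psf_flip)  # multiplicative ML update
    return est

# Demo: blur a point source with a box PSF, then restore it.
scene = np.zeros((16, 16))
scene[8, 8] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = conv_circ(scene, psf)
restored = richardson_lucy(blurred, psf)
```

The update conserves total flux and keeps the estimate non-negative, which is why it is a natural fit for intensity images.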
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231A (2008) https://doi.org/10.1117/12.791500
The measurement system currently used in Phase Measuring Profilometry (PMP) consists of a Digital Light
Projector (DLP), a CCD camera, and a computer. However, the inherent gamma nonlinearity of the DLP and CCD
camera distorts the output, producing a non-sinusoidal fringe image; at the same time, systematic noise is an
important error source. Conventional filtering algorithms may either blur the fringes or be ineffective on fringe
images, so the captured fringe image is inevitably non-sinusoidal and contaminated by systematic random noise.
To address these problems, a pre-processing method for the fringe image is presented in this paper. First, an
anti-deforming light pattern is designed and projected by the DLP; after passing through the gamma-nonlinear
response of the whole system, the waveform is corrected. Second, an improved orientation-filter-based method is
designed to suppress the systematic random noise, achieving a better effect than other algorithms. Experiments
following these two steps yield fringe images of preferable quality: the fringe waveform is closer to the ideal
sinusoid, and the systematic noise is reduced effectively while the fringe image remains clear. Both steps of the
method are detailed and experimental results are reported.
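The anti-deforming idea can be illustrated with a pure power-law model of the projector-camera response: pre-distort the pattern with the inverse gamma so the captured fringe is sinusoidal again. The gamma value below is illustrative; a real system's response must be measured:

```python
import numpy as np

def ideal_fringe(width, period):
    """Sinusoidal fringe with values in [0, 1]."""
    x = np.arange(width)
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / period)

def precorrect(pattern, gamma):
    """Pre-distort the projected pattern with the inverse power law."""
    return pattern ** (1.0 / gamma)

gamma = 2.2                     # illustrative; a real system's gamma is measured
fringe = ideal_fringe(256, 32)
projected = precorrect(fringe, gamma)
captured = projected ** gamma   # simulated projector+camera power-law response
```

Under this idealized model the nonlinearity cancels exactly; in practice the measured response is tabulated and inverted numerically.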
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231B (2008) https://doi.org/10.1117/12.791501
This paper addresses online detection and positioning of tiles and introduces two corner-detection methods: the MIC
corner-extraction operator and the Harris corner-extraction operator. It compares the characteristics of the two
algorithms and adopts the Harris operator for on-line detection and localization of tiles. It also analyzes and
partitions the image detection window and offers a practical method for extracting and analyzing the color
characteristics of the image.
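For reference, the Harris response the comparison builds on can be sketched in a few lines; a 3x3 box sum stands in here for the Gaussian window of the full detector:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2, with central-difference
    gradients and a 3x3 box sum standing in for the usual Gaussian window
    (a simplification of the full detector)."""
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# Demo: a bright square -- the response is positive at its corners,
# negative along its edges, and zero in flat regions.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
resp = harris_response(img)
```

Thresholding the positive peaks of `resp` (with non-maximum suppression) yields the corner list used for localization.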
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231C (2008) https://doi.org/10.1117/12.791502
An unwrapping and correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear
interpolation is presented in this paper for processing dynamic panoramic annular images. An original annular
panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional
rectangular image without distortion, which agrees much better with human vision. The algorithm is modeled in
VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and
distortion-correction algorithm has low computational complexity, that the architecture for dynamic panoramic
image processing has low hardware cost and power consumption, and that the algorithm is valid.
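Setting aside the CORDIC/FPGA specifics, the unwrapping step itself can be sketched as a polar-to-rectangular resampling with bilinear interpolation (the CORDIC core would supply the sin/cos values in hardware):

```python
import numpy as np

def unwrap_annulus(img, center, r_in, r_out, out_h, out_w):
    """Map the annulus between radii r_in..r_out onto an out_h x out_w
    rectangle: rows sample radius, columns sample angle, with bilinear
    interpolation in the source image."""
    cy, cx = center
    radius = np.linspace(r_in, r_out, out_h)[:, None]
    theta = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)[None, :]
    ys = cy + radius * np.sin(theta)
    xs = cx + radius * np.cos(theta)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    fy, fx = ys - y0, xs - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1]
            + fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])

# Synthetic check: on an image whose value equals the distance from the
# center, each unwrapped row should be (nearly) constant.
yy, xx = np.mgrid[0:64, 0:64]
ring = np.sqrt((yy - 32.0) ** 2 + (xx - 32.0) ** 2)
rect = unwrap_annulus(ring, (32.0, 32.0), 5.0, 25.0, 20, 90)
```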
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231D (2008) https://doi.org/10.1117/12.791504
A binocular stereo vision system was established to measure seedling perpendicularity automatically and without contact.
First, the design of the calibration reference objects is described, and an image edge-tracing technique is proposed to
acquire the image coordinates of the control points; a simple linear method is used to calibrate the two cameras in the
system. Second, two 24-bit color seedling images are captured after camera calibration, and image segmentation is
performed to separate the seedling from the soil background, followed by graying and binarization. In the binary image,
a new algorithm is proposed to express the angle information of the seedling as a line for each of the two images in 2D
space. Finally, the line is reconstructed in 3D space from the two cameras' parameters and the two simplified lines, so the
seedling's spatial angle can be calculated from the reconstructed line equation. Experiments show that the linear
camera-calibration method and the proposed algorithms achieve good precision, and that the stereo vision measurement
system meets the needs of practical application.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231E (2008) https://doi.org/10.1117/12.791509
In the simulation of anti-ship infrared imaging guidance, it is important for scene generation to preserve the texture of
infrared images. We studied the fractal method and applied it to infrared scene generation, adopting horizontal-vertical
(HV) partitioning to encode the original image. Based on the properties of infrared images with a sea-sky background,
we used a Local Iterated Function System (LIFS) to decrease the computational complexity and increase the processing
rate. The results show that the fractal method preserves the texture of infrared images well and can be widely used in
infrared scene generation in the future.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231F (2008) https://doi.org/10.1117/12.791510
The enhancement, segmentation, and edge detection of infrared images are key techniques in precision guidance; they
have strong application backgrounds and are widely studied. Soft mathematical morphology can suppress noise
effectively, obtain better processing results, and complete processing in real time. In addition, the structuring systems
are constructed in advance and remain unchanged during general image processing, a property that allows the method
to run in parallel effectively, so it is well suited to infrared image processing. In this paper, we analyze and discuss the
application of soft morphology theory to infrared ship images. The soft mathematical morphological operations are
introduced in section two; the hardware framework of the DSPs is then given briefly; the software design and code
optimization are discussed; and conclusions are drawn at the end.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231G (2008) https://doi.org/10.1117/12.791512
A non-contact online detection method for molten-iron weighing based on image processing is researched in this paper.
A digital image of the cross on the torpedo car is obtained with a video camera, and the displacement of the spring on the
flatbed is measured through image processing. Meanwhile, the surface resolution is flagged using a parallel-line
detection method. Finally, the weight and velocity of the molten iron are calculated from the elastic law.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231H (2008) https://doi.org/10.1117/12.791514
Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial
approximation. For the case of uniform illumination, a Neumann boundary condition is involved, which makes the
solving process more complicated. The primary mirror of a large-aperture telescope is usually segmented, and each
segment is often shaped like an annulus sector; accordingly, it is necessary to analyze phase retrieval in the
annulus-sector domain. Two non-iterative methods are considered for recovering the phase: the matrix method is
based on decomposing the solution into a series of orthogonalized polynomials, while the frequency-filtering
method depends on the inverse computation of the ITE. Simulations show that both methods can eliminate the effect
of the Neumann boundary condition, save considerable computation time, and recover the distorted phase well: the
wavefront error (WFE) RMS can be less than 0.05 wavelength, even when noise is added.
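The frequency-filtering route can be sketched for a uniformly illuminated, periodic domain (rather than the paper's annulus sector), where the ITE reduces to inverting a Laplacian in Fourier space:

```python
import numpy as np

def tie_fft_solve(dIdz, k, I0, dx=1.0):
    """Under uniform illumination the ITE reads k*dI/dz = -I0*laplacian(phi);
    invert the Laplacian with its Fourier symbol -(2*pi*f)^2. Periodic
    boundaries are implicit, so the Neumann/annulus-sector issues treated in
    the paper do not arise in this sketch."""
    fy = np.fft.fftfreq(dIdz.shape[0], d=dx)
    fx = np.fft.fftfreq(dIdz.shape[1], d=dx)
    f2 = ((2 * np.pi) ** 2) * (fx[None, :] ** 2 + fy[:, None] ** 2)
    f2[0, 0] = 1.0                       # dummy value to avoid divide-by-zero
    rhs = -k * dIdz / I0                 # equals laplacian(phi)
    phi_hat = np.fft.fft2(rhs) / (-f2)
    phi_hat[0, 0] = 0.0                  # the piston term is unrecoverable
    return np.real(np.fft.ifft2(phi_hat))

# Synthetic check: a single-frequency phase and its analytically matched dI/dz.
N = 16
xx = np.tile(np.arange(N), (N, 1))
phi_true = np.cos(2 * np.pi * xx / N)
k, I0 = 1.0, 2.0
dIdz = (I0 / k) * (2 * np.pi / N) ** 2 * phi_true
phi = tie_fft_solve(dIdz, k, I0)
```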
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231I (2008) https://doi.org/10.1117/12.791515
2D image matching is a precondition for 3D object reconstruction and plays an important role in machine
vision. The key to image matching is establishing the correspondence between images. The fundamental matrix
encapsulates all the geometric information between two images, so its precise estimation is very important;
however, noise and correspondence outliers make the estimation difficult. Aiming at this problem, this paper
puts forward a new eight-point algorithm that integrates normalization with the least-median-of-squares (LMS)
algorithm, correcting the erroneous estimates caused by wrong matching points. Experimental results indicate
that the method has higher matching precision and can be used for multi-object image matching.
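The normalized eight-point core (without the least-median-of-squares outlier loop the paper adds) can be sketched as:

```python
import numpy as np

def normalize_pts(pts):
    """Hartley normalization: centroid at origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of F (x2^T F x1 = 0), rank-2 enforced.
    The LMS outlier-rejection loop of the paper is omitted here."""
    p1, T1 = normalize_pts(x1)
    p2, T2 = normalize_pts(x2)
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)       # least-squares null vector
    U, S, Vt = np.linalg.svd(F)
    F = T2.T @ (U @ np.diag([S[0], S[1], 0.0]) @ Vt) @ T1
    return F / np.linalg.norm(F)

# Synthetic stereo pair: random 3-D points seen before/after a translation.
rng = np.random.RandomState(0)
X = np.column_stack([rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20),
                     rng.uniform(4, 8, 20)])
x1 = X[:, :2] / X[:, 2:3]
Xt = X + np.array([1.0, 0.0, 0.0])
x2 = Xt[:, :2] / Xt[:, 2:3]
F = eight_point(x1, x2)
h1 = np.column_stack([x1, np.ones(20)])
h2 = np.column_stack([x2, np.ones(20)])
residuals = np.abs(np.sum(h2 * (h1 @ F.T), axis=1))
```

With exact correspondences the epipolar residuals vanish to machine precision; the LMS step would re-run this estimate on random subsets to reject outliers.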
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231J (2008) https://doi.org/10.1117/12.791516
A method for locating eyes in a face image is presented in this paper. Compared with most methods in this field,
it is insensitive to image rotation, and when applied to color images it can detect eyes even in profile faces,
albeit with some false alarms. For grey-level images, geometric features related to the eyes are generalized to
locate them more precisely; for color images, a color transform is used to detect the faces and scleras.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231K (2008) https://doi.org/10.1117/12.791517
As a major method of intellectual-property protection, digital watermarking has been widely studied and used.
However, owing to the problems of data volume and color shift, watermarking for color images has been studied far
less, even though color images are the principal medium in multimedia applications. Considering the characteristics
of the Human Visual System (HVS), an adaptive color-image watermarking algorithm is proposed in this paper. The
HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity (I)
component of the host image are used for embedding the watermark data, and the number of embedded bits is adapted
to the local complexity of the host image. The watermark image is first preprocessed by a two-level wavelet
decomposition and, to enhance the robustness and security of the algorithm, scrambled; watermark bits are then
selected or discarded according to their significance to form the actual embedding data. The experimental results show
that the proposed watermarking algorithm is robust to several common attacks while maintaining good perceptual
quality.
Feng Wang, Bin Li, De-xian Zhang, Hui Yin, Yi-tao Liang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231L (2008) https://doi.org/10.1117/12.791518
In this paper, the image-preprocessing techniques and measuring-station arrangement needed by a method for
measuring a missile target's 3D pose based on the stereovision principle are studied. A strategy is proposed and tested
that combines region segmentation with mathematical morphological tools to detect the object contour, and then
applies the Hough transform together with least-squares fitting to obtain the central axis of the object on the image
plane. The test results consistently show that the strategy achieves a high degree of accuracy. The points requiring
attention in arranging the measurement stations are illustrated. All of this serves as an important reference for putting
the pose-measurement method into practice.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231M (2008) https://doi.org/10.1117/12.791519
Traditional tabletop AR systems generally use a head-mounted display (HMD), which has shortcomings such as
limited precision and low flexibility. To solve these problems, a new design of a video see-through tabletop system
is presented. In this paper, we describe an outline of the system and its registration algorithms, and propose a
system-origin calibration algorithm. In the calibration experiment, a sign cube is introduced for the camera's first
shot; the position of the sign cube becomes the origin of the world reference frame, in which the translation and
rotation of the virtual objects relative to the origin can be calculated easily. The experimental results show that the
video see-through tabletop system meets the precision and flexibility requirements very well.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231N (2008) https://doi.org/10.1117/12.791521
Airborne laser scanning, also known by the acronym LIDAR (Light Detection And Ranging), is an operationally
mature remote-sensing technology that can provide rapid and highly accurate measurements of both objects and the
ground surface over large areas. At present, two classes of methods are mainly used to process LIDAR data: one
treats the lidar image like an ordinary two-dimensional image, while the other works directly on the point clouds,
filtering the non-ground points out of the full point cloud; several algorithms have been developed within this
second class. In this paper, a statistical algorithm based on the change of kurtosis is presented to separate
non-ground points from ground points: the inflexion of the kurtosis-change curve is easily found and used to
separate object points from ground points. The algorithm was tested on three study areas of LIDAR data provided by
ISPRS Commission III Working Group 3: City site 3, City site 4, and Forest site 5. It efficiently separates ground
and object points. Furthermore, lower objects, such as bridges, can be distinguished from taller vegetation by the
change of kurtosis.
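A simplified version of the kurtosis criterion, using the largest single-step jump rather than a full inflexion analysis of the kurtosis-change curve, can be sketched as:

```python
import numpy as np

def kurtosis(z):
    z = np.asarray(z, float)
    m, s = z.mean(), z.std()
    return ((z - m) ** 4).mean() / s ** 4

def ground_threshold(z):
    """Sort elevations low-to-high and watch the kurtosis of the growing
    prefix: it jumps sharply when the first object (non-ground) point is
    included. This 'largest jump' rule is a simplification of the paper's
    inflexion criterion on the kurtosis-change curve."""
    zs = np.sort(np.asarray(z, float))
    ks = np.array([kurtosis(zs[:n]) for n in range(4, len(zs) + 1)])
    n_ground = 4 + int(np.argmax(np.diff(ks)))
    return zs[n_ground - 1]       # highest elevation still classed as ground

# Demo: 200 near-flat ground returns plus 20 building-roof returns.
rng = np.random.RandomState(1)
ground = rng.normal(0.0, 0.1, 200)
objects = rng.normal(10.0, 0.1, 20)
z = np.concatenate([ground, objects])
thr = ground_threshold(z)
```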
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231O (2008) https://doi.org/10.1117/12.791522
Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious
alterations made to it. Authentication of a still image or a video is motivated by the recipient's interest, and its
principle is that the receiver must be able to identify the source of the document reliably. Several techniques and
concepts based on data hiding or steganography have been designed as means of image authentication. This paper
presents a color-image authentication algorithm based on convolutional coding. The high bits of the color digital
image are encoded with convolutional codes for tamper detection and localization, and the authentication messages
are hidden in the low bits of the image to keep the authentication invisible. All communication channels are subject
to errors introduced by additive Gaussian noise in their environment; such data perturbations cannot be eliminated,
but their effect can be minimized by using Forward Error Correction (FEC) in the transmitted data stream and
decoders in the receiving system that detect and correct erroneous bits. In the proposed algorithm, the message of
each pixel is convolutionally encoded; after parity check and block interleaving, the redundant bits are embedded in
the image offset. Tampering can then be detected and restored without access to the original image.
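The convolutional-encoding step can be illustrated with a small rate-1/2 encoder; the generator polynomials (7 and 5 in octal) are common textbook choices, not necessarily the code used in the paper:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3. The generators
    (7, 5 octal) are common textbook choices, not necessarily the paper's
    code; two zero bits flush the shift register at the end."""
    state, out = 0, []
    for b in bits + [0, 0]:
        state = ((state << 1) | b) & 0b111           # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # parity tap 1
        out.append(bin(state & g2).count("1") % 2)   # parity tap 2
    return out

encoded = conv_encode([1, 0, 1, 1])
```

The decoder (typically Viterbi) exploits the redundancy to detect and correct bit errors, which is what allows tamper localization without the original image.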
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231P (2008) https://doi.org/10.1117/12.791523
This paper proposes a new approach to meter verification using image arithmetic-logic operations and a
high-precision raster sensor. The method regards the data measured by the precision raster as the true value and
the data obtained by digital image processing as the measured value, and verifies the meter by comparing the two.
It exploits the dynamic movement of the meter pointer to perform image subtraction, realize image segmentation,
and obtain the deviation of the pointer image at the boundary instants. This image-segmentation technique replaces
the traditional method, whose accuracy and repeatability are limited by manual operation and visual reading. Its
precision meets the requirements of the national verification regulations, and experiments indicate that it is
reliable and highly accurate. The paper presents the overall scheme of meter verification, the method of capturing
the pointer image, and an analysis of the precision of the indication error.
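The pointer-isolation idea, subtracting frames taken as the pointer moves so that only the changed pixels remain, can be sketched as:

```python
import numpy as np

def moving_pixels(frame_a, frame_b, thresh=30):
    """Absolute frame difference followed by a threshold: only pixels the
    pointer has covered or uncovered between the two frames survive."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    return diff > thresh

frame1 = np.zeros((10, 10), dtype=np.uint8)
frame2 = np.zeros((10, 10), dtype=np.uint8)
frame1[5, 2:5] = 200       # pointer at the first instant (illustrative)
frame2[5, 6:9] = 200       # pointer after moving
mask = moving_pixels(frame1, frame2)
```

The resulting mask segments the pointer in both positions; fitting a line through each component would then give the pointer angle at each instant.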
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231Q (2008) https://doi.org/10.1117/12.791524
Based on the principles of optical measurement, an effective and simple method for measuring the distortion of a CCD
camera and its lens is presented in this paper. The method relies on computer active vision and digital image processing.
The radial distortion of the camera lens is considered while camera parameters such as the pixel interval and the focal
length are calibrated. An optoelectronic theodolite is used in our experimental system: a light spot from the theodolite is
imaged on the CCD camera, and as the theodolite rotates through an angle, the position of the spot changes without any
rotation of the camera. All reference points in the view are computed from the angle between the actual point and the
optical center, where distortion can be ignored. The error-correction parameters are computed, and the camera
parameters are then calibrated. A sub-pixel subdivision method is used to improve the point-detection precision. The
experimental results show that our method is effective, simple, and practical.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231R (2008) https://doi.org/10.1117/12.791526
During image acquisition in an automatic iris recognition system, low-quality iris images may in some cases lead to
failure of personal identification; it is therefore very important to evaluate image quality before further processing. In
this paper, we propose a fast image-quality evaluation method based on weighted information entropy, combined with
iris segmentation through localization. With this method, we can quickly grade the images and pick out high-quality
iris images from the video sequence captured by the acquisition device. Experimental results show that the method
quickly and effectively screens out images that meet the requirements of the iris recognition algorithm, and that it
improves the speed and accuracy of the iris recognition system.
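A plain version of the entropy score can be sketched as follows; the paper's actual per-grey-level weighting scheme is not reproduced here, so uniform weights are used by default:

```python
import numpy as np

def weighted_entropy(img, weights=None):
    """Shannon entropy of the grey-level histogram with optional per-level
    weights (the specific weighting used in the paper is not given here).
    Sharp, well-exposed frames tend to score higher than defocused ones,
    which is the basis for fast quality grading."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.ones(256) if weights is None else np.asarray(weights, float)
    nz = p > 0
    return float(-np.sum(w[nz] * p[nz] * np.log2(p[nz])))

# Demo: a flat frame scores zero; a two-level frame scores one bit.
flat = np.zeros((8, 8), dtype=np.uint8)
half = np.zeros((8, 8), dtype=np.uint8)
half[:, 4:] = 255
e_flat = weighted_entropy(flat)
e_half = weighted_entropy(half)
```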
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231S (2008) https://doi.org/10.1117/12.791527
To reduce the influence of noise on edge extraction and improve the precision of edge localization, and after
analyzing the principles, strengths, and weaknesses of several traditional edge-detection methods, an effective
algorithm for edge extraction in noisy images is proposed in this paper. Adopting the idea of traditional
multi-directional, multistage combinational filtering, a detail-preserving adaptive filter is designed to remove noise
before the edges are extracted. On the basis of the classical Sobel operator, we introduce an algorithm that resists
noise, runs in real time, and locates edges accurately; it distinguishes real edges from noise using the premise that
edges are successive and smooth while noise is random. The algorithm was implemented under Visual C++ 6.0 and
tested on several standard images. The experimental results prove that the presented method is feasible and effective
when the salt-and-pepper contamination of the image is below 15%; moreover, it extracts edges accurately and
effectively, with high location precision and good continuity, while maintaining high processing speed.
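The filter-then-detect pipeline can be sketched with a 3x3 median filter standing in for the paper's detail-preserving adaptive filter, followed by the Sobel gradient magnitude:

```python
import numpy as np

def median3(img):
    """3x3 median filter (interior pixels only) -- a stand-in for the
    paper's detail-preserving adaptive filter; it removes isolated
    salt-and-pepper pixels while keeping step edges in place."""
    h, w = img.shape
    stack = np.stack([img[i:h - 2 + i, j:w - 2 + j]
                      for i in range(3) for j in range(3)])
    out = img.astype(float)
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

def sobel_mag(img):
    """Sobel gradient magnitude (interior pixels only)."""
    a = img.astype(float)
    gx = np.zeros(a.shape)
    gy = np.zeros(a.shape)
    gx[1:-1, 1:-1] = (a[:-2, 2:] + 2 * a[1:-1, 2:] + a[2:, 2:]
                      - a[:-2, :-2] - 2 * a[1:-1, :-2] - a[2:, :-2])
    gy[1:-1, 1:-1] = (a[2:, :-2] + 2 * a[2:, 1:-1] + a[2:, 2:]
                      - a[:-2, :-2] - 2 * a[:-2, 1:-1] - a[:-2, 2:])
    return np.hypot(gx, gy)

# Demo: a vertical step edge plus one salt pixel.
img = np.zeros((16, 16))
img[:, 8:] = 100.0
img[3, 3] = 255.0            # isolated salt pixel
med = median3(img)
mag = sobel_mag(med)
```

The salt pixel vanishes after the median step while the step edge survives, so the Sobel magnitude peaks only along the true edge.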
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231T (2008) https://doi.org/10.1117/12.791528
This article studies methods for reconstructing and denoising 3-D OCT images. Optical coherence tomography
(OCT) is a novel tomography method that provides non-contact, noninvasive imaging of in vivo tissue with high
resolution and high speed, and has therefore become an important direction in biomedical imaging. However, when
the OCT system is used on in vivo specimens, noise and distortion appear because the system speed is limited, so
the images require denoising and reconstruction. This paper studies highly scattering media, such as skin
specimens, and investigates filtering and reconstruction algorithms. It proposes a novel dynamic average
background-estimation algorithm based on time-domain estimation, combined with frequency-domain filtering to
avoid longitudinal distortion and depth-dependent amplitude distortion. Experiments compare the above methods
and show that these algorithms improve image quality. Gray-scale and color-index methods, aimed at increasing
visual resolution, are used to implement pseudo-color coding of OCT ophthalmology images, and iterative
reconstruction is used to optimize the algorithm and realize 3-D reconstruction of the OCT system data.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231U (2008) https://doi.org/10.1117/12.791529
This paper presents a new algorithm based on a mixing transform to eliminate redundancy: an SHIRCT and subtraction mixing transform is used to eliminate spectral redundancy, and a 2D CDF(2,2) DWT to eliminate spatial redundancy. The transform is convenient to realize in hardware, since it can be fully implemented with add and shift operations, and its redundancy elimination is better than that of the (1D+2D) CDF(2,2) DWT. An improved SPIHT+CABAC mixed compression coding algorithm is used for compression coding. Experimental results show that for lossless image compression the method performs slightly better than (1D+2D) CDF(2,2) DWT + improved SPIHT+CABAC, and much better than JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST, and MST. Using the hyperspectral image Canal from the American JPL laboratory as the data set for the lossless compression test, the compression ratio of this algorithm exceeds the above algorithms on average by 42%, 37%, 35%, 30%, 16%, 13%, and 11%, respectively.
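The hardware-friendliness claimed above comes from lifting: the CDF(2,2) (5/3) integer wavelet needs only additions and shifts. A generic one-level 1-D sketch (with periodic boundary handling as a simplifying assumption, not the paper's scheme):

```python
import numpy as np

def cdf22_forward(x):
    """One level of the integer CDF(2,2) (5/3) lifting transform.

    Only additions and shifts are needed, which is what makes the
    transform attractive for hardware realization."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd - floor((left + right) / 2)
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: approximation = even + floor((d_left + d + 2) / 4)
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def cdf22_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 14, 13, 11, 9, 8, 8])
s, d = cdf22_forward(x)
assert np.array_equal(cdf22_inverse(s, d), x)  # lossless round-trip
```

Because every step is an integer add/shift, the inverse reverses each step exactly, which is what permits lossless compression.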
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231V (2008) https://doi.org/10.1117/12.791531
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the varying correlation between different spectral bands and still works well when the number of bands is not a power of 2. Using a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding, the experiments show that a satisfactory lossless compression result can be achieved. Using the hyperspectral image Canal from the American JPL laboratory as the data set for the lossless compression test, when the number of bands is not a power of 2 the lossless compression result of this algorithm is much better than the results obtained by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average its compression ratio exceeds these algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the number of bands is a power of 2, groupings of 8, 16, and 32 bands were tested on 128 frames of the image Canal; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm also has advantages in operation speed and convenience of hardware realization.
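The grouping idea can be sketched generically: split the band axis into fixed-size groups (any leftover bands simply form a smaller last group, so no power-of-2 count is needed) and decorrelate within each group with an invertible subtraction transform. The group size and the choice of the first band as reference below are our assumptions:

```python
import numpy as np

def group_bands(cube, group_size=8):
    """Split a hyperspectral cube (bands, H, W) into groups of bands.

    The last group keeps the leftover bands, so the scheme does not
    require the band count to be a power of 2."""
    bands = cube.shape[0]
    return [cube[i:i + group_size] for i in range(0, bands, group_size)]

def subtraction_transform(group):
    """Within a group, subtract the first band from the others
    (a simple, exactly invertible spectral-decorrelation step)."""
    out = group.copy()
    out[1:] -= group[:1]
    return out

def inverse_subtraction(out):
    group = out.copy()
    group[1:] += out[:1]
    return group

cube = np.random.randint(0, 256, size=(20, 4, 4)).astype(np.int32)
groups = group_bands(cube, 8)          # group sizes 8, 8, 4 for 20 bands
restored = np.concatenate([inverse_subtraction(subtraction_transform(g))
                           for g in groups])
assert np.array_equal(restored, cube)  # lossless
```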
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231W (2008) https://doi.org/10.1117/12.791532
In this paper, we propose a new blind watermarking algorithm for images based on a tree structure. The algorithm embeds the watermark in the wavelet transform domain, and the embedding positions are determined by a significant coefficient wavelet tree (SCWT) structure, which follows the same idea as the embedded zero-tree wavelet (EZW) compression technique. Using EZW concepts, we obtain coefficients that are related to each other by a tree structure, and this relationship among the wavelet coefficients allows our technique to embed more watermark data. If the watermarked image is attacked so that the set of significant coefficients changes, the tree structure allows the correlation-based watermark detector to recover synchronization. The algorithm also uses a visually adaptive scheme to insert the watermark so as to minimize its perceptibility. In addition to the watermark, a template is inserted into the watermarked image at the same time. The template carries synchronization information, allowing the detector to determine the type of geometric transformation applied to the watermarked image. Experimental results show that the proposed watermarking algorithm is robust against most signal processing attacks, such as JPEG compression, median filtering, sharpening, and rotation. It is also an adaptive method that performs well at finding the best areas in which to insert a stronger watermark.
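A minimal sketch of embedding in significant wavelet coefficients with correlation-based detection follows; the Haar subband, threshold, and strength are illustrative assumptions and do not reproduce the paper's SCWT tree structure or template:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar2d_hl(img):
    """HL subband of a one-level 2-D Haar transform (a simple stand-in
    for the wavelet decomposition used in the paper)."""
    a = (img[:, 0::2] - img[:, 1::2]) / 2          # horizontal detail
    return (a[0::2] + a[1::2]) / 2                 # vertical average

def embed(coeffs, mark, strength=3.0, thresh=10.0):
    """Additively embed a +/-1 watermark at the significant
    coefficients (|c| > thresh), mimicking significance-based selection."""
    out = coeffs.copy()
    sig = np.abs(out) > thresh
    out[sig] += strength * mark[sig]
    return out

def detect(coeffs, mark, thresh=10.0):
    """Correlation-based detector: a clearly positive response
    indicates the watermark is present."""
    sig = np.abs(coeffs) > thresh
    return float(np.mean(coeffs[sig] * mark[sig])) if sig.any() else 0.0

img = rng.normal(0, 20, size=(128, 128))
hl = haar2d_hl(img)
mark = rng.choice([-1.0, 1.0], size=hl.shape)
marked = embed(hl, mark)
assert detect(marked, mark) > detect(hl, mark)
```

Keying the mark by coefficient position (rather than by embedding order) is what lets the detector tolerate small changes in the significant set after an attack.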
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231X (2008) https://doi.org/10.1117/12.791533
In order to integrate a virtual object seamlessly into a real scene in an augmented reality (AR) system, we need to simulate the interaction of the virtual object with the illumination of the scene, so knowledge of the illuminant direction is crucial. We present a novel approach for estimating that direction from a single image of a scene illuminated by a light source, regardless of whether it is a point source or a directional one. We propose to employ a marker cube, used for registration to determine the rigid transformation relating 2D images to known 3D geometry, and a Lambertian probe sphere, used to estimate the light source direction by image processing. The key step is finding and extracting the intensity occluding curve on the sphere. Experimental results show that our approach is computationally efficient and obtains the light source direction accurately.
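For a Lambertian surface, a standard way to recover the light direction (shown here as a generic least-squares sketch, not the paper's occluding-curve extraction) is to solve I = albedo * (n . l) over the lit pixels:

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares light direction from a Lambertian surface.

    For lit points, I = albedo * (n . l); solving the overdetermined
    system N l = I gives l up to the albedo scale, so we normalize."""
    l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return l / np.linalg.norm(l)

# Synthetic check: normals sampled on a sphere, light from a known direction.
rng = np.random.default_rng(1)
true_l = np.array([0.3, 0.5, 0.81])
true_l /= np.linalg.norm(true_l)
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
shading = n @ true_l
lit = shading > 0                      # keep only illuminated points
est = estimate_light_direction(n[lit], shading[lit])
assert np.allclose(est, true_l, atol=1e-6)
```

On a probe sphere the normals are known analytically from pixel position, which is why a sphere makes a convenient light probe.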
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231Y (2008) https://doi.org/10.1117/12.791535
A real-time processing system for infrared and low-level-light (LLL) image fusion is developed. The system consists of an uncooled infrared imaging system, an LLL TV system, a real-time image processor, an image acquisition card, a computer, and a monitor. The infrared imaging system is based on a 384×288-element uncooled microbolometer focal plane array, and the LLL TV system uses a super second-generation (Gen II) image intensifier. A real-time image processor is designed to process the video outputs of the uncooled infrared imaging system and the LLL TV system; real-time image fusion based on a weighted pixel average is realized on this processor. Image fusion algorithms based on PCA-weighted pixel averaging and pyramidal decomposition are simulated on the computer, and the results are given and analyzed.
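The real-time fusion rule is a pixel-wise weighted average, which can be sketched as follows; the weight value and the clipping to 8-bit range are our assumptions:

```python
import numpy as np

def fuse_weighted_average(ir, lll, w=0.5):
    """Pixel-wise weighted-average fusion of registered IR and LLL frames.

    This is the simple rule suited to real-time hardware; a PCA-weighted
    variant would instead derive w from the two images' covariance."""
    ir = ir.astype(np.float32)
    lll = lll.astype(np.float32)
    fused = w * ir + (1.0 - w) * lll
    return np.clip(fused, 0, 255).astype(np.uint8)

ir = np.full((288, 384), 200, dtype=np.uint8)    # uncooled IR frame
lll = np.full((288, 384), 100, dtype=np.uint8)   # LLL TV frame
out = fuse_weighted_average(ir, lll, w=0.25)     # 0.25*200 + 0.75*100 = 125
```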
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 66231Z (2008) https://doi.org/10.1117/12.791536
A fast SEM (scanning electron microscopy) image super-resolution algorithm is proposed for an e-beam inspection system. Many super-resolution algorithms exist for optical images, but SEM images differ from optical images in two respects. First, there is distortion within an SEM image sequence. Second, the gray-level values of different frames are not uniform. To address these two issues, the whole image is divided into sub-images, and gray-level regularization and sub-pixel shift estimation are performed on the sub-images. The low complexity of the algorithm meets the requirements of real-time processing and image display. Test results show that super-resolution images produced by the proposed algorithm restore more detailed information.
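Sub-pixel shift estimation between sub-images is commonly initialized with phase correlation; the sketch below recovers the integer shift (sub-pixel refinement would interpolate around the peak) and is a generic illustration, not the paper's estimator:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two image tiles by
    phase correlation."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
tile = rng.normal(size=(64, 64))
shifted = np.roll(tile, shift=(3, -5), axis=(0, 1))
assert phase_correlation_shift(shifted, tile) == (3, -5)
```

Normalizing the cross-power spectrum to unit magnitude is what makes the correlation peak sharp and insensitive to the gray-level mismatch between frames.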
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662320 (2008) https://doi.org/10.1117/12.791545
A pattern search algorithm is proposed to locate the regions of interest for an SEM inspection system. Effective parameters are needed to classify the image patterns of the wafer. Errors sometimes occur in ROI recognition because of strong noise and other factors, so a filter is proposed to remove the wrong selections. Tests on multi-pattern SEM images show that the algorithm meets the requirements of precision and speed.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662321 (2008) https://doi.org/10.1117/12.791546
Laser active imaging systems, which offer high resolution, anti-jamming capability, and three-dimensional (3-D) imaging, have been widely used. Their imagery, however, is usually degraded by speckle noise, which makes pixel gray levels fluctuate violently, hides subtle details, and greatly reduces imaging resolution. Removing speckle noise is one of the most difficult problems in such a system because of the poor statistical properties of speckle. Based on analysis of the statistical characteristics of speckle and of morphological filtering, an improved multistage morphological filtering algorithm is studied and implemented on a TMS320C6416 DSP. The algorithm performs morphological open-close and close-open transformations using two different linear structuring elements and then takes a weighted average of the results; the weighting coefficients are determined by the statistical characteristics of the speckle. The algorithm was implemented on the TMS320C6416 DSP after computer simulation, and the software design procedure is presented in full. The methods used to realize and optimize the algorithm are illustrated with respect to the structural characteristics of the TMS320C6416 DSP and the features of the algorithm. To fully benefit from such devices and increase the performance of the whole system, a series of steps must be taken to optimize the DSP programs. This paper introduces several effective methods for TMS320C6x C-language optimization, including refining code structure, eliminating memory dependences, and optimizing assembly code via linear assembly, and then reports results from a real-time implementation. Processing results on speckle-blurred images show that the algorithm not only suppresses speckle noise effectively but also preserves the geometrical features of the images. The optimized code running on the DSP platform achieves better instruction-level parallelism and pipelining, and the program proves reliable, effective, and strongly real-time.
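The filtering stage can be sketched in a few lines: open-close and close-open with two linear structuring elements, then a weighted average. The element lengths and equal weights below are placeholders for the speckle-statistics-derived values:

```python
import numpy as np

def _erode(img, length, axis):
    """Gray-scale erosion with a flat linear structuring element."""
    pad = length // 2
    padded = np.pad(img, [(pad, pad) if a == axis else (0, 0)
                          for a in range(img.ndim)], mode="edge")
    stack = [np.take(padded, np.arange(k, k + img.shape[axis]), axis=axis)
             for k in range(length)]
    return np.min(stack, axis=0)

def _dilate(img, length, axis):
    return -_erode(-img, length, axis)

def open_close(img, length, axis):
    o = _dilate(_erode(img, length, axis), length, axis)       # opening
    return _erode(_dilate(o, length, axis), length, axis)      # then closing

def close_open(img, length, axis):
    c = _erode(_dilate(img, length, axis), length, axis)       # closing
    return _dilate(_erode(c, length, axis), length, axis)      # then opening

def multistage_filter(img, lengths=(3, 5), w=(0.5, 0.5)):
    """Open-close and close-open with two linear structuring elements
    (horizontal and vertical lines here), then a weighted average."""
    oc = 0.5 * (open_close(img, lengths[0], 1) + open_close(img, lengths[1], 0))
    co = 0.5 * (close_open(img, lengths[0], 1) + close_open(img, lengths[1], 0))
    return w[0] * oc + w[1] * co

img = np.full((32, 32), 100.0)
img[10, 10] = 255.0                     # isolated speckle spike
out = multistage_filter(img)
assert out[10, 10] == 100.0             # spike removed, flat area untouched
```

Averaging the open-close and close-open results counteracts the opposite biases of the two operators (opening suppresses bright spikes, closing suppresses dark ones).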
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662322 (2008) https://doi.org/10.1117/12.791549
Considering the physical characteristics of infrared and visible images, a parallel fusion algorithm is proposed that fuses target regions and background regions separately. First, the improved marker-controlled watershed algorithm and a "mutual mapping" approach are used to segment the images into corresponding target and background regions. For the quadrate IR and visible target regions, the target fused image is obtained by directional contrast and region maximum standard deviation methods in the wavelet domain. For the IR and visible background regions, the background fused image is obtained by a variance-weighted information entropy (VWIE) method based on the background complexity degree (BCD). The total fused image is obtained by mathematical superposition of the target and background fused images. Compared with several common algorithms using the "quality coefficient," an objective and comprehensive evaluation index, the proposed method better preserves the IR features of the infrared image and the detailed information of the visible image, and it can fuse background images effectively as well. Experimental results show that the parallel fusion algorithm not only improves fusion accuracy but also increases operation speed.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662323 (2008) https://doi.org/10.1117/12.791550
Aimed at the imaging principles and characteristics of infrared and visible equipment and their application demands, an effective algorithm is proposed for small moving-target detection based on fused infrared and visible images. The algorithm suppresses background clutter by a morphological top-hat transform, and the results are enhanced by a tree-structured wavelet transform using an improved fusion rule based on "absolute value" matching degree. This filtering enhances targets while effectively suppressing part of the clutter and false targets. Differencing among three consecutive frames accomplishes target segmentation, and the SNR is improved by N-frame energy accumulation. The continuity and regularity of small moving targets are then exploited to eliminate false targets, noise points, and background remnants, so that the small targets can be detected. Finally, the pre-processing performance of traditional filtering approaches is compared with that of the proposed algorithm, and the detection and tracking results prove its validity. Two parameters, RMSE (relative mean square error) and BSF (background suppression factor), are given to evaluate the filtering performance of the approach, and four indexes, Mutual Information (MI), Associated Entropy (AE), SNR, and RMSE, are used to evaluate the fusion quality. Experimental results show that the proposed multilevel, multifunctional algorithm outperforms other methods in image pre-processing, image fusion, and small moving-target detection.
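The first two detection steps, top-hat background suppression followed by three-frame differencing, can be sketched as follows (structuring-element size and the toy target layout are illustrative assumptions):

```python
import numpy as np

def white_top_hat(img, size=5):
    """White top-hat with a flat square structuring element:
    image minus its gray-scale opening, keeping small bright blobs."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (size, size))
    eroded = win.min(axis=(2, 3))
    p2 = np.pad(eroded, pad, mode="edge")
    win2 = np.lib.stride_tricks.sliding_window_view(p2, (size, size))
    opened = win2.max(axis=(2, 3))       # erosion then dilation = opening
    return img - opened

def three_frame_difference(f0, f1, f2):
    """Segmentation by differencing three consecutive (pre-filtered)
    frames: a moving target shows up in both pairwise differences."""
    d1 = np.abs(f1 - f0)
    d2 = np.abs(f2 - f1)
    return np.minimum(d1, d2)

# A single bright pixel moving across a flat background:
frames = [np.zeros((32, 32)) for _ in range(3)]
for t, f in enumerate(frames):
    f[16, 10 + t] = 50.0
filtered = [white_top_hat(f) for f in frames]
motion = three_frame_difference(*filtered)
```

The minimum of the two pairwise differences localizes the target to its position in the middle frame, suppressing the "ghost" at its old and new positions.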
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662324 (2008) https://doi.org/10.1117/12.791551
A fringe-projection 3D profile sensor can obtain a dense coordinate map of an object's outline quickly and quantitatively. Phase unwrapping plays an important role in the performance of the sensor when phase-shifting fringe patterns are used, and the multi-period phase-shift method can considerably increase the accuracy of the measured phase. However, owing to intensity-approximation errors arising from the digital grating and the limited spatial resolution of the sensor's projector and cameras, the unwrapped phase obtained by temporal phase unwrapping alone remains noisy. A new algorithm combining spatial and temporal phase unwrapping is presented. The algorithm, a good compromise between the number of gratings required and the unwrapping reliability, is designed specifically for two groups of digital phase-shifting gratings with different periods. Experiments with about thirty pairs of periods illustrate the efficiency of the temporal and spatial criteria, and the results give the tolerance of phase calculation within which unwrapping is correct. The algorithm has been implemented on a test system, where the rate of successful unwrapping reaches 80% with a repeatability error of 0.05 rad.
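The temporal half of the scheme can be illustrated with the textbook two-frequency unwrapping formula; the variable names and the assumption that the coarse phase is unambiguous over the field are ours:

```python
import numpy as np

def temporal_unwrap(phi_low, phi_high, ratio):
    """Two-frequency temporal phase unwrapping (textbook form).

    phi_low:  phase of the coarse (long-period) grating, used as
              reference; assumed free of 2-pi ambiguity over the field.
    phi_high: wrapped phase of the fine grating.
    ratio:    period ratio (fine fringes per coarse fringe).
    """
    # Fringe order of the fine grating, from the coarse reference:
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# True phase spanning several fine-fringe periods:
true_phase = np.linspace(0, 6 * np.pi, 100)
ratio = 4.0
phi_high = np.angle(np.exp(1j * true_phase))   # wrapped fine phase
phi_low = true_phase / ratio                   # coarse, unambiguous
unwrapped = temporal_unwrap(phi_low, phi_high, ratio)
assert np.allclose(unwrapped, true_phase, atol=1e-9)
```

The rounding step is where noise matters: if the coarse-phase error exceeds half a fine-fringe period, the wrong order k is chosen, which is exactly the failure the spatial criterion in the paper is meant to catch.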
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662325 (2008) https://doi.org/10.1117/12.791588
The presence of noise in interferograms is unavoidable; it may be introduced during acquisition and transmission. These random distortions make it difficult to perform any required processing, so removing noise is often the first step in interferogram analysis. In recent years, partial differential equation (PDE) methods in image processing have received extensive attention. Compared with traditional approaches such as median, average, and low-pass filters, PDE methods can not only remove noise but also preserve much more detail without blurring or shifting the edges. In this paper, a fourth-order partial differential equation is applied to optimize the trade-off between noise removal and edge preservation. The time evolution of this PDE seeks to minimize a cost function that is an increasing function of the absolute value of the Laplacian of the image intensity. Since the Laplacian of an image at a pixel is zero where the image is planar in its neighborhood, the PDE removes noise and preserves edges by approximating the observed image with a piecewise planar image; piecewise planar images look more natural than the step images that anisotropic diffusion (a second-order PDE) uses to approximate an observed image. Simulation results make it clear that the fourth-order partial differential equation can effectively remove noise while preserving interferogram edges.
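A minimal sketch of such a fourth-order (You-Kaveh-style) evolution follows; the step size, iteration count, and diffusivity constant are illustrative choices, not values from the paper:

```python
import numpy as np

def laplacian(u):
    """Five-point Laplacian with replicated (Neumann) boundaries."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] +
            up[1:-1, :-2] + up[1:-1, 2:] - 4 * u)

def fourth_order_denoise(img, steps=100, dt=0.02, k=2.0):
    """Fourth-order diffusion sketch:
    u_t = -Laplacian( c(|Lap u|) * Lap u ),  c(s) = 1 / (1 + (s/k)^2).
    Planar regions have Lap u = 0 and are left untouched, which is
    why the evolution tends toward piecewise planar images."""
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = laplacian(u)
        c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)
        u -= dt * laplacian(c * lap)
    return u

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))   # planar ramp (Lap = 0)
noisy = clean + rng.normal(0, 0.05, clean.shape)
out = fourth_order_denoise(noisy)
```

Note the small explicit time step: the fourth-order operator imposes a much stricter stability limit (roughly dt below 1/32 on a unit grid) than second-order diffusion.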
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662326 (2008) https://doi.org/10.1117/12.791589
Estimation of camera pose is an integral part and a classical problem of augmented reality (AR) systems and computer vision. Accurate pose estimation is crucial for determining the rigid transformation relating 2D images to known 3D geometry; the algorithm should therefore be not only fast and accurate but also robust in an AR system. The orthogonal iterative (OI) algorithm is a good method, but it requires a proper initialization and cannot handle pose ambiguity. A method based on OI that we presented previously provides a good initialization and solves a pose ambiguity introduced by coplanar markers. However, two further potential problems often cause the algorithm to produce wrong results, making it unsteady and not robust. In this paper, we extend the method by resolving the pose ambiguities that originate from these problems. Two additional constraints are employed: the camera must be located in front of the marker, and the camera must be oriented toward the marker. Experiments prove that the improved method is stable and computes the camera pose quickly and correctly; moreover, since it handles pose ambiguity, it is quite robust in an AR system.
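The two constraints can be written down directly once a pose convention is fixed; the formulation below (marker in its own z = 0 plane, X_cam = R X + t) and all symbols are our assumptions, not the paper's:

```python
import numpy as np

def satisfies_pose_constraints(R, t):
    """Check the two constraints used to reject ambiguous poses.

    1. "Camera in front of the marker": the camera center, expressed in
       marker coordinates, must lie on the marker's +z (front) side.
    2. "Camera oriented toward the marker": the marker center must have
       positive depth along the camera's optical (+z) axis.
    """
    cam_center_in_marker = -R.T @ t
    marker_center_in_cam = t              # R @ 0 + t
    return bool(cam_center_in_marker[2] > 0 and marker_center_in_cam[2] > 0)

# A camera 1 m in front of the marker, looking straight at it:
# marker +z must map to camera -z, so flip the y and z axes.
R = np.diag([1.0, -1.0, -1.0])
t = np.array([0.0, 0.0, 1.0])
assert satisfies_pose_constraints(R, t)
# A mirrored (ambiguous) candidate placing the marker behind the camera:
assert not satisfies_pose_constraints(np.eye(3), np.array([0.0, 0.0, -1.0]))
```

In practice a pose estimator would evaluate both candidate solutions and keep the one passing these checks (falling back to reprojection error if both pass).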
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662327 (2008) https://doi.org/10.1117/12.791590
In this paper, we propose a new method for detecting dim, small targets in infrared images containing cloud clutter. Pixels in the image are divided into three classes: clear sky and cloud-interior areas, cloud-edge areas, and small targets. Different models are established for each class with corresponding feature vectors. Then, based on fuzzy classification, correlation coefficients are calculated to determine the most relevant class for each pixel. Pixels belonging to the target class are extracted, completing the detection. Experimental results show that the proposed method can efficiently detect dim, small targets in infrared images with low SNR.
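The classification step can be sketched as picking, for each pixel's feature vector, the class model with the highest correlation coefficient; the toy feature vectors below are invented for illustration and do not reproduce the paper's features:

```python
import numpy as np

def classify_pixels(features, class_models):
    """Assign each pixel feature vector to the class whose model vector
    it correlates with best (a minimal stand-in for the fuzzy
    classification step).

    features:     (N, d) array of per-pixel feature vectors.
    class_models: (C, d) array, one representative vector per class.
    Returns the index of the most relevant class for each pixel."""
    f = features - features.mean(axis=1, keepdims=True)
    m = class_models - class_models.mean(axis=1, keepdims=True)
    f /= np.linalg.norm(f, axis=1, keepdims=True) + 1e-12
    m /= np.linalg.norm(m, axis=1, keepdims=True) + 1e-12
    corr = f @ m.T                        # (N, C) correlation coefficients
    return np.argmax(corr, axis=1)

models = np.array([[1.0, 1.0, 1.0, 0.9],   # class 0: clear sky / cloud interior
                   [1.0, 0.2, 1.0, 0.2],   # class 1: cloud edge
                   [0.1, 1.0, 0.1, 0.1]])  # class 2: small target
pixels = np.array([[0.9, 1.0, 1.1, 0.95],
                   [0.2, 1.1, 0.15, 0.2]])
labels = classify_pixels(pixels, models)   # second pixel matches the target class
```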
Wen-zhou Zhang, Da-yuan Yan, Da-ming Zhang, Chun Wang, Yang Zhang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662328 (2008) https://doi.org/10.1117/12.791591
Network transmission strongly affects the capability of a desktop AR system, so the speed and accuracy of network transmission are very important. This article introduces the hardware and software design of the network transmission system, and experimental results show that both the speed and the accuracy of network transmission satisfy the requirements of image data transmission in a desktop AR system.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, 662329 (2008) https://doi.org/10.1117/12.791592
A real-time camera tracking algorithm using natural features is proposed for augmented reality applications. The system relies on passive vision techniques to obtain the camera pose online; only a limited number of calibrated key-frames and a rough 3D model of part of the real environment are required. Accurate camera tracking is achieved by matching the input image with the key-frame whose viewpoint is closest to the current one. The wide-baseline correspondence problem is solved by rendering an intermediate image, and information from previous frames is used for jitter correction. The algorithm's performance was tested on real image sequences. Experimental results demonstrate that our registration algorithm is not only accurate and robust but can also handle significant aspect changes.