This PDF file contains the front matter associated with SPIE
Proceedings Volume 6789, including the Title Page, Copyright
information, Table of Contents, Preface, and the
Conference Committee listing.
Computer-aided diagnosis (CAD) provides a computer output as a "second opinion" in order to assist radiologists in the
diagnosis of various diseases on medical images. Currently, a significant research effort is being devoted to the detection
and characterization of lung nodules in thin-section computed tomography (CT) images, which represents one of the newest
directions of CAD development in thoracic imaging. We describe in this article the current status of the development and
evaluation of CAD schemes for the detection of lung nodules in thin-section CT. We found that current schemes for nodule detection still report many false positives; significant effort is therefore needed to further improve the performance of current CAD schemes for nodule detection in thin-section CT.
With the development of perinatal medicine and the growing demand for healthy births, ultrasound is widely used in obstetrics, especially for monitoring fetal growth. In the past decade, the introduction of high-resolution three-dimensional ultrasound and color power Doppler scanners has provided a more direct, sensitive, and early-warning method for monitoring the fetus and making predictions for the mother. This paper introduces a novel method that examines images of the placental vasculature to assess fetal growth. First, a set of placental vascularity images of the pregnant woman is acquired with a color Doppler ultrasound scanner; points are then marked in these images, and a section image is obtained at each, so that the internal blood-vessel distribution at those points can be observed. This method provides an efficient tool for physicians.
In this paper, a multi-step registration method for brain-atlas and clinical Magnetic Resonance Imaging (MRI) data based on Thin-Plate Splines (TPS) and a Piecewise Grid System (PGS) is presented. The method helps doctors determine the corresponding anatomical structures between the patient image and the brain atlas by piecewise nonlinear registration. Since doctors mostly attend to a particular Region of Interest (ROI), and global nonlinear registration is too time-consuming for real-time clinical application, we propose a novel scheme that performs linear registration over the global area before nonlinear registration is applied within the selected ROI. Homologous feature points are defined to calculate the transform matrix between the patient data and the brain atlas and thereby derive the mapping function. Finally, we integrate the proposed approach into a neurosurgical planning and guidance system, which lends great efficiency to both neuroanatomical education and the guidance of neurosurgical operations. The experimental results show that the proposed approach maintains an average registration error of 0.25 mm in a near-real-time manner.
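The global linear step described above can be illustrated with a drastically simplified, hypothetical sketch: homologous feature points marked in the atlas and the patient image are used to estimate a transform by least squares. Here only the translation component is estimated (its closed-form least-squares solution is the mean displacement); the paper's method solves a full transform matrix.

```python
# Hypothetical sketch: estimate the translation part of a global linear
# registration from paired homologous feature points (least-squares solution
# for pure translation = mean displacement between the pairs).

def estimate_translation(src, dst):
    """Least-squares translation between paired 2-D points."""
    n = len(src)
    tx = sum(d[0] - s[0] for s, d in zip(src, dst)) / n
    ty = sum(d[1] - s[1] for s, d in zip(src, dst)) / n
    return tx, ty

def apply_translation(points, t):
    return [(x + t[0], y + t[1]) for x, y in points]

# illustrative point pairs: the patient points are the atlas points
# shifted by (+2, -3)
atlas_pts   = [(10.0, 20.0), (30.0, 40.0), (50.0, 25.0)]
patient_pts = [(12.0, 17.0), (32.0, 37.0), (52.0, 22.0)]
t = estimate_translation(atlas_pts, patient_pts)
```

A full linear registration would additionally estimate rotation, scaling, and shear from the same point pairs.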
Software for 3D reconstruction of medical images has mostly focused on visualization. Few researchers have studied techniques and software for numerical-simulation-oriented 3D reconstruction of medical images, which must solve problems such as visualization of the 3D reconstruction, model modification, and mesh generation for finite element analysis. These issues are reviewed and analyzed here, development toolkits are introduced, and several questions deserving attention, their corresponding solutions, and some new perspectives are proposed.
Liver segmentation is critical in designing and developing computer-assisted systems that have been used for liver
disease diagnosis before surgery or transplantation. The purpose of this study is to develop a computerized system for
extracting liver contours and reconstructing liver volume using contrast-enhanced hepatic CT images. The automatic
liver segmentation method adopted the graph optimal algorithm with ratio contour as its salient measure. This new cost
function encoded the Gestalt laws and synthesized the gap length, the liver region area, the length of the closed contour
and the average curvature of the closed boundary. With the extracted liver contours, a system to exclude tissues outside the liver was developed; it saves time and simplifies liver volume reconstruction by minimizing manual intervention. Some 3D-rendered reconstructions were also created to demonstrate the final results of our system.
CT (computed tomography) imaging is a technology that uses X-ray beams (radiation) and computers to form detailed cross-sectional images of an area of anatomy. However, randomly scattered X-rays in the CT imaging system greatly reduce the radiographic contrast of CT images. In this paper, a four-step method is proposed for restoring CT images: first, the EGSnrc Monte Carlo simulation system is used to simulate CT imaging, and the simulated data are validated against real experimental data acquired under the same conditions; second, the scattered X-ray image simulated by EGSnrc is transformed into the ICA (independent component analysis) domain to obtain the main magnitude of the scattering data; third, a noise-reduction algorithm based on ICA-domain shrinkage is applied to smooth the CT image; fourth, conventional linear deconvolution follows. The simulation results show that the reconstructed image is dramatically improved compared with that obtained without the noise-removing filters, and the proposed method is also applied to real experimental X-ray imaging.
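The shrinkage step in the third stage can be illustrated with a minimal sketch: transform-domain coefficients of small magnitude are assumed to be noise and are shrunk toward zero, while large (signal) coefficients are mostly preserved. The soft-threshold rule below is a generic stand-in; the paper applies shrinkage to independent components rather than raw samples, and the threshold value here is arbitrary.

```python
# Generic soft-thresholding shrinkage, the operation underlying
# transform-domain (here: ICA-domain) denoising:
#   shrink(c) = sign(c) * max(|c| - t, 0)

def soft_threshold(coeffs, t):
    out = []
    for c in coeffs:
        m = abs(c) - t
        out.append(0.0 if m <= 0 else (m if c > 0 else -m))
    return out

# small coefficients (noise) vanish; large ones (signal) shrink slightly
noisy = [0.1, -0.3, 5.0, -4.2, 0.05]
clean = soft_threshold(noisy, 0.5)
```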
A vision-based Augmented Reality computer-assisted surgery navigation system is presented in this paper. It applies the Augmented Reality technique to a surgery navigation system, so the surgeon's vision of the real world is enhanced. In the system, camera calibration is used to calculate the camera's projection matrix, and virtual-real registration is then performed using this transformation relation. The merging of synthetic 3D information into the user's view is realized by texture mapping. The experimental results demonstrate the feasibility of the system we have designed.
A novel method for automatic segmentation of cerebral hemorrhage (CH) in computed tomography (CT) images is presented in this paper. It uses an expert system that models human knowledge about the CH segmentation problem. The algorithm follows a series of dedicated steps and extracts easily overlooked CH features identified from statistics over a large set of real CH images, such as region area, region CT number, region smoothness, and statistical relationships among CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these features, a decision tree that models human knowledge about the CH segmentation problem is built, which ensures the soundness and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and reasonableness of the automatic segmentation; its high accuracy and fast speed make it practical for wide application.
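A decision tree over region features of this kind can be sketched as nested rules. The thresholds below are hypothetical placeholders, not the values used in the paper; the only grounded fact is that acute hemorrhage is typically hyperdense on CT (roughly 50-90 HU), while calcified structures are denser still.

```python
# Illustrative decision tree over candidate-region features
# (area, mean CT number in Hounsfield units, smoothness in [0, 1]).
# All threshold values are hypothetical placeholders.

def classify_region(area_mm2, mean_hu, smoothness):
    """Return 'hemorrhage', 'calcification', or 'other' for a region."""
    if mean_hu < 50:
        return "other"            # too dark to be acute blood
    if mean_hu > 100:
        return "calcification"    # bone/calcium density range
    if area_mm2 >= 20 and smoothness >= 0.4:
        return "hemorrhage"
    return "other"

label = classify_region(area_mm2=120, mean_hu=70, smoothness=0.8)
```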
Wind retrieval from single-Doppler radar data is important for severe-weather forecasting. A comparison was made between the VVP wind-retrieval method and an advanced simple adjoint model. Retrieval results for a typhoon and a rainstorm indicate that the two methods each have their own advantages and disadvantages, because their assumed conditions and processing differ. This work is valuable for the application of retrieved wind fields in research and operations.
The forward problem of fluorescence molecular tomography (FMT), usually described by two coupled diffusion equations corresponding to the excitation and emission light respectively, is typically solved in a sequential manner. However, sequential computation often limits the speed of FMT image reconstruction. In this paper, a novel parallel forward-computation algorithm is proposed in conjunction with a reconstruction algorithm based on an adaptively refined mesh, in which prior information obtained from other imaging modalities can easily be incorporated during mesh generation. The experimental results and the accompanying discussion demonstrate that the proposed parallel forward-computation and reconstruction strategies based on the adaptively refined mesh significantly improve FMT reconstruction in terms of both speed and final image quality.
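The idea of running forward solves concurrently rather than sequentially can be sketched as below. The two "solver" functions are stand-ins, not the paper's diffusion solvers; note that in FMT the emission solve depends on the excitation field, so in practice parallelism is organized so that dependent stages still see consistent data (e.g., across mesh subdomains or source positions).

```python
# Sketch: dispatch two independent forward computations to worker threads
# and check the combined result matches the sequential computation.

from concurrent.futures import ThreadPoolExecutor

def solve_excitation(n):
    # placeholder for a diffusion-equation solve on an n-node mesh
    return sum(i * i for i in range(n))

def solve_emission(n):
    # placeholder for the emission-light solve
    return sum(3 * i for i in range(n))

with ThreadPoolExecutor(max_workers=2) as pool:
    f_ex = pool.submit(solve_excitation, 1000)
    f_em = pool.submit(solve_emission, 1000)
    parallel_result = (f_ex.result(), f_em.result())

sequential_result = (solve_excitation(1000), solve_emission(1000))
```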
It is very important for physicians to accurately determine breast tumor location, size, and shape in ultrasound images. The precision of breast-tumor volume quantification relies on accurate segmentation of the images. Given the known location and orientation of the ultrasound probe, we propose using freehand three-dimensional (3D) ultrasound to acquire images of the breast tumor and the surrounding tissue in real time. After preprocessing with anisotropic diffusion filtering, segmentation is performed slice by slice in the image stack using the level set method. For each slice, the user can adjust the parameters to fit the specific image and obtain a satisfactory result. Through the quantification procedure, the user can see how the tumor size varies across the images in the stack. Surface rendering and interpolation are used to reconstruct the 3D breast-tumor image, and the tumor volume is constructed from the segmented contours in the image stack; after segmentation, the volume of the breast tumor in the 3D data can be obtained.
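The anisotropic diffusion preprocessing step can be illustrated with a minimal 1-D Perona-Malik sketch (the paper applies it to 2-D slices): the conduction coefficient is near 1 where the local gradient is small, so noise is smoothed, and near 0 across strong edges, so boundaries are preserved. The parameter values below are illustrative.

```python
# 1-D Perona-Malik anisotropic diffusion sketch: smooth noise in flat
# regions while leaving the large step edge essentially untouched.

import math

def anisotropic_diffusion_1d(signal, k=10.0, dt=0.2, steps=20):
    s = list(signal)
    for _ in range(steps):
        new = list(s)
        for i in range(1, len(s) - 1):
            d_r = s[i + 1] - s[i]
            d_l = s[i - 1] - s[i]
            c_r = math.exp(-(d_r / k) ** 2)   # conduction ~1 in flat areas
            c_l = math.exp(-(d_l / k) ** 2)   # conduction ~0 across edges
            new[i] = s[i] + dt * (c_r * d_r + c_l * d_l)
        s = new
    return s

# noisy flat region, a step edge of height ~100, then another noisy region
noisy_step = [0, 2, 0, 2, 0, 100, 102, 100, 102, 100]
smoothed = anisotropic_diffusion_1d(noisy_step)
```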
The effectiveness of the Hilbert scan for lossless medical image compression is discussed. In our method, the pixels of a medical image are first decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using a Hilbert scan; finally, five coding schemes are implemented: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that DPCM followed by a Hilbert scan and then arithmetic coding gives the best compression result, and indicate that the Hilbert scan can enhance pixel locality and effectively increase the compression ratio.
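The two reordering/decorrelation stages can be sketched as follows: a Hilbert scan visits pixels so that consecutive samples are always spatial neighbours (which is why it enhances pixel locality), and DPCM then stores differences between consecutive samples. The entropy-coding stage (Huffman, arithmetic, ...) is omitted here.

```python
# Hilbert scan + DPCM sketch. hilbert_d2xy is the standard index-to-
# coordinate mapping for an n x n Hilbert curve (n a power of two).

def hilbert_d2xy(n, d):
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    n = len(image)
    return [image[y][x] for x, y in (hilbert_d2xy(n, d) for d in range(n * n))]

def dpcm(samples):
    return [samples[0]] + [samples[i] - samples[i - 1]
                           for i in range(1, len(samples))]

# smooth 4x4 test image: intensity = 10 + x + y
image = [[10 + x + y for x in range(4)] for y in range(4)]
residuals = dpcm(hilbert_scan(image))
```

Because consecutive Hilbert samples are 4-neighbours, the DPCM residuals of a smooth image stay small, which is exactly what helps the entropy coder.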
The main disadvantage of optical tomography is its low spatial resolution. To overcome this, a new combined reconstruction algorithm is proposed in which MRI anatomical structural information is incorporated into optical tomography based on the transport equation. In the reconstruction process, the structural information from MRI is used to determine the initial distribution of optical properties, and the specific steps are presented in detail. The Levenberg-Marquardt algorithm is introduced to optimize the objective function, in which the values of the hyperparameters are critical to the reconstructed results. In the experiments, phantoms simulating breast tissue are used to verify the algorithm, and the corresponding images reconstructed with and without the MRI prior information are given. The impacts of the hyperparameters and of the number of source-detector pairs on the reconstructed results are then discussed, and a few criteria are presented to evaluate the quality of the reconstructed images. Finally, comparison of the reconstructed results shows that the new reconstruction algorithm with MRI prior information achieves more accurate reconstructions and improves both the spatial resolution of the images and the localization of the abnormal region.
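The Levenberg-Marquardt update can be illustrated on a deliberately tiny problem: fitting the one-parameter model y = a·x. This shows only the damped Gauss-Newton step (J^T J + λ)^{-1} J^T r; the paper's objective involves the full tomographic forward model and hyperparameters.

```python
# One-parameter Levenberg-Marquardt sketch for the model y = a * x.
# With a scalar parameter, J^T J and J^T r reduce to sums.

def lm_fit(xs, ys, a0=0.0, lam=1e-3, iters=10):
    a = a0
    for _ in range(iters):
        r = [y - a * x for x, y in zip(xs, ys)]      # residuals
        jtj = sum(x * x for x in xs)                 # J^T J (scalar)
        jtr = sum(x * ri for x, ri in zip(xs, r))    # J^T r
        step = jtr / (jtj + lam)                     # damped normal equation
        a += step
        if abs(step) < 1e-12:
            break
    return a

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]    # data generated with a = 2
a_hat = lm_fit(xs, ys)
```

The damping term λ trades off Gauss-Newton speed against gradient-descent robustness, which is why the hyperparameter values matter so much in the full problem.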
In this paper, we propose a new MAP method better suited to low signal-to-noise-ratio (SNR) measurements. We take the projection space as a Gibbs random field; under this assumption, a new prior is defined that is not limited to a small neighborhood region. The hyperparameter of the penalty is chosen by maximum-likelihood estimation, and a filtering scheme is applied in the proposed method to control the reconstruction results. The proposed method was applied to reconstruct both simulated data and real clinical data, and the results are discussed. Future work is outlined at the end of the paper.
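The MAP form with a Gibbs penalty can be illustrated with a toy 1-D example: minimize a data-fidelity term plus a weighted smoothness penalty by gradient descent. This only shows the shape of the objective; the paper's prior lives on the projection space and, unlike this sketch, is not restricted to nearest neighbours.

```python
# Toy MAP estimate with a quadratic Gibbs smoothness prior in 1-D:
#   minimize  sum_i (x_i - y_i)^2  +  beta * sum_i (x_i - x_{i-1})^2

def map_denoise(y, beta=1.0, step=0.1, iters=200):
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [2.0 * (x[i] - y[i]) for i in range(n)]   # data-fidelity gradient
        for i in range(n):
            if i > 0:
                g[i] += 2.0 * beta * (x[i] - x[i - 1])
            if i < n - 1:
                g[i] += 2.0 * beta * (x[i] - x[i + 1])
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

def objective(x, y, beta):
    data = sum((a - b) ** 2 for a, b in zip(x, y))
    smooth = sum((x[i] - x[i - 1]) ** 2 for i in range(1, len(x)))
    return data + beta * smooth

y = [0.0, 3.0, 0.5, 2.5, 0.0, 3.0]   # noisy measurements
x = map_denoise(y)
```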
Region counting is a concept from computer graphics and image analysis that has recently found many applications in the medical field. Existing region-counting algorithms are mostly based on filling methods; although filling algorithms have been much improved, their speed when used to count regions is not satisfactory. A region-counting algorithm based on a region-labeling automaton is proposed in this paper. By tracing the boundaries of the regions, the number of regions can be obtained quickly, and the proposed method was found to be the fastest while requiring less memory.
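For reference, the filling-based baseline that such methods improve on can be sketched as follows: count 4-connected foreground regions by flood-filling from each unvisited foreground pixel. The paper's automaton instead traces region boundaries, avoiding the full-region fill.

```python
# Filling-based region counting baseline: flood-fill each unvisited
# foreground pixel and count how many fills were started.

def count_regions(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] == 1 and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
    return count

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
n_regions = count_regions(grid)
```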
In local elastic transformation of images, compactly supported radial basis functions are used to implement the elastic deformation. The deformation area is determined by the support of the radial basis function; however, how to choose the support based on the spatial distribution of the landmarks remains an unresolved problem. In this paper, the relation between the support and the spatial locations of three landmarks is analyzed for the Wendland radial basis function using a simple triangle structure. Moreover, for a landmark set, a Delaunay triangulation is constructed to obtain the support for each triangle, and the optimal support of the radial basis function is chosen as the maximum over the triangles. Experiments on artificial and medical images show the feasibility of our conclusions.
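The compact support property that makes the deformation local can be seen directly in the Wendland C2 function, ψ(r) = (1 − r/s)⁴(4r/s + 1) for r < s and exactly 0 for r ≥ s: moving one landmark only deforms the image within distance s of it.

```python
# Wendland C2 compactly supported radial basis function with support s:
# psi(0) = 1, psi decreases smoothly, and psi(r) = 0 exactly for r >= s.

def wendland_c2(r, s):
    if r >= s:
        return 0.0
    q = r / s
    return (1.0 - q) ** 4 * (4.0 * q + 1.0)

support = 10.0
values = [wendland_c2(r, support) for r in (0.0, 5.0, 10.0, 15.0)]
```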
Boundary extraction and segmentation of the region of interest (ROI) in medical images is an important prerequisite for analyzing, understanding, and processing the images. Since the snake model was proposed, it has been widely used for object contour detection and tracking in the field of computer vision. In traditional algorithms, the manually initialized snake curve is inaccurate, the curve is easily attracted by complex background, and the computation time is high. To overcome these shortcomings, this paper proposes a boundary-extraction model based on region growing and the snake model for medical images with irregular regions and complex features. First, an improved adaptive region-growing algorithm is used to extract an approximate boundary; the region boundary is then divided into four sub-boundaries, which are sampled so that points at positions of large curvature are kept and the samples remain balanced among the sub-boundaries. Finally, these sampled points are taken as the input for contour searching and tracking in the snake model, in which the internal and external energy functions of the traditional snake model are improved and discretized. The experimental results show that the new algorithm can detect the contours and deep boundary concavities of complex or malformed objects.
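The first stage, intensity-based region growing, can be sketched as below: starting from a seed pixel, 4-connected neighbours are absorbed while their intensity stays within a tolerance of the seed value. The paper's version is adaptive; a fixed tolerance is used here for brevity.

```python
# Region-growing sketch with a fixed intensity tolerance around the seed.

def region_grow(image, seed, tol=10):
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    stack = [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in region \
                    and abs(image[ni][nj] - seed_val) <= tol:
                region.add((ni, nj))
                stack.append((ni, nj))
    return region

# bright 2x2 structure (intensity ~100) on a dark background (~30)
image = [[100, 102,  30,  30],
         [ 98, 101,  31,  29],
         [ 30,  30,  30,  30]]
bright = region_grow(image, (0, 0))
```

The boundary of the grown region would then be sampled and passed to the snake model for refinement.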
In this paper, a coarse region segmentation of liver cancer in ultrasound images is introduced. Coarse region segmentation is employed to reflect the inhomogeneous distribution of the image gray levels and to provide features such as the distribution, shape, and size of the suspected liver-cancer region. Combined with prior knowledge, the image can then be divided into three classes, and the analysis of each region's location can be used by a classifier within a multilayer classification scheme. Furthermore, the result of the coarse region segmentation supports texture analysis for further classification. The segmentation is based on the watershed algorithm in order to obtain integral regions, and two processing techniques are adopted to avoid the over-segmentation typical of the watershed algorithm.
In this paper, several image-enhancement methods for B-mode ultrasound images, including linear transformation, histogram equalization, Brightness-Preserving Bi-Histogram Equalization (BBHE), Minimum Mean Brightness Error Bi-Histogram Equalization (MMBEBHE), and fuzzy enhancement, were compared with one another. Based on subjective evaluation by human vision and on feature extraction from the ROI after enhancement, the advantages and disadvantages of each method were identified. Furthermore, the best-performing method, the fuzzy enhancement algorithm, was applied in our study, and its impact on feature extraction was assessed by comparing results with and without fuzzy enhancement.
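Plain histogram equalization, the baseline among the compared methods, can be sketched as follows: gray levels are remapped through the normalized cumulative histogram, so a low-contrast image ends up using the intensity range more evenly (BBHE and MMBEBHE refine this by equalizing two sub-histograms to preserve mean brightness).

```python
# Histogram equalization sketch: remap each gray level through the
# scaled cumulative histogram (CDF).

def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    lut = [round((levels - 1) * c / n) for c in cdf]
    return [lut[p] for p in pixels]

# a dark, low-contrast image: all intensities squeezed into 100..103
dark = [100] * 4 + [101] * 4 + [102] * 4 + [103] * 4
flat = equalize(dark)
```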
Accurate lung segmentation from high-resolution CT images is a challenging task due to intricate tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding, but its results are usually rough. A unified geometric active contour model based on level sets is proposed for lung segmentation in this paper. In particular, the method combines local boundary information with a region statistics-based model: 1) the boundary term ensures the integrity of the lung tissue; 2) the region term makes the level set function evolve according to global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which allows the level set function to evolve without re-initialization. The method is found to be much more efficient for lung segmentation than methods based on boundary or region information alone. Results are shown by 3D lung-surface reconstruction, indicating that the method can play an important role in the design of computer-aided diagnosis (CAD) systems.
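The gray-level threshold baseline mentioned above is commonly realized with Otsu's method, sketched here: choose the threshold that maximizes the between-class variance of the intensity histogram (dark lung air versus brighter tissue). This illustrates the baseline, not the paper's level-set model.

```python
# Otsu thresholding sketch: scan all thresholds and keep the one that
# maximizes between-class variance w0 * w1 * (mu0 - mu1)^2.

def otsu_threshold(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (total - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t   # foreground = pixels > best_t

# bimodal data: dark "lung" mode near 20-25, bright "tissue" mode near 200-210
pixels = [20] * 50 + [25] * 50 + [200] * 40 + [210] * 40
t = otsu_threshold(pixels)
```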
In this paper, a feature analysis of liver ultrasound images, including normal liver, liver cancer (especially hepatocellular carcinoma, HCC), and other liver diseases, is discussed. According to its classification, primary hepatocellular carcinoma is divided into four types. Fifteen features are extracted from first-order gray-level statistics, the gray-level co-occurrence matrix (GLCM), and the gray-level run-length matrix (GLRLM). Experiments on discriminating each type of HCC, normal liver, fatty liver, angioma, and hepatic abscess were conducted, and features that can potentially discriminate among them were found.
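Two of the standard GLCM texture features can be sketched as below for a horizontal offset of one pixel: energy (sum of squared co-occurrence probabilities) and contrast (intensity-difference-weighted sum). A uniform patch has a single co-occurrence entry, so its energy is 1 and contrast 0; textured patches score lower energy.

```python
# GLCM energy and contrast for the pixel offset (0, 1) (horizontal pairs).

def glcm_features(image):
    counts, total = {}, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    energy = sum((c / total) ** 2 for c in counts.values())
    contrast = sum((a - b) ** 2 * c / total for (a, b), c in counts.items())
    return energy, contrast

uniform = [[3, 3, 3], [3, 3, 3]]   # no texture
checker = [[0, 1, 0], [1, 0, 1]]   # strong texture
e_u, c_u = glcm_features(uniform)
e_c, c_c = glcm_features(checker)
```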
A digital X-ray imaging system is composed of an X-ray generator and a digital image acquisition system. In this paper, we designed a wireless trigger-pulse generation circuit, the detection trigger board, to capture images accurately by establishing synchronization between the X-ray generator and the digital image acquisition system, and we analyzed its performance in comparison with the conventional method.
Two pulses are generated in this design: the ACQ_START pulse, which indicates the detection of X-ray radiation from the generator, and the ACQ_END pulse, which indicates that the radiation has ceased. These pulses trigger the image acquisition system of the digital X-ray imaging system to start or stop image capture. A Geiger tube is used to detect the X-ray radiation in the air, and image acquisition is active only during the interval between the ACQ_START and ACQ_END signals.
By detecting the X-ray signal in the air and generating the trigger pulses, we obtain more accurate timing for capturing the X-ray image. Moreover, because no cable needs to be installed between the X-ray generator and the digital image acquisition system, installation is very easy; in addition, any type of X-ray generator can be used without incompatibility.
In our experiment, we captured images of a resolution chart to compare the results. We obtained a resolution of 3.5 line pairs/mm at an X-ray exposure of 20 mAs, an image equal to or better than that obtained in the conventional way.
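The trigger logic can be sketched as a simple threshold state machine over the sampled Geiger-tube count rate: the rising edge emits ACQ_START and the falling edge emits ACQ_END, bracketing the interval during which frames are captured. The threshold and sample values below are illustrative placeholders, not the board's actual parameters.

```python
# Threshold state machine turning a sampled count-rate stream into
# ACQ_START / ACQ_END trigger events.

def trigger_events(samples, threshold=5):
    events = []
    exposing = False
    for i, s in enumerate(samples):
        if not exposing and s >= threshold:
            events.append(("ACQ_START", i))
            exposing = True
        elif exposing and s < threshold:
            events.append(("ACQ_END", i))
            exposing = False
    return events

# counts per sampling interval: quiet, exposure burst, quiet again
counts = [0, 1, 0, 9, 12, 11, 10, 1, 0, 0]
events = trigger_events(counts)
```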
In medical image processing, image degradation often occurs. Image restoration aims to recover the original image from its noisy and blurred version. A restoration approach for medical images using cultural algorithms is presented in this paper. First, the image degradation model is formulated. Second, an image is encoded as an individual and the fitness of an individual is defined. An algorithm based on the principles of cultural algorithms is then presented for obtaining the ideal image from the blurred one. The algorithm consists of the population space, the belief space, and the communication protocol that describes how knowledge is exchanged between the population space and the belief space. A few types of knowledge, such as situational knowledge and normative knowledge, are used, and images of better quality are obtained through the evolution of the populations. The experimental results show that the proposed restoration approach can obtain good approximations of the original image.
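The evolutionary core of such an approach can be shown with a drastically simplified sketch: an individual encodes an image (here a short vector), fitness is the negative squared error against the observed data, and mutation proposes small changes, keeping improvements. The belief-space machinery that distinguishes cultural algorithms from plain evolutionary search is omitted here.

```python
# (1+1)-style evolutionary restoration sketch: mutate the current best
# candidate and keep it only if its fitness improves.

import random

def fitness(candidate, observed):
    return -sum((c - o) ** 2 for c, o in zip(candidate, observed))

def restore(observed, generations=200, seed=1):
    rng = random.Random(seed)
    best = [0.0] * len(observed)
    best_fit = fitness(best, observed)
    for _ in range(generations):
        child = [v + rng.uniform(-1.0, 1.0) for v in best]   # mutation
        child_fit = fitness(child, observed)
        if child_fit > best_fit:                             # elitism
            best, best_fit = child, child_fit
    return best, best_fit

observed = [4.0, 2.0, 7.0, 5.0]
restored, final_fit = restore(observed)
```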
Real-time needle segmentation and tracking is very important in image-guided surgery, biopsy, and therapy. In this paper, we describe an automated technique that provides real-time needle segmentation from a sequence of 2-D ultrasound images for guiding a needle to a target in soft tissue. The Hough transform is used to find straight lines or analytic curves and is usually applied to binary images, so the gray-level image must first be converted to a binary one (through thresholding, edge detection, or thinning). During binarization, however, information about line segments may be lost if an inappropriate threshold is used. The gray-scale Hough transform can detect lines without binarization, but its high computational cost often prevents its use in real-time applications without specially designed hardware. In this paper, we propose a needle segmentation technique based on a real-time gray-scale Hough transform, composed of an improved gray-scale Hough transformation and a coarse-to-fine search strategy. The RTGHT (Real-Time Gray-Scale Hough Transform) technique is evaluated on patient breast-biopsy ultrasound (US) image sequences. The experiments showed that our approach can segment the biopsy needle in real time (i.e., in less than 60 ms) with an angular rms error of about 1° and a positional rms error of about 0.5 mm on an affordable PC without specially designed hardware.
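The standard Hough transform that such methods accelerate can be sketched as follows: each foreground pixel votes for all (θ, ρ) line parameters passing through it, and the accumulator maximum gives the dominant straight line (the needle). The gray-scale variant weights each vote by pixel intensity instead of binarizing first; the coarse accumulator resolution here is illustrative.

```python
# Hough transform sketch: vote in a (theta, rho) accumulator over the
# parameterization rho = x*cos(theta) + y*sin(theta).

import math

def hough_peak(points, theta_steps=36):
    acc = {}
    for x, y in points:
        for k in range(theta_steps):
            theta = k * math.pi / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(k, rho)] = acc.get((k, rho), 0) + 1
    (k, rho), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.degrees(k * math.pi / theta_steps), rho, votes

# simulated needle: pixels lying on the line y = x
needle = [(i, i) for i in range(20)]
theta_deg, rho, votes = hough_peak(needle)
```

For the line y = x, every point satisfies ρ = 0 at θ = 135°, so that accumulator bin collects all the votes.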
3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases among Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has been developed to prevent accidents or patient death caused by inaccurate localization of the electrode and the tumor during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, to provide needle segmentation in 3D US images for 3D US imaging guidance. Based on the (Φ, θ, ρ, α) representation of straight lines in 3D space, the 3DHT algorithm segments needles successfully, assuming that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information about the 3D US images. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms. The experiments demonstrated the feasibility of the two 3D needle segmentation algorithms described in this paper.
In this paper, a virtual breast plastic surgery planning method is proposed, which reconstructs the breast after excision for diseases such as cancer. To achieve a rational result, we calculate the shape, area, volume, and depth of the skin and muscle needed for the reconstruction, based on the other, healthy breast. The steps are as follows: 1) input the patient's breast MRI data; 2) segment the healthy breast using a balloon segmentation algorithm and generate a triangle mesh of the breast surface; 3) flatten the triangulated breast skin using a deformable model to obtain the shape and volume of the flap for breast reconstruction. Supporting methods such as mesh smoothing and cutting of triangulated surfaces are also introduced. A validation and evaluation process by doctors is also provided to ensure robust and stable virtual surgery planning results.
A new enhancement to the Stochastic Active Contour Scheme (STACS) for image segmentation is proposed using Principal Component Analysis (PCA). STACS is a method developed for segmentation of cardiac Magnetic Resonance Imaging (MRI) images and is based on the level set method, in which the contour is driven by the minimization of an energy functional with four terms: region-based, edge-based, shape-prior, and curvature. STACS derives each of these forces from the original image that is to be segmented. In our method, PCA is performed on the entire set of eight images of the same slice of the heart taken at different instants in the cardiac cycle, and each image is then segmented separately. The various terms of the energy functional in this new scheme are obtained from different principal components (eigenvectors). Thus, STACS is improved by weighting each term in the energy functional with the principal component that gives the most accurate result. Experimental results are presented for the proposed scheme.
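The PCA step above, treating each temporal frame of the slice as one observation and each pixel as a variable, can be sketched as follows. This is a generic sketch assuming numpy, not the authors' implementation; with only eight frames it is cheaper to eigendecompose the small 8x8 Gram matrix than the full pixel covariance.

```python
import numpy as np

def pca_components(images):
    """PCA over a stack of images of the same slice: each image is one
    observation, pixels are variables. Returns eigenvalues and the
    (unnormalized) principal components in pixel space, sorted by
    decreasing variance explained."""
    X = np.array([im.ravel() for im in images], dtype=float)
    X -= X.mean(axis=0)              # center each pixel across time
    gram = X @ X.T                   # small (frames x frames) matrix
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1]
    comps = (X.T @ vecs[:, order]).T  # back-project to pixel space
    return vals[order], comps

# eight synthetic 4x4 "frames" of one cardiac slice
rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) for _ in range(8)]
vals, comps = pca_components(frames)
```

Each row of `comps` is one principal component; in the paper's scheme, different components would feed different terms of the energy functional.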
In this paper, a novel level set approach is proposed for segmentation of medical ultrasound images. Considering the speckle noise and low contrast of medical ultrasound images, we add an extra stopping term to the formulation of level set evolution without re-initialization (LSEWR). Compared with traditional active contour models, the proposed level set approach has more flexible initialization and a larger capture range. It is insensitive to the initial contour, and a larger time step can be used. The initial contour can simply be a circle or rectangle, thus achieving semi-automatic segmentation of medical ultrasound images. The experimental results show that the proposed method can be used for semi-automatic, high-quality segmentation of medical ultrasound images.
3-D reconstruction of CT images is an important part of 3-D visualization in medical image processing and can support clinical diagnosis and treatment. Because of the complexity of reconstruction algorithms and the huge amounts of data involved, it is critical to select a convenient and highly efficient development tool. IDL has many advantages for medical image processing: it can directly read CT DICOM files, implements visualization rapidly on a PC, and is easy for general users to operate. A method of 3-D reconstruction of CT medical images is presented and illustrated with an example.
In this paper, a novel and reliable approach is proposed to visualize three-dimensional (3D) brain atlases for image-guided neurosurgery. Since existing atlases are either 2D or 3D, we first apply nonlinear interpolation to the digitized 2D TT atlas [3] and pre-register it to a reference MRI dataset with a defined AC-PC coordinate system. Meanwhile, we apply Fast Marching and Morphological Reconstruction segmentation to the same reference MRI data to create a 3D atlas. The two atlases are thereby indirectly registered together. Then, the anatomical names of the ROIs (Regions of Interest) are labeled according to the gray values of the atlases. Finally, the 3D visualization of the atlases is implemented and integrated into the neurosurgical operating system. The system has been tested by a neurosurgeon and found useful for clinical application.
The point spread function (PSF) parameters of the imaging system are often not known a priori in super-resolution enhancement applications. In our super-resolution algorithm, we identify the PSF and regularization parameters from the raw data using the generalized cross-validation (GCV) method. Motivated by the success of GCV in identifying optimal smoothing parameters for image restoration, we have extended the method to the problem of estimating blur parameters. To reduce the computational complexity of GCV, we propose efficient approximation techniques based on the Arnoldi process, which yields a small, condensed Hessenberg matrix together with an orthonormal basis of the Krylov subspace. Experiments are presented that demonstrate the effectiveness and robustness of our method.
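The Arnoldi process itself is standard and can be sketched directly (this is the textbook iteration, not the paper's GCV approximation): k steps build an orthonormal Krylov basis Q and a small (k+1) x k upper-Hessenberg matrix H satisfying A Q_k = Q_{k+1} H.

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of the Arnoldi process with modified Gram-Schmidt:
    returns Q (n x (k+1), orthonormal columns spanning the Krylov
    subspace of b) and H ((k+1) x k, upper Hessenberg) with
    A @ Q[:, :k] == Q @ H."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):           # orthogonalize against basis
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] == 0:             # exact breakdown: Krylov space exhausted
            break
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

A = np.diag([4.0, 3.0, 2.0, 1.0]) + 0.1   # small symmetric test matrix
b = np.ones(4)
Q, H = arnoldi(A, b, 3)
```

The condensed H is what makes GCV-style computations cheap: quantities involving A can be approximated in the small Krylov coordinates instead of the full image space.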
We propose a new optimization model for non-rigid registration of images using multiple metrics. The ordinary search step of optimization is often trapped in local minima, producing wrong registration results. In our model, when this occurs, the multi-metric model switches to another metric to escape the local minimum, and vice versa, until the optimization can no longer proceed for any of the metrics. We have tested our approach under a variety of experimental conditions and compared the results with optimization without multiple metrics. The results indicate that the new model is robust and fast for non-rigid registration.
An algorithm is proposed to calculate the left ventricular volume from filled-pixel counts. First, a closed B-spline curve is used to represent the outline of each left-ventricular section. The left ventricle and a regular three-dimensional object of known volume are then filled in the same display mode, and finally the left ventricular volume is obtained from the ratio of the two objects' filled-pixel counts. Filling the left ventricle requires three methods: finding the intersection points of a horizontal plane with the B-spline curves, finding the extremum points of a B-spline curve, and filling a closed B-spline curve. Taking results from CMRI as a reference, the algorithm in this paper is shown to be more valid and reliable than that of COMPACT4.2.
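The final ratio step is simple enough to state as code. The counts and reference volume below are hypothetical values for illustration; only the proportionality V_lv = V_ref * n_lv / n_ref comes from the abstract.

```python
def volume_by_pixel_ratio(filled_lv, filled_ref, ref_volume):
    """Estimate an unknown volume from filled-pixel counts, using a
    reference object of known volume rendered in the same display
    mode: V_lv = V_ref * n_lv / n_ref."""
    return ref_volume * filled_lv / filled_ref

# hypothetical counts: 120,000 filled pixels for the ventricle versus
# 80,000 for a reference object of 100 ml
v = volume_by_pixel_ratio(120_000, 80_000, 100.0)   # -> 150.0 ml
```

Rendering both objects in the same display mode is what makes the pixel-to-volume scale factor cancel out of the ratio.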
The development of electronic medical archives requires mosaicking medical microscopic images into a single whole, and the stitching result is usually a massive file that is hard to store and access. This paper proposes a file format named Medical TIFF to organize the massive microscopic image data. Medical TIFF organizes the image data in tiles, appends a thumbnail of the result image at the end of the file, and offers a way to add medical information to the image file. The paper then designs a three-layer system to access the file: the Physical Layer gathers the Medical TIFF components dispersed over the file and organizes them hierarchically, the Logical Layer uses a two-dimensional dynamic array to manage the tiles, and the Application Layer provides the interfaces for applications developed on top of the system.
This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach has been applied to kidney segmentation using the Chan-Vese model (C-V model) and prior anatomical knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated by the level set method, is applied to these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. Satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model while overcoming its disadvantages.
A new interpolation method based on a multi-resolution technique is presented and used for medical image zooming. The aim of this work is to exploit the similarity of adjacent sub-bands produced by the Discrete Wavelet Transform (DWT) to enhance the accuracy of the interpolation. First, the original image is decomposed into sub-bands by the DWT; second, the similarity between adjacent sub-bands is used to estimate the high-frequency components; third, the original image is used as the low-frequency component and the inverse DWT is applied to obtain the final interpolation result. Experimental results on magnetic resonance (MR) images and positron emission tomography (PET) images illustrate the effectiveness of the proposed method.
Electroencephalogram (EEG) signals recorded during motor imagery tasks can be used to move a cursor to a target on a computer screen. Such an EEG-based brain-computer interface (BCI) can provide a new communication channel for subjects with neuromuscular disorders. To achieve higher speed and accuracy and so enhance the practical applications of BCI in computer-aided medical systems, an ensemble classifier is used for single-trial classification. The ERDs at electrodes C3 and C4 are calculated and stacked together into the feature vector for the ensemble classifier, which is based on Linear Discriminant Analysis (LDA) and the Nearest Neighbor (NN) rule and additionally takes feedback into account. This method was successfully applied to data set III of the 2003 international BCI data analysis competition. The results show that the ensemble classifier achieves a recognition rate of 90% on average, which is 5% and 3% higher than LDA and NN used separately; moreover, it outperforms LDA and NN over the whole time course. With adequate recognition accuracy, ease of use, and clear interpretability, the ensemble classifier can meet the timing requirements of single-trial classification.
Ultrasonic estimation of fetal weight before delivery is of great significance in the obstetrical clinic. Estimating fetal weight accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth, and reducing the risk of newborn complications. In this paper, we introduce a method that combines golden section search and an artificial neural network (ANN) to estimate fetal weight. The golden section search is employed to optimize the number of hidden-layer nodes of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation while avoiding the choice of the hidden-layer node number by subjective experience. The estimation coincidence rate reaches 74.19%, and the mean absolute error is 185.83 g.
Three-dimensional (3-D) median filtering is very useful for eliminating speckle noise from medical imaging sources such as functional magnetic resonance imaging (fMRI) and ultrasonic imaging, but it is characterized by high computational complexity: N^3(N^3-1)/2 comparison operations are required for 3-D median filtering with an N×N×N window if the conventional bubble-sort algorithm is adopted. In this paper, an efficient fast algorithm for 3-D median filtering is presented that considerably reduces the computational complexity of extracting the median of a 3-D data array. Compared to the state of the art, the proposed method reduces the computational complexity of 3-D median filtering by 33%, which translates into a lower system delay for software implementations and lower system cost and power consumption for hardware implementations.
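For reference, the baseline operation being accelerated is just the median of an N×N×N neighborhood. The sketch below is the naive version (sort all N^3 samples); the paper's fast algorithm, which exploits structure to cut comparisons by a third, is not reproduced here.

```python
def window_median(volume, cx, cy, cz, n):
    """Median of the n x n x n neighborhood centered at (cx, cy, cz).
    A full bubble sort of the n^3 samples would need
    n^3 * (n^3 - 1) / 2 comparisons, e.g. 27 * 26 / 2 = 351 for a
    3x3x3 window; efficient filters do far fewer by reusing work
    between overlapping windows."""
    r = n // 2
    samples = [volume[x][y][z]
               for x in range(cx - r, cx + r + 1)
               for y in range(cy - r, cy + r + 1)
               for z in range(cz - r, cz + r + 1)]
    samples.sort()
    return samples[len(samples) // 2]

# toy 5x5x5 volume with value x + y + z at each voxel
vol = [[[x + y + z for z in range(5)] for y in range(5)] for x in range(5)]
m = window_median(vol, 2, 2, 2, 3)   # median of the 27 central samples
```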
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. However, the shape of the thyroid gland is irregular and its volume is difficult to calculate. For precise estimation of thyroid volume by ultrasound imaging, this paper presents a novel semi-automatic minutiae-matching method for thyroid gland ultrasound images based on a thin-plate spline model. Registration consists of four basic steps: feature detection, feature matching, mapping function design, and image transformation and resampling. Exploiting the connectivity of the thyroid gland boundary, we choose an active contour model as the feature detector and radial lines from center points for feature matching. The proposed approach has been applied to thyroid gland ultrasound image registration. Registration results on the thyroid gland ultrasound images of 18 healthy adults show that this method consumes less time and effort, with better objectivity, than algorithms that select landmarks manually.
A multi-material segmentation framework that can separate a given 3D image into more than two segments is discussed. First, the original volume image is segmented by a 3D thresholding algorithm; different parts and materials can be obtained by setting appropriate thresholds. Then the segmented parts are filled into a blank 3D image one after another to produce the multi-material segmented image. Finally, the result is visualized by volume rendering. The framework was developed on top of the VTK and ITK libraries. An example of a head CT series segmented by this approach is presented.
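The multi-threshold labeling step can be sketched without any library: each voxel gets the label of the threshold interval its value falls into, and the labels can then be written one after another into a blank volume. The threshold values below are illustrative, not calibrated CT numbers.

```python
def segment_materials(volume, thresholds):
    """Label each voxel by the threshold interval containing its value:
    label 0 below the first threshold, 1 between the first and second,
    and so on (thresholds must be ascending)."""
    def label(v):
        k = 0
        for t in thresholds:
            if v >= t:
                k += 1
        return k
    return [[[label(v) for v in row] for row in slc] for slc in volume]

# toy values split into three "materials" at 300 and 700
vol = [[[100, 400], [800, 200]]]
labels = segment_materials(vol, [300, 700])   # -> [[[0, 1], [2, 0]]]
```

In the framework described above, VTK/ITK would handle the I/O and the volume rendering of the resulting label image.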
The coronary angiogram is an important examination tool in clinical medicine for the precise diagnosis of cardiac disease. It is obtained by injecting the patient with a contrast medium through a catheter. This paper presents a method to increase vessel contrast and attenuate the background. The enhancement is achieved by subtracting the estimated background from a live (contrast-containing) angiogram. Multi-scale morphological opening, with a structuring element of different size for each pixel, is employed to estimate the background. The structuring-element size for each pixel is calculated from the difference between the responses of opening filters applied to the original image with different structuring elements. The proposed algorithm is tested on real x-ray angiograms.
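The background-estimation idea can be illustrated in 1D with a flat, fixed-size structuring element (the paper's per-pixel, multi-scale sizing is not reproduced). Grayscale opening (erosion then dilation) removes bright structures narrower than the element; on an intensity-inverted angiogram, where vessels appear bright, what remains is a background estimate to subtract.

```python
def erode(signal, size):
    """Flat grayscale erosion: windowed minimum."""
    r, n = size // 2, len(signal)
    return [min(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(signal, size):
    """Flat grayscale dilation: windowed maximum."""
    r, n = size // 2, len(signal)
    return [max(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def opening(signal, size):
    """Opening = erosion then dilation: flattens bright structures
    narrower than the structuring element, leaving the background."""
    return dilate(erode(signal, size), size)

# toy profile: flat background 100 with a narrow bright vessel bump
row = [100, 100, 100, 180, 100, 100, 100]
background = opening(row, 3)                      # bump removed
enhanced = [v - b for v, b in zip(row, background)]  # vessel isolated
```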
Blood vessel images have strong morphological characteristics as well as complicated backgrounds. However, traditional boundary extraction algorithms rarely utilize the locally uniform direction of the pixels along the vascular margin and often produce discontinuous edges due to background noise. In this paper a new vascular boundary extraction algorithm based on direction filtering is presented, incorporating techniques such as chaotic filtering, a direction-and-distance matrix, and edge tracing. The experimental results indicate that the algorithm achieves good noise suppression and vessel extraction.
By combining the median characteristics of the neighboring pixels around the interpolation points and their spatial
information, a novel image interpolation algorithm is introduced in this paper. The proposed interpolator first utilizes
both the aggregated gray differences and the spatial distances to compute the weights associated with the neighboring
pixels and then employs a data-adaptive filter to estimate the interpolated pixels. The experimental results demonstrate
the validity of the proposed interpolator by showing significant performance improvements against the conventional
interpolation methods.
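One plausible reading of the weighting scheme above is sketched below; the exact weight formula is hypothetical (the abstract does not give it), but it captures the stated idea that a neighbor's influence decays with both its spatial distance and its gray-level deviation.

```python
def weighted_interpolate(neighbors, distances):
    """Hypothetical data-adaptive weight rule in the spirit of the
    paper: a neighbor's weight decays with its spatial distance and
    with its gray-level difference from the local median, so outliers
    across an edge contribute little to the interpolated pixel."""
    eps = 1e-6                       # avoids division by zero
    s = sorted(neighbors)
    median = s[len(s) // 2]
    weights = [1.0 / ((d + eps) * (abs(g - median) + eps))
               for g, d in zip(neighbors, distances)]
    total = sum(weights)
    return sum(w * g for w, g in zip(weights, neighbors)) / total

# four neighbors at unit distance; one lies across an edge and is
# effectively ignored by the median-difference term
val = weighted_interpolate([100, 100, 100, 200], [1.0, 1.0, 1.0, 1.0])
```

Plain bilinear interpolation would return 125 here and blur the edge; the data-adaptive weights keep the estimate on the majority side.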
Parallel magnetic resonance imaging has emerged as an effective means of reducing scan time for high-speed MRI. We compared four image reconstruction algorithms, SMASH, GRAPPA, SENSE, and PILS, under the same imaging conditions, including receiver coil array configuration, reduction factor, and number of central lines. It is critical to simulate and test which parallel imaging method provides optimum performance before implementing it on a scanner. Reconstruction results on raw data showed that SENSE and GRAPPA provide better image reconstruction overall, and GRAPPA is the best choice as the reduction factor increases. SENSE and PILS, which do not depend on the central lines, are the best choices as the number of central lines decreases, although PILS fails to reconstruct the image when the reduction factor equals the number of coils. The SENSE algorithm has the lowest reconstruction time. The results in this paper can help select the optimal reconstruction algorithm for a given set of imaging parameters.
In this paper, a Topology Resolve-Map (TRM) model is proposed to restore captured colors to their standard values. We select the color range of human skin and mucosa in the L*a*b* color space and choose standard colors from the SG color chart, then perform topological partitioning of the one-dimensional L* space and the two-dimensional a*b* space separately. The standard colors of the SG color chart are taken as the image domain Mi, and the chromaticity values of the SG color chart captured under different conditions as the inverse-image domain Ni. We establish a mapping function F(X): Ni → Mi, applying a linear mapping F(Xl) to the one-dimensional L* space and a triangulation mapping F(Xab) to the two-dimensional a*b* space, and present the corresponding mapping functions. We then compute the restored values F(L'i) and F(a'i, b'i) from the sample values L'i, a'i, b'i measured under various conditions, using F(Xl) and F(Xab), respectively. We use 13 skin colors from the SG color codes and restore these colors from the captured images to their standard values. The differences ΔL, Δab, and ΔE decrease markedly under the TRM model; the mean whole-color difference is 3.77. The experimental results show that the TRM model has good accuracy and stability and significantly outperforms professional software.
Digital subtraction angiography (DSA) is an important technology in both medical diagnosis and interventional therapy: it eliminates the interfering background and gives prominence to blood vessels through computer processing. After contrast material is injected into an artery or vein, a physician produces fluoroscopic images. Using these digitized images, a computer subtracts a mask image made without contrast material from the series of post-injection images, removing the background information. By analyzing the characteristics of DSA medical images, this paper provides an image fusion solution tailored to the DSA subtraction application. We fuse the angiogram and subtraction images to obtain a new image carrying more information. The image fused by wavelet transform displays both the blood vessels and the background information clearly, and medical experts rated its quality highly.
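The subtraction step that produces the DSA input to the fusion can be stated in a few lines (the wavelet fusion itself is not reproduced here; the pixel values are illustrative):

```python
def dsa_subtract(live, mask):
    """Digital subtraction: the pre-injection mask frame is subtracted
    from the contrast-containing live frame, leaving mainly the
    opacified vessels."""
    return [[lv - mk for lv, mk in zip(lrow, mrow)]
            for lrow, mrow in zip(live, mask)]

mask = [[50, 52], [51, 50]]     # pre-injection background frame
live = [[50, 52], [121, 50]]    # same scene with an opacified vessel
sub = dsa_subtract(live, mask)  # -> [[0, 0], [70, 0]]
```

The paper's contribution is to fuse `live` and `sub` so the result keeps the vessel prominence of the subtraction and the anatomical background of the angiogram.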
Medical image 3D reconstruction is an important application field for volume rendering; for this use it requires fast interactive speed and high image quality. The ray casting algorithm (RCA) is a widely used basic volume rendering algorithm: it produces high-quality images, but its rendering speed is very slow because of the heavy computation required. To address these shortcomings, an accelerated ray casting algorithm is presented in this paper to improve rendering speed and apply the method to medical image 3D reconstruction. Firstly, acceleration algorithms for ray casting are studied and compared. Secondly, an improved tri-linear interpolation technique is selected and extended to continuous ray casting, reducing matrix computation by exploiting the matrix-transformation characteristics of the re-sampling points. Then ray-interval casting is used to reduce the number of rays, and a volume-data cropping technique that improves on the bounding-box technique avoids sampling empty voxels. Finally, the combined acceleration algorithm is proposed. The results show that, compared with the standard ray casting algorithm, the accelerated algorithm not only improves rendering speed but also produces images of the required quality.
This paper focuses on symmetry analysis of 2D slices of brain MRI images, which can be used to detect pathological brain slices automatically. The main challenges in this work are the extraction of the geometric symmetry axis (GSA) from both normal and pathological neuroimages, and the quantification of the gray-level-distribution symmetry (GLS) of the brain with respect to the GSA. We present a fast approach that extracts the GSA using a group of assistant parallel lines and verifies the estimated GSA with the resultant moment of gravitational force (RMGF), followed by quantification of the GLS as the correlation between the two hemispheres partitioned by the GSA. Finally, the quantification results are used as a feature to distinguish normal from abnormal brain slices. In our experiments, the mean running time of each symmetry quantification for a 181×217 8-bit 2D MRI image was 0.91 seconds, and the area under the ROC curve for distinguishing normal from abnormal brains was 0.9628, showing that pathological-brain detection in MRI based on this symmetry analysis is fast and effective.
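The correlation-based quantification step can be illustrated with a toy measure (illustrative only; the paper's GSA extraction and RMGF verification are not reproduced, and a vertical axis at the image center is assumed here instead of an estimated GSA):

```python
def symmetry_score(img):
    """Pearson correlation between the left half and the mirrored right half
    of a 2-D grayscale image (list of rows); 1.0 means perfect mirror
    symmetry about the central vertical axis."""
    h, w = len(img), len(img[0])
    half = w // 2  # an odd-width middle column is ignored
    left = [img[r][c] for r in range(h) for c in range(half)]
    right = [img[r][w - 1 - c] for r in range(h) for c in range(half)]
    n = len(left)
    ml, mr = sum(left) / n, sum(right) / n
    cov = sum((a - ml) * (b - mr) for a, b in zip(left, right))
    vl = sum((a - ml) ** 2 for a in left)
    vr = sum((b - mr) ** 2 for b in right)
    if vl == 0 or vr == 0:
        return 1.0 if left == right else 0.0
    return cov / (vl * vr) ** 0.5
```

A low score then flags a slice as a candidate pathological case; in the paper this scalar feeds the ROC analysis quoted above.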
Mammographic mass segmentation plays a crucial role in computer-aided diagnosis (CAD) schemes. In this paper, we propose a segmentation method based on the maximum entropy principle and an active contour model. The method has two main steps. First, the maximum entropy principle is applied to background-trend-corrected regions of interest (ROIs) to obtain initial outlines. Second, an active contour model refines the initial outlines of the masses. The ROIs used in this study were extracted from images in the Digital Database for Screening Mammography (DDSM) provided by the University of South Florida. The preliminary experimental results are encouraging: the segmentation algorithm performs robustly for various types of masses, and overlap-criterion analysis shows that the proposed segmentation results are closer to radiologists' manual segmentations than those of the other methods tested.
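One standard way to turn the maximum entropy principle into an initial outline is Kapur-style threshold selection on the ROI's gray-level histogram; a sketch under that assumption (the paper's background-trend correction and active contour refinement are not shown):

```python
import math

def max_entropy_threshold(hist):
    """Kapur-style maximum-entropy thresholding: pick the gray level t that
    maximizes the summed entropies of the background (levels <= t) and
    object (levels > t) partitions of the histogram."""
    total = sum(hist)
    best_t, best_h = 0, float("-inf")
    for t in range(len(hist) - 1):
        p0 = sum(hist[: t + 1]) / total
        p1 = 1.0 - p0
        if p0 == 0 or p1 == 0:
            continue  # degenerate split, no entropy to measure
        h0 = -sum((c / total / p0) * math.log(c / total / p0)
                  for c in hist[: t + 1] if c)
        h1 = -sum((c / total / p1) * math.log(c / total / p1)
                  for c in hist[t + 1:] if c)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

Thresholding the ROI at the returned level yields a binary mask whose boundary can then seed the active contour.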
In this paper, a novel spatially adaptive wavelet thresholding method based on the Bayesian maximum a posteriori (MAP) criterion is proposed for speckle removal in medical ultrasound (US) images. The method first applies a logarithmic transform to the original speckled ultrasound image, followed by a redundant wavelet transform. It models the speckle wavelet coefficients with a Rayleigh distribution and the signal wavelet coefficients with a Laplacian distribution. A Bayesian estimator with an analytical formula is derived from MAP estimation, and the resulting formula is shown to be equivalent to soft thresholding, which makes the algorithm very simple. To exploit the correlation among wavelet coefficients, the parameters of the Laplacian model are assumed to be spatially correlated and are computed from the coefficients in a neighboring window, making the method spatially adaptive in the wavelet domain. Theoretical analysis and simulation results show that the proposed method effectively suppresses speckle noise in medical US images while preserving important signal features and details as much as possible.
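The soft-thresholding rule that the MAP estimator reduces to is simple enough to state directly (the derivation of the threshold from the Rayleigh and Laplacian parameters is the paper's contribution and is not reproduced; here the threshold t is just a given input):

```python
def soft_threshold(w, t):
    """Soft-thresholding shrinkage: pull a wavelet coefficient w toward zero
    by threshold t, zeroing coefficients whose magnitude is below t."""
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0
```

Applied coefficient-wise in the wavelet domain, small (speckle-dominated) coefficients vanish while large (signal-dominated) ones are only shifted, which is why edges survive the denoising.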
In the digital virtual human project, image data are acquired from frozen slices of a human body specimen. The color and brightness can differ considerably across a group of images of the same organ, which causes great difficulty in edge extraction, segmentation, and 3D reconstruction. It is therefore necessary to unify the color of the images, and the color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
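A common form of color transfer matches each channel's mean and standard deviation in the source image to those of a reference image; a sketch on a single flattened channel (the choice of working color space, e.g. a decorrelated space such as lαβ, is an implementation detail left out here):

```python
def transfer_channel(src, ref):
    """Shift and scale one channel of a source image (flat list of values)
    so its mean and standard deviation match those of a reference channel."""
    n = len(src)
    ms = sum(src) / n
    mr = sum(ref) / len(ref)
    ss = (sum((v - ms) ** 2 for v in src) / n) ** 0.5
    sr = (sum((v - mr) ** 2 for v in ref) / len(ref)) ** 0.5
    scale = sr / ss if ss else 1.0
    # Remove the source statistics, impose the reference statistics.
    return [(v - ms) * scale + mr for v in src]
```

Running this per channel over every slice of an organ, against one chosen reference slice, is the essence of unifying the series' color before segmentation.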
Anterior chamber volume (ACV) is very important for an oculist making a pathological diagnosis for patients with optic diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes from JPEG-format image files converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients is analyzed; the calculated volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation, although the manual preprocessing of the images still needs to be simplified.
Diffuse optical tomography (DOT) reconstructs images of the internal distribution of optical parameters from boundary measurements. Because the number of available boundary measurements is smaller than the number of unknown optical parameters to be recovered, this inverse problem is usually ill-posed, which results in low reconstructed image quality. In this paper, an adaptive regularization method based on objective function values is proposed; it reduces the ill-posedness of the inverse problem by selecting an appropriate regularization value at each iteration. Computer simulations indicate that this regularization technique effectively improves DOT imaging quality and, furthermore, greatly decreases the sensitivity of the reconstructed images to noise.
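One simple way to make the regularization value track the objective is to re-set it each iteration in proportion to the current data misfit, so the penalty relaxes as the fit improves. The following is a scalar toy problem under that assumed rule, not the paper's DOT formulation (the constant c and the closed-form update are illustrative):

```python
def adaptive_ridge(A, b, c=0.1, iters=20):
    """Iteratively solve min_x ||Ax - b||^2 + lam*x^2 for a single unknown x,
    re-choosing lam each iteration as c times the current objective value
    (objective-driven adaptive regularization, scalar sketch)."""
    x = 0.0
    for _ in range(iters):
        obj = sum((a * x - v) ** 2 for a, v in zip(A, b))
        lam = c * obj  # large misfit -> strong damping; good fit -> light damping
        # Closed-form minimizer of the regularized quadratic for this lam.
        x = sum(a * v for a, v in zip(A, b)) / (sum(a * a for a in A) + lam)
    return x
```

As the misfit shrinks, lam tends to zero and the iterates approach the unregularized solution, while early iterations stay damped against noise, which is the qualitative behavior the paper exploits.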
To address the blind image restoration problem, we propose combining a Parzen-window estimate with a regularization technique (PWERT). The Parzen-window estimate is used to obtain the point spread function, and the regularization technique controls noise amplification and the ill-posedness of the restoration. Experimental results show that PWERT deblurs degraded images effectively and is robust to noise.
Based on the large gray-scale difference between auricle parenchyma and bone tissue, this article discusses processing CT images of the auricle by gray-scale thresholding, using dual gray-scale thresholds to distinguish the auricle from surrounding tissues. This provides a foundation for reverse-designing an individualized 3-D auricle bracket model from a CT image of the patient's normal-side auricle, and for manufacturing a lifelike auricle bracket through rapid prototyping.
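Dual-threshold segmentation amounts to keeping pixels whose gray value falls inside a band; a minimal sketch (the actual threshold values for auricle CT are an empirical matter and are assumed here):

```python
def dual_threshold(img, lo, hi):
    """Label pixels whose gray value lies in [lo, hi] as foreground (1),
    e.g. auricle parenchyma, and everything else (bone, background) as 0."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in img]
```

The binary mask produced per slice can then be stacked into the 3-D bracket model that feeds rapid prototyping.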
Because of the huge computation involved in 3D medical data visualization, interactively exploring the interior of the data has long been a problem. In this paper, we present a novel approach to exploring 3D medical datasets in real time by using a 3D widget to manipulate the scanning plane. With the 3D texture capability of modern graphics cards, a virtual scanning probe explores oblique clipping planes of the medical volume data in real time. A 3D model of the dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the data. The approach should be a valuable tool for anatomy education and for understanding medical images in medical research.
We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RETs), to be used in an automated RET recognition system for peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and recognized in terms of gray-level entropy and the area of connected regions. In the watershed step, the decision conditions are controlled according to the character of the image, and the segmentation is performed by morphological subtraction. The algorithm was simulated in MATLAB. On 50 RET images, automated and manual scoring were similar, with good correlation between the methods (r = 0.956). The results indicate that the algorithm is comparable to conventional manual scoring for peripheral blood RETs and is superior in objectivity. It also avoids time-consuming computations such as ultra-erosion and region growing, which speeds up the computation considerably.
This paper applies computer image processing and pattern recognition methods to the automatic classification and counting of leukocytes (white blood cells) in peripheral blood. A new five-part leukocyte differential algorithm based on image processing and pattern recognition is presented, which realizes automatic leukocyte classification. The first task is to detect the leukocytes; a major requirement of the whole system is then to classify them into five classes. Modeled on the saliency mechanism of human vision, the algorithm processes the image in sequence, segments the leukocytes, and extracts features. Using prior knowledge of the cells together with image shape information, it first segments the probable leukocyte shapes with a new Chamfer-based method and then extracts detailed features, which greatly reduces both the misclassification rate and the computation; it also has a learning function. The paper additionally presents a new measurement of nuclear shape that provides more accurate information. The algorithm has great application value in clinical blood tests.
In this paper, we propose a new model based on the C-V model. Exploiting intra-region similarity and inter-region dissimilarity, it adjusts its parameters automatically using the mean region values inside and outside the curve, and is well suited to segmenting images with weak edges. In segmenting 3-D ultrasonic breast tumors, the model is faster than the C-V model and gives gratifying segmentation results.
Hyperspectral remote sensing, an advanced remote sensing technology, has found wide application in many fields. However, its massive, high-dimensional data pose a challenge for processing and analysis, and hyperspectral image fusion has emerged as a new method against this background: the fused image carries enhanced information that is more understandable and decipherable for accurate object recognition. In this paper, we propose a novel method for image fusion and enhancement using Empirical Mode Decomposition (EMD). EMD is a data analysis method that expresses the tendency of signals at different scales by decomposing any complicated signal into a set of Intrinsic Mode Functions (IMFs). In our method, we decompose the images from different hyperspectral bands into their IMFs and perform image fusion at the decomposition level. Based on an empirical understanding of the nature of the IMFs, we devise adaptive weighting schemes that emphasize features from the different band images, thereby increasing the information and visual content of the fused image.
Image registration geometrically aligns one dataset with another and is a basic task in a great variety of biomedical imaging applications. This paper introduces a novel three-dimensional registration method for Magnetic Resonance Imaging (MRI) of the rat brain and the Paxinos-Watson rat brain atlas. To accommodate the large-range, non-linear deformation between the MRI data and the atlas with high registration accuracy, we first segment the rat brain, then use principal components analysis (PCA) to perform the linear registration automatically, followed by a level-set-based nonlinear registration that corrects small residual distortions. We implemented this registration method in a rat brain 3D reconstruction and analysis system. Experiments have demonstrated that the method can successfully register low-resolution, noisy MRI data with the Paxinos-Watson atlas of the rat brain.
Tetrahedral mesh generation, a prerequisite of many soft-tissue simulation methods, is very important in virtual surgery programs because of their real-time requirements. To speed up computation in the simulation, we propose a revised Delaunay algorithm that, through several improvements, strikes a good balance among tetrahedron quality, boundary preservation, and time complexity. Another mesh algorithm, called Space-Disassembling, is also presented, and the Space-Disassembling algorithm, the traditional Delaunay algorithm, and the revised Delaunay algorithm are compared on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.
Parallel Processing of Images and Optimization Techniques
The Multi-mode Airborne Digital Camera System (MADC) was developed by the Institute of Remote Sensing Applications and the Shanghai Institute of Technical Physics in 2006. Several completed aerial experiments have demonstrated that the system performs well for aerial photography, but the image smear caused by position change and the forward motion of the aircraft adversely affects image quality, so finding an effective way to compensate for image smear is a key technique for the continued improvement of the MADC system. We have designed an external image-smear compensation module and written dedicated image-processing software to compensate for the smear. Experiments and simulations in the laboratory and on the ground have shown that both approaches help the MADC system acquire better aerial remote sensing images. Aerial experiments will be conducted to verify these methods further.
A hyperspectral imager collects hundreds of images of the observed area simultaneously, one per wavelength channel, making it possible to discriminate man-made objects from natural background. However, the price paid for this wealth of information is an enormous amount of data, usually hundreds of gigabytes per day, and turning this huge volume of data into useful information and knowledge in real time is critical for geoscientists. The parallel Gaussian-Markov random field (Para-GMRF) anomaly detection algorithm proposed in this paper is an attempt to solve the problem with parallel computing. Exploiting the locality of the GMRF algorithm, we partition the 3-D hyperspectral image cube in the spatial domain and distribute the data blocks to multiple computers for concurrent detection. Meanwhile, to achieve load balance, a work-pool scheduler is designed for task assignment. The Para-GMRF algorithm is organized in a master-slave architecture, coded in C using the message passing interface (MPI) library, and tested on a Beowulf cluster. Experimental results show that Para-GMRF meets the challenge and can be used in time-sensitive areas such as environmental monitoring and battlefield reconnaissance.
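The work-pool idea is that block indices sit in a shared queue and each worker pulls the next block the moment it finishes, so slow and fast blocks even out automatically. The paper implements this in C with MPI; the sketch below substitutes Python threads for MPI ranks purely to show the scheduling pattern:

```python
import queue
import threading

def work_pool(blocks, detect, workers=4):
    """Master-slave work pool: tasks are pulled from a shared queue on demand,
    balancing load when per-block detection time varies."""
    tasks = queue.Queue()
    for i, blk in enumerate(blocks):
        tasks.put((i, blk))
    results = [None] * len(blocks)

    def worker():
        while True:
            try:
                i, blk = tasks.get_nowait()
            except queue.Empty:
                return  # pool drained, worker exits
            results[i] = detect(blk)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the MPI version the master plays the role of the queue, handing out the next block index in reply to each slave's result message.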
An improved approach to computing the inverse discrete cosine transform (IDCT) is proposed based on the B. G. Lee algorithm. We replace the multiplication operators in the original B. G. Lee algorithm with addition and shift operators and a lookup table to implement the computation in fixed point. Because the multiplications are eliminated, the modified algorithm takes less time to complete the same computation.
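The core trick is that each constant multiplication in the flow graph can be decomposed into a fixed chain of shifts and adds once the constant is stored in fixed point. A sketch for one such constant, cos(π/4) in Q15 format (this is one butterfly constant only, not the full B. G. Lee flow graph, and the particular shift decomposition is ours):

```python
# Q15 fixed-point constant: round(0.70710678 * 2**15) = 23170
# 23170 = 16384 + 4096 + 2048 + 512 + 128 + 2  (sum of powers of two)

def mul_c_shift_add(x):
    """x * cos(pi/4), approximated in Q15, using only shifts and adds."""
    acc = ((x << 14) + (x << 12) + (x << 11)
           + (x << 9) + (x << 7) + (x << 1))
    return acc >> 15  # drop the Q15 scaling
```

A lookup table generalizes this: each fixed-point constant maps to its precomputed power-of-two decomposition, so no hardware multiplier is ever invoked.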
Fuzzy C-Means (FCM) clustering is one of the most mature and widely used objective-function-based algorithms for unsupervised classification. To account for the spatial relationships of pixels when FCM is applied to remote sensing imagery, a neighbor-based FCM algorithm is put forward that modifies the fuzzy membership degrees with neighborhood information during the clustering iterations: the membership of the central pixel is refined using the dominant class of a fixed neighborhood when one can be determined, or otherwise using weights based on the distances to the neighbors. A parallel implementation of the algorithm is also proposed, taking into account the communication complexity and the spatial relationships used for image partitioning. Experimental data show that the algorithm decreases the number of clustering iterations and increases classification precision, and that the parallel version achieves satisfactory linear speedup.
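A toy version of the neighbor-based membership correction: blend a pixel's membership vector with the average of its neighbors' memberships and renormalize. The blending weight alpha is an assumed parameter, and the paper's fuller dominant-class and distance-weighted rules are not reproduced:

```python
def refine_memberships(u, neighbors, alpha=0.5):
    """Blend the center pixel's fuzzy membership vector u with the mean
    membership of its neighbors (a list of vectors), then renormalize so
    memberships again sum to one."""
    k = len(u)
    avg = [sum(nb[c] for nb in neighbors) / len(neighbors) for c in range(k)]
    mixed = [(1 - alpha) * u[c] + alpha * avg[c] for c in range(k)]
    s = sum(mixed)
    return [m / s for m in mixed]
```

The effect is spatial smoothing of the fuzzy partition: an isolated pixel whose memberships disagree with its neighborhood is pulled toward the locally dominant class.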
This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. The reformed image is then decomposed into blocks of different frequency bands by a 2-D wavelet transform, and each block is quantized and coded with a Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.
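The final entropy-coding stage can be sketched as a textbook Huffman table builder over the quantized subband coefficients (this is generic Huffman coding, not the paper's specific bit-allocation or subband ordering):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} for a sequence of
    quantized coefficients: repeatedly merge the two least frequent trees."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate one-symbol alphabet
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak id, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]
```

Frequent quantizer outputs (typically zeros in high-frequency subbands) get short codes, which is where the high compression ratio for radar imagery comes from.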
Object recognition and automatic target tracking systems require image enhancement to adjust image contrast and improve visual quality. Many enhancement methods exist, such as linear, piecewise-linear, and nonlinear transforms, histogram equalization, histogram normalization, power-law transforms, and local gray-level statistics. By analyzing the gradient characteristics of image gray levels, this paper presents an image enhancement method based on local statistics of the neighborhood gradient mean. Experimental results demonstrate the correctness and utility of the method.
The DPGrid system is based on distributed parallel computation technology. Its hardware framework comprises cluster computers, disk-array storage, a gigabit Ethernet switch, and workstations; its parallel computation algorithms cover data storage, message transport, task matching, and load balancing.
In this paper, a practical and cost-efficient fingerprint recognition system model is proposed that captures the fingerprint image, transmits the data, and performs recognition. The system consists of six modules: a Management Module (a TMS320VC5502 DSP and memories), a Fingerprint Sensor Module (which collects the fingerprint image), an Output Module (the interface that controls an electronic lock), a Human-Machine Communication Module (a seven-segment LED display and keyboard), a Debugger Interface Module (JTAG), and a Power Manager and Power Switchover Module. Unlike other fingerprint recognition systems, this system uses the TI C5502, a high-performance, low-power, fixed-point DSP, as its core processor, so the whole system can be battery-powered and can operate more than 10,000 times on batteries. In addition, a Power Switchover Module that automatically switches the power supply between a wall adapter and batteries is proposed, and software optimizations make the system practical. The design not only simplifies the system structure and reduces hardware cost but also decreases power and resource consumption, so the hardware can be used in practical applications such as portable identification devices and fingerprint locks; it is mainly designed here for a fingerprint lock.
This paper designs circular Gabor filters that incorporate human visual characteristics, and introduces the concept of mutual information entropy from rough set theory to evaluate the effect on clustering of the features extracted by the different filters, so that redundant features can be discarded. Experimental results indicate that the proposed algorithm outperforms conventional approaches in texture segmentation, in terms of both objective measurements and visual evaluation.
Traditional optical flow estimation methods have two drawbacks: first, flow estimation is not accurate enough at the borders of the target, which blurs them; second, as the speed of object motion increases, the error of the brightness-constancy assumption also increases. Focusing on these two points, an improved optical flow estimation method is presented in this paper. To alleviate flow-constraint errors, we employ a re-weighted least-squares method to suppress unreliable flow constraints, leading to robust estimation of optical flow. In addition, a coarse-to-fine adjustment scheme is proposed to refine the optical flow estimates, especially for large image motions. We also propose an algorithm for target segmentation of image sequences based on clustering in the feature vector space. Experimental results on synthetic and real image sequences show that the proposed algorithm compares favorably with existing methods in terms of accuracy and computational cost, and that segmentation remains possible even against complicated backgrounds.
Particle swarm optimization (PSO) is a population-based stochastic optimization technique. It shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GAs), but compared with GAs it has a simpler model, fewer parameters, and faster computation, which makes it attractive to researchers. This paper presents a new particle swarm optimization based on uniform design and inertia mutation (UMPSO). It uses uniform designs (UD) to initialize the particles, which increases the probability that some particles start at or near the position of the global optimum, so the new PSO can find the global optimum with higher probability and greater speed. The particles stay diverse through mutation of the inertia particle, applied with probability 1 during evolution, which lets the new PSO find more precise solutions. Simulation results verify that the new PSO finds more precise solutions, faster, than the standard algorithm.
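For reference, the standard PSO that UMPSO improves upon can be sketched in a few lines: each particle blends inertia with pulls toward its personal best and the swarm's global best. This is the baseline only; the paper's uniform-design initialization and inertia mutation are not implemented here, and the coefficient values are conventional defaults:

```python
import random

def pso_minimize(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 1-D particle swarm minimization of f over [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]   # positions
    vs = [0.0] * n                                 # velocities
    pbest, pval = xs[:], [f(x) for x in xs]        # personal bests
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]                # global best
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval
```

UMPSO's two changes slot into this loop directly: uniform-design points replace the random initial `xs`, and the inertia term is mutated each generation to preserve diversity.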
This paper describes a robust texture feature invariant to rotation and scale changes: the texture energy associated with a mask generated by particle swarm optimization (PSO). The detailed procedure and algorithm for generating the mask are discussed, and feature extraction experiments on aerial images are reported. Experimental results indicate that the feature is effective and that the PSO-based algorithm is a viable approach to the "tuned" mask training problem.
Because aerial digital imagery is updated ever more frequently and the data volumes are massive, traditional image-processing methods based on serial computing have difficulty meeting the needs of high production efficiency and rapid response. To improve the efficiency of data processing, the computation must be parallelized. This paper discusses a method for parallel processing of massive aerial digital images on a cluster computer system. It also discusses a method for quickly generating mosaic images without ground control points, and introduces the application of mosaic images in rapid response and aerial survey. The experimental results demonstrate that parallel computing clearly improves the efficiency of processing massive data.
In this paper, we propose a novel digital image watermarking scheme based on the discrete cosine transform (DCT). The major difference from traditional methods is that the proposed scheme need not embed the watermark image into the original public image. Instead, the original image is separated into 8*8-pixel blocks and the DCT is applied to every block; the DC (direct current) coefficient of each block is extracted as the feature of the original image, and an ownership map is derived from this feature and the watermark image. When piracy occurs, the copyright owner can assert ownership by using the ownership map and the feature of the suspected image to extract the watermark. This process does not require the original public image. Moreover, the size of the watermark image is not restricted to be smaller than that of the original image. Experiments demonstrate the robustness of the proposed scheme against several common attacks.
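The block-feature step can be sketched as follows. For an 8x8 DCT-II block the DC coefficient reduces to the scaled sum of the pixels, so no full transform is needed; the binarization rule used here (comparing each block's DC to the mean DC over all blocks) is a common choice assumed for illustration, not necessarily the paper's exact rule.

```python
def dct_dc(block):
    """DC (u=v=0) coefficient of an 8x8 2-D DCT-II block.
    With the JPEG-style normalization, all cosine terms are 1 at
    u=v=0, so the coefficient is simply sum(pixels) / 8."""
    s = sum(sum(row) for row in block)
    return s / 8.0

def feature_bits(image, rows, cols):
    """One feature bit per 8x8 block: 1 if the block's DC coefficient
    exceeds the mean DC over all blocks, else 0 (assumed rule)."""
    dcs = []
    for r in range(0, rows, 8):
        for c in range(0, cols, 8):
            block = [image[r + i][c:c + 8] for i in range(8)]
            dcs.append(dct_dc(block))
    mean = sum(dcs) / len(dcs)
    return [1 if d > mean else 0 for d in dcs]
```

An ownership map would then be formed by combining this bit sequence with the watermark bits (e.g., by XOR), so nothing is ever embedded in the public image itself.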
This paper introduces the algorithm module of a new remote sensing image processing system, which is coded in Visual C++ 6.0 and can process large volumes of remote sensing imagery. The key technologies adopted in the algorithm module are also described. Two shortcomings of the American remote sensing image processing system ERDAS are identified, concerning the image filter algorithm and the storage of pixel values that fall outside the data-type range. In the authors' system, optimized methods are implemented for these two aspects. Comparison with the ERDAS IMAGINE system shows both methods to be effective in image analysis.
An adaptive optimization watermarking algorithm based on the genetic algorithm (GA) and the discrete wavelet transform (DWT) is proposed in this paper. The core of the algorithm is a GA-based fitness-function optimization model for digital watermarking. The embedding intensity of the watermark can be modified adaptively, so the algorithm effectively ensures the imperceptibility of the watermark while preserving robustness. The optimization model may also provide a new approach to resisting coalition attacks on digital watermarking algorithms. Numerous experiments were performed, including embedding and extraction of watermarks, the influence of the weighting factor, embedding the same watermark into different cover images, embedding different watermarks into the same cover image, and a comparative analysis between this optimization algorithm and a human visual system (HVS) algorithm. The simulation results and further analysis show the effectiveness and advantages of the new algorithm, which is also versatile and extensible and has better resistance to coalition attacks. Moreover, the robustness and security of the watermarking algorithm are improved by scrambling transformation and chaotic encryption during preprocessing of the watermark.
Beginning with the mathematical model of spacecraft long-range rendezvous, we study the problem of a space maneuver vehicle (SMV) performing long-range rendezvous with an on-orbit target based on the Lambert orbit maneuver method. To find a minimum-velocity-impulse transfer orbit, we set up objective functions for the cases in which the SMV's initial orbit parameters are known or unknown. We use an advanced intelligent optimization technique, particle swarm optimization, to find the objective function's extremum, from which we obtain the SMV's proper initial orbit parameters and transfer occasion. Under these conditions the SMV needs little velocity impulse to approach the target spacecraft. Finally, simulation results show that this method yields a transfer orbit that rapidly approaches the on-orbit target with very little velocity impulse.
Nowadays the wavelet transform is one of the most effective transforms in image processing, especially the biorthogonal 9/7 wavelet filters proposed by Daubechies, which perform well in image compression. This paper studies implementation and optimization technologies for the 9/7 wavelet lifting scheme on a DSP platform, including carrying out the lifting steps in fixed point instead of time-consuming floating-point operations, adopting pipelining to improve the iteration procedure, reducing the number of multiplications by simplifying the normalization step of the two-dimensional wavelet transform, and improving the storage format and ordering of wavelet coefficients to reduce memory consumption. Experimental results show that these implementation and optimization technologies improve the lifting algorithm's efficiency by more than a factor of 30, establishing a technical foundation for developing a real-time remote sensing image compression system in the future.
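The lifting scheme being optimized can be illustrated with a plain floating-point 1-D CDF 9/7 lifting transform (the DSP version replaces these operations with fixed-point arithmetic and pipelining). The boundary handling here, folding the mirrored neighbor into the end samples, and the normalization convention for K are assumptions of this sketch; whatever the convention, lifting steps are exactly invertible.

```python
# CDF 9/7 lifting coefficients (Daubechies-Sweldens factorization)
ALPHA = -1.586134342
BETA = -0.05298011854
GAMMA = 0.8829110762
DELTA = 0.4435068522
K = 1.149604398  # normalization; conventions vary between codecs

def fwd97(x):
    """One level of the 1-D CDF 9/7 lifting transform. len(x) even."""
    x = list(x)
    n = len(x)
    for i in range(1, n - 1, 2):              # predict 1
        x[i] += ALPHA * (x[i - 1] + x[i + 1])
    x[n - 1] += 2 * ALPHA * x[n - 2]
    x[0] += 2 * BETA * x[1]                   # update 1
    for i in range(2, n, 2):
        x[i] += BETA * (x[i - 1] + x[i + 1])
    for i in range(1, n - 1, 2):              # predict 2
        x[i] += GAMMA * (x[i - 1] + x[i + 1])
    x[n - 1] += 2 * GAMMA * x[n - 2]
    x[0] += 2 * DELTA * x[1]                  # update 2
    for i in range(2, n, 2):
        x[i] += DELTA * (x[i - 1] + x[i + 1])
    approx = [x[i] * K for i in range(0, n, 2)]
    detail = [x[i] / K for i in range(1, n, 2)]
    return approx, detail

def inv97(approx, detail):
    """Exact inverse: undo the lifting steps in reverse order."""
    n = 2 * len(approx)
    x = [0.0] * n
    for i, a in enumerate(approx):
        x[2 * i] = a / K
    for i, d in enumerate(detail):
        x[2 * i + 1] = d * K
    x[0] -= 2 * DELTA * x[1]                  # undo update 2
    for i in range(2, n, 2):
        x[i] -= DELTA * (x[i - 1] + x[i + 1])
    x[n - 1] -= 2 * GAMMA * x[n - 2]          # undo predict 2
    for i in range(1, n - 1, 2):
        x[i] -= GAMMA * (x[i - 1] + x[i + 1])
    x[0] -= 2 * BETA * x[1]                   # undo update 1
    for i in range(2, n, 2):
        x[i] -= BETA * (x[i - 1] + x[i + 1])
    x[n - 1] -= 2 * ALPHA * x[n - 2]          # undo predict 1
    for i in range(1, n - 1, 2):
        x[i] -= ALPHA * (x[i - 1] + x[i + 1])
    return x
```

The fixed-point optimization described in the abstract amounts to replacing each `coefficient * value` above with scaled integer multiply-and-shift operations, and merging the final K scaling into the quantizer.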
The discrete cosine transform (DCT) is widely applied in image and video compression. This paper presents a software/hardware co-design method based on SystemC. As a case study, a two-dimensional (2-D) DCT algorithm was implemented on a field-programmable gate array (FPGA) chip. SystemC's short simulation and verification times greatly increase design efficiency, bringing products to market more quickly. The design results obtained with SystemC are compared between an expert hardware designer and a software designer with little hardware knowledge. The results show that SystemC is an excellent and highly efficient hardware design method for an expert hardware designer.
Datasets of tens of gigabytes are becoming common in computational and experimental science. Providing remote visualization of these large datasets with adequate quality and interactivity is an extremely challenging task, particularly for scientists who collaborate from widely distributed locations and whose primary access to visualization resources is a desktop computer. This paper describes a remote visualization system for large-scale terrain rendering based on a parallel streaming pipeline architecture. The visualization pipeline is divided in a client-server paradigm to take advantage of the powerful computing and storage resources of dedicated computers. The two key components of the framework are view-dependent simplification of the terrain mesh and a scheme for delivering a minimally necessary subset of triangle strips to any user requesting an interactive visualization session. To verify the effectiveness of the proposed schemes and data structures, a prototype system was implemented on the backbone of China's next-generation Internet. Approximately 60 GB of image resources for flight simulation were stored centrally in Wuhan, while scientists dispersed in Beijing and Shanghai could manipulate and visualize these large 3D datasets efficiently and flexibly; furthermore, the need to replicate data to local desktops was eliminated.
This paper presents a new automatic method for detecting building outlines based on height and intensity data from an airborne LiDAR system. The main idea is to detect building outlines using a support vector machine (SVM) trained on knowledge obtained from manual interpretation of the training data. SVMs are a family of supervised learning methods used for classification and regression. Experiments on real data are presented and show the feasibility of the suggested approach.
Image matting is the process of extracting a foreground object from a complex background. This paper proposes a robust interactive image matting approach. The method requires only a few user interactions, in the form of drawing a rectangle and a few strokes, to indicate background and foreground. We consider the constraints of accuracy and continuity of the estimated alpha values together to find the optimal matte by iterative energy optimization. Unlike existing sampling-based natural image matting methods, which use only intensity information from statistical sampling of known foreground and background pixels to estimate the unknown pixels, we consider the distribution of the known pixels in color, texture, and spatial spaces and build a more robust statistical model. At each iteration, the statistical model is updated according to the previous matting result; furthermore, an accuracy function for sampling is proposed. These measures make the sampling of foreground and background pixels more accurate and thus improve the performance of the matting process. Experiments show that, compared with previous approaches, our method more efficiently extracts high-quality mattes for texture-rich images and for difficult images in which foreground and background have very similar colors, while requiring a surprisingly small amount of user interaction.
Infrared (IR) image synthesis of ocean scenes has become increasingly important, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways water interacts with the environment, and detailed calculation of ocean temperature has rarely been considered by previous investigators. With the advance of programmable graphics hardware, many algorithms previously limited to offline processing have become feasible for real-time use. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous work in three respects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal-balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some resulting infrared images are shown, which are in good accordance with real images.
A new method to obtain IR images without an infrared thermal imager is proposed. The key idea is to invert a 3D scene of the target, having the same geometry and material information as the objects in the real world, into an IR scene according to the working principle of the thermal imager. The surface temperatures of all objects in the 3D scene, calculated beforehand, determine the total radiation power received by the simulated thermal imager, so the temperatures under the given internal working parameters of the simulated thermal imager can be worked out. The IR scene is then formed by converting these temperatures to color values according to the relationship between temperature and color, and an IR image from any viewing angle or distance is finally obtained. The experimental results show that this method of inverting the 3D scene to obtain IR images is both effective and highly efficient.
An efficient connected-component labeling algorithm for multi-value images is proposed in this paper. The algorithm is simple and regular, making it suitable for hardware design. A one-dimensional array is used to store equivalence pairs; this organization of the equivalence table facilitates finding the minimum equivalent label and shortens the time spent processing the equivalence table. A pipelined architecture of the algorithm is described to enhance system performance.
The discrete Fourier transform (DFT) is an important tool in digital signal processing. We have previously proposed an approach to performing the DFT via linear sums of discrete moments. In this paper, building on that work, we present a new method of performing the fast Fourier transform without multiplications, using appropriate bit and shift operations in the binary system, which can be implemented with fixed-point integer additions. The systolic implementation demonstrates the locality of dataflow in the algorithms and hence implies a straightforward, practical hardware/VLSI realization. The approach is also applicable to the inverse DFT.
In this paper, a fast global motion estimation (GME) method is proposed for video coding. The method accommodates not only a translational motion model but also a polynomial motion model. It speeds up GME by pre-analyzing the characteristics of each block. In the first stage, smooth-region blocks, which contribute little to GME, are filtered out using a threshold method based on image intensity. Next, a threshold method based on the discrepancy of the motion vectors is used to exclude foreground blocks from the GME. The experimental results show that the proposed method speeds up estimation of the motion-vector field while maintaining coding performance.
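The first filtering stage can be sketched as follows; per-block intensity variance is assumed here as the smoothness measure, and the threshold is illustrative, since the abstract does not specify either.

```python
def select_blocks(frame, bs, tex_thresh):
    """Pre-analysis stage: keep only textured blocks for GME.
    Smooth blocks (intensity variance below tex_thresh) are discarded,
    since they contribute little to global motion estimation.
    Returns the (x, y) origins of the retained bs*bs blocks."""
    h, w = len(frame), len(frame[0])
    keep = []
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            px = [frame[y][x]
                  for y in range(by, by + bs)
                  for x in range(bx, bx + bs)]
            mean = sum(px) / len(px)
            var = sum((p - mean) ** 2 for p in px) / len(px)
            if var >= tex_thresh:
                keep.append((bx, by))
    return keep
```

The second stage would apply an analogous threshold to the deviation of each retained block's motion vector from the current global-motion prediction, dropping outlier (foreground) blocks.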
In this paper, we describe a random-function-based combination of algorithms to improve the quality of variable-coefficient error diffusion (V-C-E-D). To remove its visual artifacts, we propose a variable-direction error-diffusion (V-D-E-D) algorithm that generates artifact-free images at the intensities where V-C-E-D produces artifacts. A random function and a set of thresholds control the selection between the algorithms, and experiments were performed at the critical levels to find the optimal thresholds. The result of the algorithm combination is artifact-free halftoning over the full range of intensities: the artifacts of both V-C-E-D and V-D-E-D are removed. Fourier spectrum analysis at selected key densities further supports this conclusion.
Constructing a virtual international strategy environment requires many kinds of information: economic, political, military, diplomatic, cultural, scientific, and so on. It is therefore very important to build a highly efficient management system for automatic information extraction, classification, recombination, and analysis as the foundation and a component of a military strategy hall. This paper first uses an improved boosting algorithm to classify the collected raw information, then uses a strategic-intelligence extraction algorithm to extract strategic intelligence from the raw information to help strategists analyze it.
2-D convolution is a simple mathematical operation fundamental to many common image processing operators. Using an FPGA to implement the convolver can greatly reduce the DSP's heavy burden in signal processing, but with limited resources an FPGA can only implement a convolver with a small 2-D kernel. In this paper, a FIFO-type line delayer is presented to serve as the data buffer for convolution, reducing data-fetching operations. A finite state machine is applied to control the reuse of the multiplier and adder arrays. With these two techniques, a resource-limited FPGA can be used to implement the larger-kernel convolvers commonly used in image processing systems.
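A software model of the FIFO line-buffer idea, assuming raster-order pixel input and valid-region output (no border padding): the deque of buffered rows stands in for the k-1 line delayers, so each pixel is read from the stream exactly once rather than k*k times.

```python
from collections import deque

def convolve_stream(pixels, w, kernel):
    """2-D convolution over a raster-order pixel stream of width w.
    A deque of the last k rows models the FIFO line delayers: once k
    rows are buffered, the k*k window slides along the newest row.
    Only fully valid output positions are produced."""
    k = len(kernel)
    rows = deque(maxlen=k)   # k-1 line FIFOs plus the current line
    row, out = [], []
    for p in pixels:         # one pixel fetched per clock
        row.append(p)
        if len(row) == w:    # a full image line has arrived
            rows.append(row)
            row = []
            if len(rows) == k:            # window can slide now
                out_row = []
                for x in range(w - k + 1):
                    acc = 0
                    for ky in range(k):   # multiply-accumulate array
                        for kx in range(k):
                            acc += kernel[ky][kx] * rows[ky][x + kx]
                    out_row.append(acc)
                out.append(out_row)
    return out
```

In hardware, the inner multiply-accumulate loops are exactly what the finite state machine time-multiplexes over a smaller array of physical multipliers and adders.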
A nonlinear regularization method is presented for the restoration of aero-optically degraded images, in which two frames of short-exposure images are used to construct a series of equations for estimating the discrete values of the stochastic turbulent point spread function (PSF). To overcome the interference of noise, an optimization algorithm is proposed that estimates the PSF values using nonlinear regularization, incorporating into the estimation the a priori knowledge that the PSF values are non-negative and spatially smooth. A series of experiments performed to test the algorithm show that the proposed nonlinear regularization optimization method is effective.
Deep space exploration has an important effect on science and the national economy. Since a rover used in deep space exploration is controlled from a ground station, its slow response rate and walking speed are a major obstacle. Computer vision can be applied to the autonomous navigation of a rover, but at present two cameras must be used simultaneously to record images of the field of view; by matching the two images, the height at every discrete point in the field can be determined by some algorithm. However, the matching computation is so complex that the rover's walking speed has to be limited. This paper deals with improving the autonomous navigation technique so that the rover can move rapidly. To that end, a new technique for automatic rover navigation, combining a laser with a camera to substitute for image matching, is proposed: a laser beam scans the road inside the camera's field of view while the camera takes photographs one after another. An algorithm is given to obtain the depth of points in the field. A simulation example demonstrates the feasibility of the technique and the correctness of its algorithm, and shows that the technique can markedly decrease detection time and identify the terrain of the field of view rapidly. The new vision system captures information about the field of view by means of the geometric relationship formed by the laser, the camera, and the laser spots, and accordingly can process images quickly. Combined with image reconstruction, this technique will greatly enhance the speed of automatic navigation and find extensive application in deep space exploration.
For 3D measurement methods such as grating-projection measurement, whose 3D output data are arranged in parallel lines, this paper proposes an algorithm that reconstructs a surface represented as a triangle mesh more efficiently, easily, and automatically than algorithms for scattered 3D points. The algorithm dynamically selects pairs of matched points from every two neighboring lines, one point from each line; the segment joining a pair of matched points is called a skeleton line, and the skeleton lines are kept as nearly parallel as possible. The four end points of two neighboring skeleton lines then form a quadrangle, and hence two well-shaped triangles as elements of the triangle mesh. The method is further improved to avoid surface-gap problems.
A new labeling theory and method are proposed for two-dimensional line drawings, with hidden parts drawn, of three-dimensional manifold curved-surface objects with trihedral vertices. Rules for labeling such line drawings are established. There are 69 kinds of junctions in line drawings with hidden parts drawn: 8 kinds of Y-junctions, 16 kinds of W-junctions, 11 kinds of S-junctions, and 34 kinds of V-junctions. The proposed theory and method can discriminate between correct and incorrect line drawings with hidden parts drawn for manifold curved-surface objects, and can handle line drawings of complicated objects composed of manifold planar and curved surfaces.
The spectral dependence between neighboring pixels in a remotely sensed image is useful for discriminating land-use/cover types, but it is neglected in most classical classification algorithms. Variogram-based methods are popular for exploiting the spatial information of the spectrum in remotely sensed images. Although such methods have been used in many ways, in practice they still tend to misclassify pixels near boundaries and therefore suffer from boundary blurring. A weighted semivariogram (WSV) method is proposed in this paper to solve that problem, and the results show good performance in improving the boundary classification accuracy of remotely sensed images.
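The building block of any variogram-based classifier is the empirical semivariogram. A minimal 1-D version over a transect of pixel values is sketched below; the paper's weighting scheme for boundary pixels is not reproduced here.

```python
def semivariogram(values, max_lag):
    """Empirical semivariogram of a 1-D transect of pixel values:
    gamma(h) = mean of (z[i+h] - z[i])^2 / 2 over all pairs at lag h.
    Returns [gamma(1), ..., gamma(max_lag)]."""
    n = len(values)
    gamma = []
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2 for i in range(n - h)]
        gamma.append(sum(diffs) / (2 * len(diffs)))
    return gamma
```

A classifier then compares each pixel's local semivariogram (computed over a moving window, usually in 2-D) with class-specific reference variograms; the WSV idea is to weight that comparison so that pixels near class boundaries are not dominated by the neighboring class.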
The redundancy of camera parameters and their effects on the imaging process are analyzed based on a central perspective projection model with nonlinear lens distortion. By fixing some parameter values or their relations in advance, seven kinds of simplified camera models are presented. The usability of the simplified models is validated with simulated data and engineering applications. Using the simplified camera models, videogrammetric methods and algorithms can be simplified without loss of precision: the calculation becomes faster and more stable, and the requirements on the solving conditions are reduced. These characteristics make the precision-preserving simplified camera models suitable for engineering applications.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.