Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892001 (2013) https://doi.org/10.1117/12.2045607
This PDF file contains the front matter associated with SPIE Proceedings Volume 8920, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892002 (2013) https://doi.org/10.1117/12.2032186
Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce computational cost and memory requirements, optimizations are suggested for the most time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted in the hardware design of a real-time image restoration system. The results show that the TFDC architecture and its optimizations can be applied to TFDC-based image restoration algorithms with good algorithmic generality, hardware realizability, and high efficiency.
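To make the time-frequency domain computation concrete, the following NumPy sketch performs frequency-domain restoration from exactly the operations such an architecture accelerates (a 2-D FFT, pointwise complex arithmetic, and an inverse FFT); the Wiener-style filter and the constant k are illustrative assumptions, not the paper's design.

```python
import numpy as np

def wiener_restore(blurred, psf, k=1e-3):
    """Frequency-domain restoration: one 2-D FFT, a pointwise
    complex multiply, and one inverse 2-D FFT."""
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the blur
    G = np.fft.fft2(blurred)
    # Wiener filter in the frequency domain: conj(H) / (|H|^2 + k)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))     # back to the spatial domain
```

A hardware pipeline would map these three steps onto the 2-D FFT/IFFT module and a complex-arithmetic module, with an iteration control module wrapped around them for iterative variants.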
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892003 (2013) https://doi.org/10.1117/12.2031050
Image matching is a fundamental task in computer vision, used to establish correspondence between two images of the same scene taken from different viewpoints or at different times. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single-FPGA image matching system consisting of SIFT feature detection, BRIEF descriptor extraction, and BRIEF matching. It optimizes the FPGA architecture of the SIFT feature detection to reduce FPGA resource utilization, and BRIEF description and matching are also implemented on the FPGA. The proposed system performs image matching at 30 fps (frames per second) for 1280x720 images, a processing speed that meets the demands of most real-life computer vision applications.
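A minimal sketch of the BRIEF stage of such a pipeline (the sampling pattern and the 128-bit descriptor length are assumptions; the FPGA implementation is of course not Python):

```python
import numpy as np

_rng = np.random.default_rng(0)
# 128 random point pairs inside a 9x9 patch (an assumed sampling pattern)
PAIRS = _rng.integers(-4, 5, size=(128, 4))  # (dy1, dx1, dy2, dx2)

def brief_descriptor(image, y, x):
    """One bit per pair: is the first sample darker than the second?"""
    return np.array([int(image[y + a, x + b] < image[y + c, x + d])
                     for a, b, c, d in PAIRS], dtype=np.uint8)

def hamming(d1, d2):
    """BRIEF matching cost: number of differing bits."""
    return int(np.count_nonzero(d1 != d2))
```

Matching then reduces to finding, for each descriptor in one image, the minimum-Hamming-distance descriptor in the other, which maps naturally onto XOR/popcount hardware.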
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892004 (2013) https://doi.org/10.1117/12.2031006
To achieve real-time detection of hedgehopping (low-altitude) targets in large view-field infrared (LVIR) images, this paper proposes a fast algorithm flow for extracting the target region of interest (ROI). Ground building regions are quickly rejected and the target ROI is roughly segmented through background classification. The background image containing the target ROI is then matched against the previous frame using a mean-removal normalized product correlation (MRNPC) similarity measure. Finally, the target motion area is extracted by inter-frame differencing in the time domain. Based on the proposed algorithm flow, this paper designs a high-speed real-time signal processing hardware platform based on FPGA + DSP, and also presents a new parallel processing strategy combining function-level and task-level parallelism, which processes LVIR images with multiple cores and multiple tasks. Experimental results show that the algorithm can effectively extract low-altitude aerial targets against complex backgrounds in a large field of view, and that the hardware platform can process IR images at 50000x288 pixels per second in a large view-field infrared search system (LVIRSS).
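The MRNPC similarity used for frame matching can be sketched as a zero-mean normalized product correlation between two equal-size blocks (whether the paper adds further normalization beyond mean removal is an assumption here):

```python
import numpy as np

def mrnpc(template, window):
    """Mean-removal normalized product correlation of two equal-size
    blocks; returns a value in [-1, 1], invariant to gain and offset."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return float((t * w).sum() / denom) if denom else 0.0
```

The gain/offset invariance is what makes the measure robust to the slow brightness drift typical of infrared imagery.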
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892005 (2013) https://doi.org/10.1117/12.2031144
This paper presents a hardware-efficient design for the one-dimensional (1-D) discrete Fourier transform (DFT). Once the 1-D DFT is formulated in cyclic convolution form, a first-order moments-based structure can be used as the basic computing unit for the DFT computation; it contains only a control module, a statistical module, and an accumulation module. The whole calculation requires only shift operations and additions, with no need for multipliers or large memory. Compared with the traditional DA-based structure for the DFT, the proposed design performs better in terms of area-throughput ratio and power consumption, especially for longer DFT lengths. Similarly efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filtering, and correlation, by transforming them into first-order moments-based cyclic convolutions.
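The first-order-moment trick can be sketched in a few lines: an inner product whose samples are small integers is rewritten as a sum of k times a bucketed coefficient sum, so the result needs only additions. The split below mirrors the statistical/accumulation module description in the abstract, but the exact hardware mapping is an assumption.

```python
def moment_inner_product(coeffs, samples, m):
    """Compute sum(c_i * x_i) for integer samples x_i in [0, m)
    using additions only (no multiplier)."""
    # "Statistical" step: bucket coefficients by sample value
    bucket = [0] * m
    for c, x in zip(coeffs, samples):
        bucket[x] += c
    # "Accumulation" step: the running sum adds bucket[k] exactly k times,
    # yielding sum(k * bucket[k]) without a single multiplication
    acc = total = 0
    for k in range(m - 1, 0, -1):
        acc += bucket[k]
        total += acc
    return total
```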
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892006 (2013) https://doi.org/10.1117/12.2032057
This paper proposes an optimized and efficient matching method for image matching based on Particle Swarm Optimization (PSO), a stochastic optimization algorithm developed by Eberhart and Kennedy in 1995 that is efficient for image matching. Here PSO is applied to image matching in three dimensions with varying angles, for which ordinary template matching involves large computational complexity. PSO has been improved with respect to self-adaptive convergence; by combining PSO with individual intelligence, both the computation and the error rate are significantly reduced. An extension of the PSO algorithm called multi-swarm PSO (MPSO) is introduced and applied to multi-target matching in high-dimensional spaces. The performance of MPSO is satisfactory thanks to the interactions between swarms, such as repulsion and convergence. Experimental results show that Particle Swarm Optimization is much faster on image matching tasks, and that MPSO performs well on multi-target matching, which involves huge computational complexity.
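A minimal global-best PSO, for readers unfamiliar with the base algorithm the paper extends (the bounds, inertia weight, and acceleration constants below are common textbook defaults, not the paper's settings; the multi-swarm interactions are not shown):

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a single global-best swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

In the matching application, f would be a (dis)similarity between the template and the image patch addressed by the particle's position and angle parameters.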
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892007 (2013) https://doi.org/10.1117/12.2031007
This paper introduces and evaluates a new algorithm for the computation of type-III discrete Hartley transforms (DHTs) of length N = 2^n. A length-N type-III DHT can be decomposed into several length-16 type-III DHTs using the radix-2 fast algorithm, and the length-16 type-III DHTs can be computed by first-order moments. The algorithm saves many arithmetic operations, its computational complexity is lower than that of some existing methods, and it can be easily implemented.
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892008 (2013) https://doi.org/10.1117/12.2030383
Differential Evolution (DE) is a simple yet efficient stochastic algorithm for solving real-world problems. However, the performance of DE is sensitive to the mutation and crossover strategies and their associated parameters. In this paper, a scale-factor generation scheme applied during the search process, named MSFDE, is proposed to enhance the performance of DE. In this method, the scale factor is a D-dimensional vector whose components are random numbers drawn for each difference vector during the iteration. The proposed scheme has been evaluated on a test suite of 25 benchmark functions provided by the CEC 2005 special session on real-parameter optimization. The experimental results indicate that MSFDE is competitive with classical DE and with other variants using different strategies.
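One DE/rand/1/bin generation with a per-dimension random scale factor, in the spirit of the scheme described above (the factor range and the DE variant are assumptions; classical DE would use a single scalar F):

```python
import numpy as np

def de_generation(pop, fit, f_low=0.4, f_high=0.9, cr=0.9, rng=None):
    """One DE/rand/1/bin generation; F is a random vector per dimension."""
    rng = rng or np.random.default_rng(0)
    n, d = pop.shape
    out = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3,
                                replace=False)
        F = rng.uniform(f_low, f_high, size=d)   # one factor per dimension
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True            # ensure at least one gene
        trial = np.where(cross, mutant, pop[i])
        if fit(trial) <= fit(pop[i]):            # greedy one-to-one selection
            out[i] = trial
    return out
```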
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 892009 (2013) https://doi.org/10.1117/12.2041569
High-quality photorealistic rendering of 3D models requires powerful computing systems, so highly efficient management of cluster resources is developing rapidly. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principle of the Load Sharing Facility (LSF), and the optimization of an external scheduler plug-in. The algorithm applies to the match and allocation phases of a scheduling cycle: candidate hosts are prepared in sequence in the match phase, and the scheduler makes allocation decisions for each job in the allocation phase. With the dynamic mechanism, a new weight is assigned to each candidate host for rearrangement, and the most suitable one is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks: it helps avoid load imbalance among servers, increases system throughput, and improves system utilization.
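The dynamic-feedback re-weighting step can be sketched as follows; the metric names and weights are illustrative assumptions, not the paper's formula:

```python
def pick_host(hosts, jobs_running, cpu_load, mem_free,
              w_cpu=0.5, w_mem=0.3, w_q=0.2):
    """Re-weight candidate hosts with fresh load feedback and dispatch
    to the best one: lower CPU load, more free memory, shorter queue."""
    def score(h):
        return (w_cpu * (1.0 - cpu_load[h])
                + w_mem * mem_free[h]
                + w_q * (1.0 / (1 + jobs_running[h])))
    return max(hosts, key=score)
```

The scores would be recomputed each scheduling cycle, which is what distinguishes dynamic feedback balancing from a static round-robin assignment.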
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200A (2013) https://doi.org/10.1117/12.2030307
Many image super-resolution methods assume that the optical flow between images can be computed accurately. In practice it is very difficult to obtain, and the imaging system models are mostly unknown, so perturbation errors always occur in the image super-resolution model. This paper proposes an improved image super-resolution algorithm based on the total least squares method. The average of the input images is used as a regularization penalty in the posterior probability model. The paper presents an improved Rayleigh quotient form of the energy objective function, and a conjugate gradient algorithm is used to minimize the modified Rayleigh quotient. The method jointly minimizes the errors in the sampled low-resolution images and the perturbations in the system matrix of the high-resolution reconstruction. Test results show that the algorithm is stable under perturbations of the system matrix.
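The core idea of total least squares, unlike ordinary least squares, is to allow errors in the system matrix A as well as in the observations b; a minimal SVD-based sketch (not the paper's regularized, conjugate-gradient formulation):

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares for A x ~ b: the solution comes from the
    right singular vector of [A | b] with the smallest singular value."""
    n = A.shape[1]
    C = np.column_stack([A, b])      # augmented matrix [A | b]
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # smallest-singular-value direction
    return -v[:n] / v[n]
```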
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200B (2013) https://doi.org/10.1117/12.2035681
In some cases, the vehicle routing problem has special requirements: geographically scattered personnel or goods should be delivered simultaneously to an assigned place by a fleet of vehicles as soon as possible. The objective is then to minimize the length of the longest route among all sub-routes. An improved genetic algorithm is adopted to solve these problems. Each customer has a unique integer identifier, and the chromosome is defined as a string of integers. Initial routes are constructed randomly; then standard proportional selection with elitism is used to guarantee that the best member survives. A new crossover and a 2-exchange mutation are adopted to increase the diversity of the population. The algorithm was implemented and tested on several instances, and the results demonstrate the effectiveness of the method.
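Two pieces of such a GA can be sketched directly from the abstract: the min-max objective (length of the longest sub-route) and the 2-exchange mutation. The depot-at-node-0 convention and the split-point encoding are assumptions.

```python
import random

def route_length(route, dist):
    """Length of one vehicle's tour: depot (node 0) -> customers -> depot."""
    path = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def minmax_fitness(chromosome, splits, dist):
    """Objective from the abstract: the length of the longest sub-route.
    `splits` marks where the customer string is cut into vehicle routes."""
    routes, start = [], 0
    for end in splits + [len(chromosome)]:
        routes.append(chromosome[start:end])
        start = end
    return max(route_length(r, dist) for r in routes)

def two_exchange(chromosome, rng=random):
    """2-exchange mutation: swap two customer positions."""
    c = chromosome[:]
    i, j = rng.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c
```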
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200C (2013) https://doi.org/10.1117/12.2031337
A particle-inspired Monte Carlo tree estimation method is proposed to avoid repeating similar simulations and to handle the depletion problem in particle filters. Guided by the particles, the method divides the state space recursively in a top-down manner to form a tree structure in which each node corresponds to a sub-space, and particles are allocated to the corresponding terminal nodes during the procedure. A minimal sub-space size, or piece, is specified to terminate the division. Each piece corresponds to a leaf node of the tree, and the prediction probability density within it is approximated by the proportion of its particles among all particles. Instead of importance sampling for each particle, the method takes uniformly random measurements to compute the posterior probability density in each piece. Applied to a growth model, the method performs better in high-SNR environments than the Sampling Importance Resampling method.
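The recursive state-space division can be sketched for the one-dimensional, binary-split case (both simplifying assumptions); each leaf "piece" stores the fraction of all particles it contains as its probability estimate:

```python
def build_tree(particles, lo, hi, min_width):
    """Recursively split the interval [lo, hi); leaves ('pieces') carry
    the proportion of all particles falling inside them."""
    n_total = len(particles)

    def split(pts, lo, hi):
        if hi - lo <= min_width or not pts:
            return {"lo": lo, "hi": hi, "prob": len(pts) / n_total}
        mid = (lo + hi) / 2
        left = [p for p in pts if p < mid]
        right = [p for p in pts if p >= mid]
        return {"lo": lo, "hi": hi,
                "children": [split(left, lo, mid), split(right, mid, hi)]}

    return split(list(particles), lo, hi)
```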
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200D (2013) https://doi.org/10.1117/12.2031463
The autoregressive-modeling image interpolation scheme comes noticeably closer to ideal interpolation than conventional methods when obtaining a high-resolution (HR) image from its low-resolution (LR) version. The basic idea is to first estimate the covariance of the HR image from the covariance of the LR image, and then adjust the HR covariance coefficients through a feedback mechanism that accounts for the mutual influence between the estimated missing pixels in a local window. Despite its impressive performance, its time-consuming computation is usually the bottleneck when the method is applied in time-critical scenarios. Graphics Processing Units (GPUs) are attractive candidates for expediting the computation. In this paper, an efficient GPU-based massively parallel version of the autoregressive-modeling interpolation scheme is proposed. Because the pixels to be interpolated do not depend on one another, each estimated pixel is assigned to an independent thread in our parallel interpolation scheme. Experimental results show a speedup of 21.2x, including I/O transfer time, with respect to the original single-threaded C CPU code compiled with -O2 optimization.
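The one-thread-per-pixel mapping can be illustrated with a CPU stand-in for a GPU launch; the per-pixel kernel below is a simple bilinear 2x upsampler for illustration only, not the paper's autoregressive estimator:

```python
import numpy as np

def launch_like_gpu(kernel, out_shape, *args):
    """CPU stand-in for a GPU launch: the kernel runs once per output
    pixel with no inter-pixel dependence (embarrassingly parallel)."""
    out = np.empty(out_shape)
    for idx in np.ndindex(out_shape):   # each idx would be one GPU thread
        out[idx] = kernel(idx, *args)
    return out

def bilinear_kernel(idx, lr):
    """Illustrative per-pixel kernel: 2x bilinear upsampling."""
    y, x = idx
    sy, sx = y / 2.0, x / 2.0
    y0, x0 = int(sy), int(sx)
    y1 = min(y0 + 1, lr.shape[0] - 1)
    x1 = min(x0 + 1, lr.shape[1] - 1)
    fy, fx = sy - y0, sx - x0
    return ((1 - fy) * (1 - fx) * lr[y0, x0] + (1 - fy) * fx * lr[y0, x1]
            + fy * (1 - fx) * lr[y1, x0] + fy * fx * lr[y1, x1])
```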
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200E (2013) https://doi.org/10.1117/12.2031238
ImpulseC is based on the C language and can describe highly parallel, multi-process applications; it also generates an underlying hardware description for each dedicated process. To improve on the well-known bicubic interpolation algorithm, we design bicubic convolution template algorithms with better computing performance and higher efficiency. Simulation results show that the interpolation method not only improves interpolation accuracy and image quality but also preserves the texture of the image well. Using the ImpulseC hardware design tools, we exploit the compiler's features to further parallelize the algorithm, making it more amenable to hardware implementation. On a Xilinx Spartan-3 XC3S4000 chip, our method achieves real-time interpolation at 50 fps. The FPGA experimental results show that the stream of interpolated output images is robust and real-time, and that the allocation of hardware resources is reasonable. Compared with existing hand-written HDL code, the design has the advantage of parallel speedup. Our method offers software engineers a novel path from C to FPGA-based embedded hardware systems.
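For reference, the standard bicubic convolution kernel (Keys' piecewise cubic, here with the common choice a = -0.5) is the function that such a template algorithm evaluates at fixed fractional offsets; precomputing it at those offsets is what turns it into a fixed-coefficient template suitable for hardware:

```python
def cubic_kernel(t, a=-0.5):
    """Keys bicubic convolution kernel; nonzero on |t| < 2."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t ** 3 - (a + 3) * t ** 2 + 1
    if t < 2:
        return a * t ** 3 - 5 * a * t ** 2 + 8 * a * t - 4 * a
    return 0.0
```

Each interpolated sample is then a weighted sum of 4x4 neighbors with weights cubic_kernel(dy) * cubic_kernel(dx).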
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200F (2013) https://doi.org/10.1117/12.2031313
In recent years, low-rank matrix recovery from a corrupted noise matrix has attracted interest as a very effective method for high-dimensional data, and its fast algorithms have become a research focus. This paper first reviews the basic theory and typical accelerated algorithms. These methods aim to mitigate the computational burden, such as the iteration count before convergence and especially the frequent large-scale Singular Value Decompositions (SVDs). For better convergence, we employ the Augmented Lagrange Multiplier method to solve the optimization problem. Recent endeavors have focused on smaller-scale SVDs, especially submatrix-based methods. Finally, we present numerical experiments on large-scale data.
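The SVD step that dominates ALM-style low-rank recovery is singular value thresholding, which shrinks singular values toward zero and so promotes low rank; a minimal sketch:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, used inside each ALM iteration of low-rank recovery."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return (U * s) @ Vt
```

Submatrix and partial-SVD accelerations replace the full SVD here, since only the singular values above tau matter.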
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200G (2013) https://doi.org/10.1117/12.2031555
In printing quality inspection, the inspection of color error is an important task. However, the RGB color space is device-dependent, so RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To address this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB-to-CIELAB color space transformation is proposed in this paper. First, modeling color targets and testing color targets are established, used in the modeling and performance-testing processes respectively. Second, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Third, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the polynomial coefficients. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results show that nonlinear polynomial regression based on the MCMC algorithm is effective: its performance is similar to that of the least-squares approach, it accurately models the RGB-to-CIELAB conversion, and it supports color error evaluation in a printing quality inspection system.
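Since the abstract reports performance similar to least squares, the least-squares baseline for the polynomial color map is a useful reference point; the second-order basis below is an assumed choice, and the MCMC version would place a prior on the same coefficients and sample them with Gibbs updates instead of solving directly.

```python
import numpy as np

def _poly_basis(rgb):
    """Second-order polynomial basis in R, G, B (an assumed model order)."""
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r * r, g * g, b * b])

def fit_poly_map(rgb, lab):
    """Least-squares fit of the RGB -> CIELAB polynomial coefficients."""
    coef, *_ = np.linalg.lstsq(_poly_basis(rgb), lab, rcond=None)
    return coef

def apply_poly_map(coef, rgb):
    return _poly_basis(rgb) @ coef
```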
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200H (2013) https://doi.org/10.1117/12.2030264
The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (age-related macular degeneration and diabetic retinopathy, for example). Although modern retinal imaging technologies can noninvasively acquire high-resolution retinal images, non-uniform illumination, the low contrast of thin vessels, and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering that segments retinal vessels at different likelihood levels. First, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Second, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering and gradient vector flow steps to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and that the intensities of the vessel images accurately represent the likelihood of the vessels.
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200I (2013) https://doi.org/10.1117/12.2032179
Images acquired in free breathing using contrast-enhanced ultrasound exhibit a periodic motion that must be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm that compensates for respiratory motion by effectively combining principal component analysis (PCA) with block matching. The respiratory kinetics of the ultrasound hepatic perfusion image sequence is first extracted using PCA. Then, the optimal phase of the obtained respiratory kinetics is detected after normalizing the motion amplitude, and image subsequences of the original sequence are determined. The image subsequences are registered by block matching using cross-correlation as the similarity measure. Finally, motion-compensated contrast images are obtained via position mapping, and the algorithm is evaluated by comparing the time-intensity curves (TICs) extracted from the original image sequences and the compensated image subsequences. Quantitative comparisons demonstrate that the average fitting error estimated over regions of interest (ROIs) was reduced from 10.9278 ± 6.2756 to 5.1644 ± 3.3431 after compensation.
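The PCA step can be sketched compactly: projecting each frame onto the first principal component of the frame ensemble yields a one-dimensional trace that follows the dominant (breathing) motion. Using the SVD of the centered frame matrix for this is a standard equivalent of PCA.

```python
import numpy as np

def respiratory_kinetics(frames):
    """Project each frame onto the first principal component of the
    frame ensemble; returns one value per frame (the breathing trace)."""
    X = frames.reshape(len(frames), -1).astype(float)
    X -= X.mean(axis=0)                     # center across frames
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[0]                        # scores on the first PC
```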
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200J (2013) https://doi.org/10.1117/12.2031406
Due to the curvature of the coronary artery and the overlap and crossing between its branches, some information is lost in the 3D-to-2D imaging process, which may lead to inaccuracy when reconstructing the three-dimensional vascular tree structure from angiographic images. In this paper, a new three-dimensional reconstruction method using overlap detection for 3-D projection is proposed to address this problem, and experiments prove that the method can raise the accuracy of the reconstruction.
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200K (2013) https://doi.org/10.1117/12.2030912
Functional magnetic resonance imaging (fMRI) is an advanced non-invasive data acquisition technique for investigating neural activity in the human brain. In addition to localizing the functional brain regions activated by a specific cognitive task, fMRI can be used to measure task-related functional interactions among the active regions of interest (ROIs) in the brain. Among the variety of analysis tools proposed for modeling the connectivity of brain regions, Granger causality analysis (GCA) measures the directions of information interactions by looking for lagged effects among the brain regions. In this study, we use fMRI and Granger causality analysis to investigate the effective connectivity of the brain network induced by viewing several kinds of expressional faces. We focus on four kinds of facial expression stimuli: fearful, angry, happy, and neutral faces. Five face-selective regions of interest are localized, and the effective connectivity within these regions is measured for the expressional faces. Our results from 8 subjects show significant effective connectivity from STS to the amygdala; from the amygdala to OFA, aFFA, and pFFA; from STS to aFFA; and from pFFA to aFFA. This suggests an information flow from the STS to the amygdala when perceiving expressional faces; this emotional expression information, conveyed by STS and the amygdala, flows back to the face-selective regions in the occipito-temporal lobes, constituting an emotional face processing network.
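The lagged-effect test behind GCA can be sketched for two time series: x Granger-causes y if adding lagged x to an autoregressive model of y reduces the residual variance. This minimal pairwise version omits the multivariate modeling and significance testing an fMRI study would use.

```python
import numpy as np

def _residual_var(target, rows):
    rows = np.asarray(rows)
    coef, *_ = np.linalg.lstsq(rows, target, rcond=None)
    return float(np.var(target - rows @ coef))

def granger_f(x, y, p=2):
    """Log-ratio of residual variances of AR(p) models of y, without
    and with lagged x; larger values = stronger x -> y influence."""
    rows_r, rows_f, target = [], [], []
    for t in range(p, len(y)):
        ylags = y[t - p:t][::-1]
        xlags = x[t - p:t][::-1]
        rows_r.append(ylags)                           # y history only
        rows_f.append(np.concatenate([ylags, xlags]))  # plus x history
        target.append(y[t])
    target = np.asarray(target)
    return np.log(_residual_var(target, rows_r)
                  / _residual_var(target, rows_f))
```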
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200L (2013) https://doi.org/10.1117/12.2032054
Digital pathological image retrieval plays an important role in computer-aided diagnosis for breast cancer. The retrieval results for an unknown pathological image, which are generally previous cases with diagnostic information, can provide doctors with assistance and reference. In this paper, we develop a novel pathological image retrieval method for breast cancer based on stain components and the probabilistic latent semantic analysis (pLSA) model. Specifically, the method first utilizes color deconvolution to obtain representations of the different stain components for cell nuclei and cytoplasm; block Gabor features are then extracted from the cell nuclei and used to construct the codebook. Furthermore, the connection between the words of the codebook and the latent topics among images is modeled by pLSA, so that each image can be represented by its topics and its high-level semantic concepts can be described. Experiments on a pathological image database for breast cancer demonstrate the effectiveness of our method.
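The color deconvolution step separates stains via Beer-Lambert optical density and the (pseudo-)inverse of a stain matrix; the H&E vectors below are the commonly cited Ruifrok-Johnston-style values and are illustrative, not the paper's calibration:

```python
import numpy as np

def color_deconvolution(rgb, stain_matrix):
    """Per-pixel stain amounts from RGB in [0, 1]: convert to optical
    density, then invert the stain mixing matrix."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))    # Beer-Lambert optical density
    return od @ np.linalg.pinv(stain_matrix)

HE = np.array([[0.65, 0.70, 0.29],   # hematoxylin (cell nuclei)
               [0.07, 0.99, 0.11]])  # eosin (cytoplasm)
```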
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200M (2013) https://doi.org/10.1117/12.2029040
Image mosaicking is widely applied in medical image analysis: it spatially aligns a series of mutually overlapping images and builds from them a seamless, high-quality image with high resolution and a wide field of view. In this paper, gray-level slicing pseudo-color enhancement was first used to map gray levels to pseudo-color, and SIFT features were extracted from the images. Then, using normalized cross-correlation (NCC) as the similarity measure, RANSAC (Random Sample Consensus) was applied to exclude false feature-point matches and complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were performed using wavelet multi-resolution decomposition. Experiments show that this method can effectively improve the precision and automation of medical image mosaicking, and it provides an effective technical approach for automatic medical image mosaicking.
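The NCC similarity measure at the core of the matching step can be sketched in a few lines. This is a generic textbook formulation, not the paper's code; in the pipeline above, a score like this would rank candidate feature-point correspondences before RANSAC rejects the outliers.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-sized grayscale
    patches. Returns a score in [-1, 1]; 1 means identical up to an
    affine brightness/contrast change, making NCC robust to exposure
    differences between overlapping images."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()                       # remove brightness offset
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:                    # flat patch: correlation undefined
        return 0.0
    return float(a @ b / denom)
```

Pairs whose best NCC score falls below a threshold are discarded early; RANSAC then removes the remaining geometrically inconsistent matches.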
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200N (2013) https://doi.org/10.1117/12.2032850
With the development of computer-aided navigation systems, more and more tissues must be reconstructed to provide useful information for surgical pathway planning. In this study, we propose a registration framework for tissues reconstructed from multiple modalities, based on fiducial points on the lateral ventricles. A male patient with a brain lesion was admitted, and his brain was scanned with different modalities. The different brain tissues were then segmented in each modality with suitable algorithms. Marching cubes was used for three-dimensional reconstruction, and the rendered tissues were imported into a common coordinate system for registration. Four pairs of fiducial markers were selected, and the rotation and translation matrices were computed by a least-squares method. The registration results were satisfactory in glioblastoma surgery planning, as they provide the spatial relationship between the tumor and the surrounding fibers and vessels. Hence, our framework is of potential value for clinicians in planning surgery.
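The least-squares rotation and translation from paired fiducial markers has a well-known closed form (the Kabsch/Umeyama solution). The sketch below shows that computation under the assumption that the paper's least-squares step follows this standard construction; the abstract does not spell out its exact algorithm.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t,
    from corresponding fiducial points src, dst of shape (N, 3), N >= 3
    (the study uses four pairs). Closed-form Kabsch/Umeyama solution."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With four marker pairs the problem is overdetermined, so the SVD solution averages out small marker-localization errors instead of fitting any one pair exactly.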
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200O (2013) https://doi.org/10.1117/12.2031538
Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of mortality and disability. The morphology and structure of carotid plaques are key to identifying plaques and monitoring the disease. Manual segmentation of ultrasound images to obtain the best-fitting actual size of carotid plaques, based on a physician's personal experience (the "gold standard"), is an important step in the study of plaque size. However, it is difficult to quantitatively measure the segmentation error introduced by the operator's subjective factors. The experiments in this paper were carried out to reduce these subjective factors and the uncertainty in quantification.
In this study, we first designed a carotid artery phantom and then simulated it with three different medical-ultrasound beam-forming algorithms. Finally, the plaque areas obtained by manual segmentation of the simulated images were analyzed. This allows us to (1) directly evaluate the effect of the different beam-forming algorithms on simulated carotid ultrasound imaging; (2) analyze the sensitivity of detection for plaques of different sizes; and (3) indirectly assess the accuracy of the manual segmentation from the evaluation of the segmentation results.
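For readers unfamiliar with the beam-forming families such a comparison covers, the simplest member, receive delay-and-sum (DAS), can be sketched as below. The geometry, speed of sound, and sampling rate are illustrative placeholders, not the study's simulation settings, and the study's three algorithms are not specified in the abstract.

```python
import numpy as np

C = 1540.0    # assumed speed of sound in tissue, m/s (illustrative)
FS = 40e6     # assumed RF sampling rate, Hz (illustrative)

def das_point(rf, elem_x, px, pz):
    """Beamform one image point (px, pz) from per-element RF traces by
    delaying each channel by its round-trip travel time and summing.
    rf: (n_elements, n_samples) array; elem_x: element x-positions (m);
    transmit is modeled as a plane wave from the array plane at z = 0."""
    out = 0.0
    for ch in range(rf.shape[0]):
        # round trip: plane wave down to depth pz, echo back to element ch
        dist = pz + np.hypot(px - elem_x[ch], pz)
        idx = int(round(dist / C * FS))          # nearest RF sample
        if 0 <= idx < rf.shape[1]:
            out += rf[ch, idx]
    return out
```

Echoes from a true scatterer add coherently across channels at its own location and incoherently elsewhere, which is what makes resolution and contrast (and hence apparent plaque size) depend on the beam-forming choice.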
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200P (2013) https://doi.org/10.1117/12.2031033
This paper proposes a segmentation method combining local and global threshold techniques to segment cell images efficiently. First, the image is divided into several parts, and the Otsu operation is applied to each of them to detect details. Second, the main body of the objects is extracted by a global threshold algorithm. Finally, the two results are combined into a more refined segmentation. Experimental results show that the algorithm performs better at recognizing details such as cell antennae, which is helpful and important in medical applications.
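The local-plus-global scheme can be sketched as follows. The Otsu computation is the standard between-class-variance maximization; the per-block split, the uniform-tile guard, and the OR combination are one plausible reading of the abstract's three steps, not the authors' exact procedure.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 image: pick the gray level that
    maximizes the between-class variance of the two resulting classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # class-0 cumulative mean
    mu_t = mu[-1]                             # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def local_then_global(gray, blocks=4):
    """Per-block Otsu to keep fine detail, OR-ed with one global
    threshold for the object body (a sketch of the combined method)."""
    detail = np.zeros(gray.shape, dtype=bool)
    for r in np.array_split(np.arange(gray.shape[0]), blocks):
        for c in np.array_split(np.arange(gray.shape[1]), blocks):
            tile = gray[np.ix_(r, c)]
            if tile.min() == tile.max():
                continue                      # uniform tile: no detail
            detail[np.ix_(r, c)] = tile > otsu_threshold(tile)
    body = gray > otsu_threshold(gray)
    return detail | body
```

The local pass adapts the threshold to faint structures such as cell antennae, while the global pass keeps the main cell bodies from fragmenting.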
Proceedings Volume MIPPR 2013: Parallel Processing of Images and Optimization and Medical Imaging Processing, 89200Q (2013) https://doi.org/10.1117/12.2031509
As an important optical molecular imaging technique, bioluminescence tomography (BLT) offers an inexpensive and
sensitive means for non-invasively imaging a variety of physiological and pathological activities at cellular and
molecular levels in living small animals. The key problem of BLT is to recover the distribution of the internal
bioluminescence sources from limited measurements on the surface. Considering the sparsity of the light source
distribution, we directly formulate the inverse problem of BLT into an l0-norm minimization model and present a
smoothed l0-norm (SL0) based reconstruction algorithm. By approximating the discontinuous l0 norm with a suitable continuous function, the SL0 method avoids both the intractable computational load of the minimal-l0 search and the high sensitivity of the l0 norm to noise. Numerical experiments on a mouse atlas demonstrate that the proposed SL0-based reconstruction method can reconstruct over the whole domain without any a priori knowledge of the source permissible region, yielding almost the same reconstruction results as l1-norm methods.
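The SL0 idea can be illustrated on the generic linear model A s = b (in BLT, A would be the system matrix mapping internal sources to boundary measurements). This is a minimal sketch of the published SL0 algorithm with default step and decay parameters, not the paper's BLT-specific implementation.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decay=0.5, mu=2.0, inner_iters=5):
    """Smoothed-l0 sparse recovery: approximate ||s||_0 by
    n - sum(exp(-s_i^2 / (2 sigma^2))) and, for a decreasing sigma
    sequence, take gradient steps on the smooth surrogate while
    projecting back onto the measurement constraint A s = b."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ b                        # minimum-l2-norm starting point
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = s * np.exp(-s ** 2 / (2.0 * sigma ** 2))
            s = s - mu * delta                    # shrink small entries
            s = s - A_pinv @ (A @ s - b)          # project onto A s = b
        sigma *= sigma_decay                      # sharpen the surrogate
    return s
```

Large sigma gives a smooth, easy landscape; shrinking sigma gradually sharpens the surrogate toward the true l0 count, which is how the method sidesteps the combinatorial l0 search.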