Purpose: Contour interpolation is an important tool for expediting manual segmentation of anatomical structures. The process allows users to contour manually on discontinuous slices and then automatically fill in the gaps, thereby saving time and effort. The most widely used conventional shape-based interpolation (SBI) algorithm, which operates on shape information alone, often performs suboptimally near the superior and inferior borders of organs and for gastrointestinal structures. In this study, we present a generic deep learning solution to improve the robustness and accuracy of contour interpolation, especially for these historically difficult cases.

Approach: A generic deep contour interpolation model was developed and trained using 16,796 publicly available cases from 5 different data libraries, covering 15 organs. The network inputs were a 128 × 128 × 5 image patch and the two-dimensional contour masks for the top and bottom slices of the patch. The outputs were the organ masks for the three middle slices. Performance was evaluated with Dice scores and distance-to-agreement (DTA) values.

Results: The deep contour interpolation model achieved a Dice score of 0.95 ± 0.05 and a mean DTA of 1.09 ± 2.30 mm, averaged over 3167 testing cases spanning all 15 organs. In comparison, the conventional SBI method yielded 0.94 ± 0.08 and 1.50 ± 3.63 mm, respectively. For the difficult cases, the Dice score and DTA were 0.91 ± 0.09 and 1.68 ± 2.28 mm for the deep interpolator, compared with 0.86 ± 0.13 and 3.43 ± 5.89 mm for SBI. t-tests confirmed that the improvements were statistically significant (p < 0.05) for all cases in Dice score and for small organs and difficult cases in DTA. Ablation studies were also performed.

Conclusions: A deep learning method was developed to enhance contour interpolation. It could be useful for expediting manual segmentation of organs and structures in medical images.
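The evaluation above rests on two slice-level metrics, Dice overlap and distance-to-agreement. As a rough illustration of those two quantities (a sketch under assumptions, not the authors' implementation), the NumPy/SciPy code below computes a Dice score and a contour-based mean DTA between a predicted and a reference binary mask; the function names and the exact DTA definition are illustrative.

```python
# Hedged sketch: Dice score and a simple distance-to-agreement (DTA) estimate
# between a predicted and a reference binary slice mask.
import numpy as np
from scipy import ndimage

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

def mean_dta_mm(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Mean distance (mm) from predicted contour pixels to the reference contour."""
    ref_c = ref.astype(bool) ^ ndimage.binary_erosion(ref.astype(bool))
    pred_c = pred.astype(bool) ^ ndimage.binary_erosion(pred.astype(bool))
    if not pred_c.any() or not ref_c.any():
        return float("nan")
    # Distance map to the reference contour, sampled at the predicted contour pixels.
    dist_to_ref = ndimage.distance_transform_edt(~ref_c, sampling=spacing)
    return float(dist_to_ref[pred_c].mean())
```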
The use of LIDAR (Light Imaging, Detection and Ranging) data for detailed terrain mapping and object recognition is becoming increasingly common. While the rendering of LIDAR imagery is expressive, there is a need for a comprehensive performance metric that presents the quality of the LIDAR image. A metric or scale for quantifying the interpretability of LIDAR point clouds would be extremely valuable to support image chain optimization, sensor design, tasking and collection management, and other operational needs. For many imaging modalities, including visible Electro-optical (EO) imagery, thermal infrared, and synthetic aperture radar, the National Imagery Interpretability Ratings Scale (NIIRS) has been a useful standard. In this paper, we explore methods for developing a comparable metric for LIDAR. The approach leverages the general image quality equation (IQE) and constructs a LIDAR quality metric based on the empirical properties of the point cloud data. We present the rationale and the construction of the metric, illustrating the properties with both measured and synthetic data.
The detection of farmland in synthetic aperture radar (SAR) images is useful for mapping the distribution of agriculture in mountainous regions, and SAR technology can help government agencies compile much-needed information for agricultural assessment. We propose a texture signature for detecting farmland in SAR images. The signature is extracted from texture pixels by fuzzy c-means clustering, where each texture pixel is a vector whose elements are the responses of normalized Gaussian derivative filters convolved with the SAR image at that spatial position. The texture signatures are then matched using the earth mover's distance to detect farmland. Finally, we propose an alternative approach to computing the true positive rate and false positive rate of the receiver operating characteristic (ROC) curve, and we use the area under the ROC curve to select the best training sample and detection threshold. Experimental results demonstrate the effectiveness of the proposed detection method.
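As a rough sketch of the kind of per-pixel texture vector described above (not the authors' code), each pixel can be represented by its responses to a small bank of Gaussian derivative filters and then clustered; the filter scales are illustrative, and plain k-means is used here as a stand-in for the fuzzy c-means step named in the abstract.

```python
# Hedged sketch: per-pixel texture vectors from Gaussian-derivative filter
# responses, clustered into candidate texture classes. Filter scales and the
# use of k-means (instead of fuzzy c-means) are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def texture_vectors(img: np.ndarray, sigmas=(1.0, 2.0, 4.0)) -> np.ndarray:
    """Stack first-order Gaussian derivative responses at several scales."""
    feats = []
    for s in sigmas:
        feats.append(gaussian_filter(img, sigma=s, order=(0, 1)))  # d/dx
        feats.append(gaussian_filter(img, sigma=s, order=(1, 0)))  # d/dy
    return np.stack(feats, axis=-1).reshape(-1, 2 * len(sigmas))

def cluster_textures(img: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Assign each pixel to a texture cluster (k-means stand-in for fuzzy c-means)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(texture_vectors(img))
    return labels.reshape(img.shape)
```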
Large LiDAR (Light Detection And Ranging) data sets are used to create depth maps of objects and geographic areas. We explored, analyzed, and optimized the suitability of image compression methods for these large LiDAR data sets. Our research interprets LiDAR data as intensity-based "depth images" and uses k-means clustering, re-indexing, and JPEG2000 to compress the data. The first step of our method applies the k-means clustering algorithm to an intensity image, creating a small index table, an index map, and a residual image. Next, we use methods from previous research to re-index the index map so as to optimize its compression with JPEG2000. Lastly, we compress both the re-indexed map and the residual image using JPEG2000, exploring both lossless and lossy compression. Experimental results show that, in general, the data can be compressed losslessly to 23% of its original size, and further still when small amounts of loss are allowed.
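The pipeline above hinges on splitting each depth image into a small index table, an index map, and a residual. A minimal sketch of that decomposition (assuming an integer depth image and scikit-learn's k-means; the re-indexing and JPEG2000 coding steps are only noted in comments) could look like this:

```python
# Hedged sketch: decompose a depth image into an index table (cluster centers),
# an index map (per-pixel cluster labels), and a residual image. The cluster
# count and dtypes are illustrative; re-indexing and JPEG2000 coding would follow.
import numpy as np
from sklearn.cluster import KMeans

def decompose_depth_image(depth: np.ndarray, n_clusters: int = 16):
    values = depth.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(values)
    index_table = km.cluster_centers_.ravel()          # small lookup table
    index_map = km.labels_.reshape(depth.shape)        # per-pixel cluster index
    residual = depth.astype(np.int32) - np.rint(index_table[index_map]).astype(np.int32)
    return index_table, index_map.astype(np.uint8), residual

def reconstruct(index_table, index_map, residual):
    """Lossless reconstruction: quantized value plus residual."""
    return np.rint(index_table[index_map]).astype(np.int32) + residual
```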
In this project, we propose to develop a prototype system that automatically reconstructs 3D scenes of the interior of a building, cave, or other structure using ground-based LIDAR scanning technology. We also develop a user-friendly, real-time visualization software package that allows users to interactively visualize, navigate, and walk through the scene from different viewing angles, zoom in and out, and so on.
This paper aims at analyzing gender differences in the 3D shapes of the lateral ventricles, which will provide a reference for the analysis of brain abnormalities related to neurological disorders. Previous studies mostly focused on volume analysis, and the main challenge in shape analysis is the required step of establishing shape correspondence among individual shapes. We developed a simple and efficient method based on anatomical landmarks. Fourteen females and ten males with matching ages participated in this study. 3D ventricle models were segmented from MR images by a semiautomatic method. Six anatomically meaningful landmarks were identified by detecting the maximum-curvature point in a small neighborhood of a manually clicked point on the 3D model. A thin-plate spline was used to transform a randomly selected template shape to each of the remaining shape instances, and point correspondence was established according to Euclidean distance and surface normal. All shapes were spatially aligned by Generalized Procrustes Analysis. The Hotelling T2 two-sample metric was used to compare ventricle shapes between males and females, and False Discovery Rate estimation was used to correct for multiple comparisons. The results revealed significant differences in the anterior horn of the right ventricle.
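As an illustration of the per-point group comparison described above (a sketch under assumptions, not the authors' implementation), the Hotelling T2 two-sample statistic for a corresponding 3D point across two groups of Procrustes-aligned shapes can be computed as follows; the variable names and the F-approximation for the p-value are standard textbook choices.

```python
# Hedged sketch: Hotelling T^2 two-sample test for one corresponding 3D point.
# x and y are (n_subjects, 3) arrays of the same landmark/vertex after alignment.
import numpy as np
from scipy import stats

def hotelling_t2_two_sample(x: np.ndarray, y: np.ndarray):
    """Return (T^2, p-value) for two samples of d-dimensional points."""
    n1, n2, d = len(x), len(y), x.shape[1]
    mean_diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance of the two groups.
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * mean_diff @ np.linalg.solve(s_pooled, mean_diff)
    # F-distribution approximation for the p-value.
    f_stat = t2 * (n1 + n2 - d - 1) / (d * (n1 + n2 - 2))
    p_value = stats.f.sf(f_stat, d, n1 + n2 - d - 1)
    return t2, p_value
```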
KEYWORDS: Shape analysis, Detection and tracking algorithms, Control systems, 3D acquisition, 3D modeling, Brain, Spherical lenses, Magnetic resonance imaging, Molybdenum, Neuroimaging
Statistical shape analysis of brain structures has gained increasing interest from the neuroimaging community because it can precisely locate shape differences between healthy and pathological structures. The most difficult and crucial problem is establishing shape correspondence among individual 3D shapes. This paper proposes a new algorithm for 3D shape correspondence. A set of landmarks is sampled on a template shape, and an initial correspondence is established between the template and the target shape based on the similarity of locations and normal directions. The landmarks on the target are then refined by an iterative thin-plate spline. The algorithm is simple and fast, and no spherical mapping is needed. We apply our method to the statistical shape analysis of the corpus callosum (CC) in phenylketonuria (PKU), and significant local shape differences between patients and controls are found in the most anterior and posterior aspects of the CC.
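The initial matching step described above pairs each template landmark with the target vertex whose position and normal are most similar. A minimal sketch of that step is shown below; the weighting between the positional and normal terms is an assumption, and the iterative thin-plate spline refinement is omitted.

```python
# Hedged sketch: initial template-to-target correspondence using a weighted
# combination of Euclidean distance and surface-normal disagreement. The
# weight alpha is illustrative; TPS refinement would follow in later iterations.
import numpy as np

def initial_correspondence(tmpl_pts, tmpl_nrm, tgt_pts, tgt_nrm, alpha=0.5):
    """For each template landmark, return the index of the best target vertex."""
    matches = np.empty(len(tmpl_pts), dtype=int)
    for i, (p, n) in enumerate(zip(tmpl_pts, tmpl_nrm)):
        dist = np.linalg.norm(tgt_pts - p, axis=1)    # positional term
        normal_gap = 1.0 - tgt_nrm @ n                # 0 when unit normals agree
        cost = dist / (dist.max() + 1e-12) + alpha * normal_gap
        matches[i] = int(np.argmin(cost))
    return matches
```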
KEYWORDS: Shape analysis, 3D modeling, Control systems, Brain, Brain mapping, Image segmentation, Magnetic resonance imaging, Sensors, Data modeling, Visualization
A number of studies have documented that autism has a neurobiological basis, but the anatomical extent of these neurobiological abnormalities is largely unknown. In this study, we aimed to analyze highly localized shape abnormalities of the corpus callosum in a homogeneous group of children with autism. Thirty patients with essential autism and twenty-four controls participated in this study. 2D contours of the corpus callosum were extracted from MR images by a semiautomatic segmentation method, and the 3D model was constructed by stacking the contours. The resulting 3D model had two openings at the ends, so a new conformal parameterization for high-genus surfaces was applied in our shape analysis, mapping each surface onto a planar domain. Surface matching among different individual meshes was achieved by re-triangulating each mesh according to a template surface. Statistical shape analysis was used to compare the 3D shapes point by point between patients with autism and their controls. The results revealed significant abnormalities in the anterior-most region and anterior body in the essential autism group.
Recent advances in imaging technologies, such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Diffusion Tensor Imaging (DTI), have accelerated brain research in many aspects. In order to better understand the synergy of the many processes involved in normal brain function, integrated modeling and analysis of MRI, PET, and DTI is highly desirable. Unfortunately, current state-of-the-art computational tools fall short of offering a comprehensive computational framework that is accurate and mathematically rigorous. In this paper we present a framework based on conformal parameterization of a brain from high-resolution structural MRI data to a canonical spherical domain. This model allows natural integration of information from co-registered PET as well as DTI data and lays the foundation for a quantitative analysis of the relationship between diverse data sets. Consequently, the system can be designed to provide a software environment able to facilitate statistical detection of abnormal functional brain patterns in patients with a large number of neurological disorders.
We propose a new implicit surface polygonalization algorithm based on front propagation. The algorithm starts from a simple seed (e.g., a triangle) that can be automatically initialized, and it always enlarges its boundary contour outwards along the tangent direction suggested by the underlying volume data. Our algorithm can conduct mesh optimization and Laplacian smoothing on the fly and generates meshes of much higher quality than the Marching Cubes algorithm. Experimental results on both real and synthetic volumetric datasets are shown to demonstrate the robustness and effectiveness of the new algorithm.
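The on-the-fly Laplacian smoothing mentioned above moves each vertex toward an average of its mesh neighbors. A generic sketch of one such smoothing pass is given below; the uniform edge weights and the damping factor are assumptions for illustration, not the paper's specific scheme.

```python
# Hedged sketch: one pass of uniform Laplacian smoothing over a triangle mesh.
# `faces` is an (F, 3) integer index array; lambda_ is an illustrative damping factor.
import numpy as np

def laplacian_smooth(vertices: np.ndarray, faces: np.ndarray, lambda_: float = 0.5):
    # Collect each undirected edge once so every neighbor is weighted equally.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    neighbor_sum = np.zeros_like(vertices)
    neighbor_cnt = np.zeros(len(vertices))
    i, j = edges[:, 0], edges[:, 1]
    np.add.at(neighbor_sum, i, vertices[j])
    np.add.at(neighbor_cnt, i, 1)
    np.add.at(neighbor_sum, j, vertices[i])
    np.add.at(neighbor_cnt, j, 1)
    centroid = neighbor_sum / np.maximum(neighbor_cnt, 1)[:, None]
    # Pull each vertex part of the way toward its neighborhood centroid.
    return vertices + lambda_ * (centroid - vertices)
```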