Shape, scale, orientation and position, the physical features associated with white matter DTI tracts, can, either individually or in combination, be used to define feature spaces. Recent work by Mani et al.1 describes a Riemannian framework in which these joint feature spaces are considered. In this paper, we use the tools and metrics defined within this mathematical framework to study morphological changes due to disease progression. We look at sections of the anterior corpus callosum, which describes a deep arc along the mid-sagittal plane, and show how multiple sclerosis and normal control populations have different joint shape-orientation signatures.
In this paper we present a method for clustering and classification of acoustic color data based on statistical
analysis of functions using square-root velocity functions (SRVF). The convenience of the SRVF is that it transforms
the Fisher-Rao metric into the standard L2 metric. As a result, a formal distance can be calculated using
geodesic paths. Moreover, this method allows optimal deformations between acoustic color data to be computed
for any two targets, allowing for robustness to measurement error. Using the SRVF formulation, statistical models
can then be constructed using principal component analysis to model the functional variation of acoustic color
data. Empirical results demonstrate the utility of functional data analysis for improving performance results in
pattern recognition using acoustic color data.
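As a minimal illustration of the SRVF idea described above, the sketch below maps two discretized functions (stand-ins for acoustic color curves sampled on a common grid) into their square-root velocity representations, where the Fisher-Rao metric reduces to the flat L2 metric, and computes the resulting distance. The function names and sample data are illustrative, not from the paper.

```python
import numpy as np

def srvf(f, t):
    """Map a sampled function f(t) to its square-root velocity representation
    q(t) = f'(t) / sqrt(|f'(t)|)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def srvf_distance(f1, f2, t):
    """Under the SRVF map the Fisher-Rao metric becomes the standard L2 metric,
    so the geodesic distance is the L2 norm of the difference of the SRVFs."""
    q1, q2 = srvf(f1, t), srvf(f2, t)
    dt = t[1] - t[0]                      # uniform grid assumed
    return np.sqrt(np.sum((q1 - q2) ** 2) * dt)

# Example: distance between two shifted sinusoids on [0, 1]
t = np.linspace(0.0, 1.0, 256)
f1 = np.sin(2 * np.pi * t)
f2 = np.sin(2 * np.pi * (t + 0.05))
print(srvf_distance(f1, f2, t))
```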
The use of joint shape analysis of multiple anatomical parts is a promising area of research with applications in
medical diagnostics, growth evaluations, and disease characterizations. In this paper, we consider several features
(shapes, orientations, scales, and locations) associated with anatomical parts and develop probability models that
capture interactions between these features and across objects. The shape component is based on elastic shape
analysis of continuous boundary curves. The proposed model is a second-order model that considers principal
coefficients in tangent spaces of joint manifolds as multivariate normal random variables. Additionally, it models
interactions across objects using area-interaction processes. Using observations of four anatomical parts (caudate,
hippocampus, putamen, and thalamus) on one side of the brain, we first estimate the model parameters
and then generate random samples from them using the Metropolis-Hastings algorithm. The plausibility of these
random samples validates the proposed models.
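The sketch below is a minimal Metropolis-Hastings sampler of the kind used to generate random samples in this framework, treating the principal coefficients as multivariate normal variables. The dimension, mean, and covariance are placeholders, and the full joint model (shape, orientation, scale, location, plus area-interaction terms across objects) is reduced here to a single Gaussian log-density for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                       # number of principal coefficients (assumed)
mu = np.zeros(d)            # estimated mean (placeholder)
cov = np.eye(d)             # estimated covariance (placeholder)
cov_inv = np.linalg.inv(cov)

def log_target(x):
    # Multivariate normal log-density up to a constant; an area-interaction
    # penalty across objects would be added here in the full model.
    diff = x - mu
    return -0.5 * diff @ cov_inv @ diff

def metropolis_hastings(n_samples, step=0.5):
    x = mu.copy()
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal(d)   # random-walk proposal
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:          # accept/reject
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

samples = metropolis_hastings(2000)
print(samples.mean(axis=0))   # should be close to mu for this toy target
```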
We investigate the use of a recent technique for shape analysis of brain substructures in identifying learning disabilities
in third-grade children. This Riemannian technique provides a quantification of differences in shapes of parameterized
surfaces, using a distance that is invariant to rigid motions and re-parameterizations. Additionally, it provides an optimal
registration across surfaces for improved matching and comparisons. We utilize an efficient gradient-based method to
obtain the optimal re-parameterizations of surfaces. In this study we consider 20 different substructures in the human
brain and correlate the differences in their shapes with abnormalities manifested in deficiency of mathematical skills in
106 subjects. The selection of these structures is motivated in part by the past links between their shapes and cognitive
skills, albeit in broader contexts. We have studied the use of both individual substructures and multiple structures jointly
for disease classification. Using a leave-one-out nearest neighbor classifier, we obtained a 62.3% classification rate based
on the shape of the left hippocampus. The use of multiple structures resulted in an improved classification rate of 71.4%.
We propose a novel Riemannian framework for analyzing orientation distribution functions (ODFs) in HARDI
data sets, for use in comparing, interpolating, averaging, and denoising ODFs. A recently used Fisher-Rao
metric does not provide physically feasible solutions, and we suggest a modification that removes orientations
from ODFs and treats them as separate variables. This way a comparison of any two ODFs is based on separate
comparisons of their shapes and orientations. Furthermore, this provides an explicit orientation at each voxel
for use in tractography. We demonstrate these ideas by computing geodesics between ODFs and Karcher means of ODFs, for both the original Fisher-Rao framework and the proposed framework.
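As a minimal sketch of the geodesic and Karcher-mean computations mentioned above, the code below uses the standard square-root representation, where each ODF psi = sqrt(p) lies on the unit Hilbert sphere and the Fisher-Rao metric reduces to the spherical metric; the proposed shape/orientation decomposition is not reproduced here. ODFs are assumed to be discretized on a common set of directions, and the data are synthetic.

```python
import numpy as np

def sphere_log(psi1, psi2):
    """Inverse exponential map: tangent vector at psi1 pointing toward psi2."""
    cos_theta = np.clip(np.dot(psi1, psi2), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros_like(psi1)
    return theta / np.sin(theta) * (psi2 - cos_theta * psi1)

def sphere_exp(psi, v):
    """Exponential map on the unit sphere (geodesic shooting)."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return psi
    return np.cos(norm_v) * psi + np.sin(norm_v) * v / norm_v

def karcher_mean(psis, n_iter=50, step=1.0):
    """Iteratively average tangent vectors and shoot back with the exp map."""
    mean = psis[0].copy()
    for _ in range(n_iter):
        v = np.mean([sphere_log(mean, p) for p in psis], axis=0)
        mean = sphere_exp(mean, step * v)
    return mean

# Toy example: three random "square-root ODFs" on a 64-direction grid
rng = np.random.default_rng(2)
psis = [np.abs(rng.standard_normal(64)) for _ in range(3)]
psis = [p / np.linalg.norm(p) for p in psis]
print(np.linalg.norm(karcher_mean(psis)))   # remains (approximately) on the sphere
```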
This paper develops a framework for predicting IR images of a target, in a partially observed thermal state, using known geometry and past IR images. The thermal states of the target are represented via scalar temperature fields. The prediction task becomes that of estimating the unobserved parts of the field, using the observed parts and the past patterns. The estimation is performed using regression models for relating the temperature variables, at different points on the target's surface, across different thermal states. A linear regression model is applied and some preliminary experimental results are presented using a laboratory target and a hand-held IR camera. Extensions to piecewise-linear and nonlinear models are proposed.
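The linear-regression step described above can be sketched as follows: past IR images give temperature values at all surface points, and for a new, partially observed thermal state the unobserved points are regressed on the observed ones. The index sets, dimensions, and data are placeholders, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, n_points = 30, 100                 # past thermal states x surface points
T_past = rng.standard_normal((n_states, n_points))

observed = np.arange(0, 60)                  # indices observed in the new state
unobserved = np.arange(60, n_points)

# Least-squares fit: T_unobs ~ W @ T_obs + b, learned from past thermal states
X = np.hstack([T_past[:, observed], np.ones((n_states, 1))])   # add intercept
Y = T_past[:, unobserved]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict the unobserved part of a new, partially observed temperature field
T_new_obs = rng.standard_normal(len(observed))
T_new_unobs_hat = np.hstack([T_new_obs, 1.0]) @ coef
print(T_new_unobs_hat.shape)
```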
Automated target recognition (ATR) is a problem of great importance in a wide variety of applications: from military target recognition to recognizing flow patterns in fluid dynamics to anatomical shape studies. The basic goal is to utilize observations (images, signals) from remote sensors (such as video, radar, MRI, or PET) to identify the objects being observed. In a statistical framework, probability distributions on parameters representing the object unknowns are derived and analyzed to compute inferences (please refer to [1] for a detailed introduction). An important challenge in ATR is to determine efficient mathematical models for the tremendous variability of object appearance that lend themselves to reasonable inferences. This variation may be due to differences in object shapes, sensor mechanisms, or scene backgrounds. To build models for object variabilities, we employ deformable templates. In brief, object occurrences are described through their typical representatives (called templates) and transformations/deformations that particularize the templates to the observed objects. Within this pattern-theoretic framework, ATR becomes a problem of selecting appropriate templates and estimating deformations. For an object α ∈ A, let I^α denote a template (for example, a triangulated CAD surface) and let s ∈ S be a particular transformation; the transformed template is then denoted by sI^α. Figure 1 shows instances of the template for a T62 tank at several different orientations. For the purpose of object classification, the unknown transformation s is considered a nuisance parameter, leading to a classical formulation of Bayesian hypothesis testing in the presence of unknown, random nuisance parameters. S may not be a vector space, but it often has a group structure. For rigid objects, the variation in translation and rotation can be modeled through the action of the special Euclidean group SE(n). For flexible objects, such as anatomical shapes, higher-dimensional groups such as diffeomorphisms are utilized.
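The rigid-template case can be sketched as below: a transformation s = (R, t) in SE(3) acting on a template represented as a point cloud (for example, vertices of a triangulated CAD surface). The template here is synthetic and the function names are illustrative.

```python
import numpy as np

def se3_action(points, R, t):
    """Apply the special Euclidean transformation x -> R x + t to each point."""
    return points @ R.T + t

def rotation_z(theta):
    """Rotation about the z-axis, e.g. a ground vehicle changing heading."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

template = np.random.default_rng(4).standard_normal((500, 3))  # stand-in CAD vertices
transformed = se3_action(template, rotation_z(np.pi / 6), np.array([10.0, 2.0, 0.0]))
print(transformed.shape)
```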
Tracking of target pose is important for ATR in situations where there is relative motion between the targets and the sensor(s). Taking a Bayesian approach, we formulate the problem of jointly tracking the target positions and orientations as a problem in nonlinear filtering. Combining pertinent ideas from importance sampling and sequential methods, we apply an iterative Monte Carlo approach to compute MMSE solutions. This tracking algorithm is demonstrated for tracking individual targets in a simulated environment.
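A minimal sequential importance-resampling sketch of such a nonlinear filter is given below, tracking a single scalar pose angle; the random-walk dynamics and Gaussian measurement model are illustrative stand-ins rather than the paper's models, and the MMSE estimate is taken as the weighted posterior mean.

```python
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps = 500, 50
process_std, meas_std = 0.05, 0.2

true_theta = 0.0
particles = rng.normal(0.0, 0.5, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    # Simulate the true target and a noisy observation of its orientation
    true_theta += rng.normal(0.0, process_std)
    z = true_theta + rng.normal(0.0, meas_std)

    # Propagate particles through the (assumed) random-walk dynamics
    particles += rng.normal(0.0, process_std, n_particles)

    # Importance weights from the Gaussian measurement likelihood
    weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()

    # MMSE estimate is the weighted posterior mean
    theta_mmse = np.sum(weights * particles)

    # Resample to avoid weight degeneracy
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

print(true_theta, theta_mmse)
```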
We have been studying information-theoretic measures, entropy and mutual information, as performance metrics for object recognition given a standard suite of sensors. Our work has focused on performance analysis for the pose estimation of ground-based objects viewed remotely via a standard sensor suite. Target pose is described by a single angle of rotation using a Lie group parameterization: O ∈ SO(2), the group of 2 × 2 rotation matrices. Variability in the data due to the sensor by which the scene is observed is statistically characterized via the data likelihood function. Taking a Bayesian approach, the inference is based on the posterior density, constructed as the product of the data likelihood and the prior density for object pose. Given multiple observations of the scene, sensor fusion is automatic in the joint likelihood component of the posterior density. The Bayesian approach is consistent with the source-channel formulation of the object recognition problem, in which parameters describing the sources (objects) in the scene must be inferred from the output (observation) of the remote sensing channel. In this formulation, mutual information is a natural performance measure. In this paper we consider the asymptotic behavior of these information measures as the signal-to-noise ratio (SNR) tends to infinity. We focus on the posterior entropy of the object rotation angle conditioned on image data. We consider single- and multiple-sensor scenarios and present quadratic approximations to the posterior entropy. Our results indicate that for broad ranges of SNR, low-dimensional posterior densities in object recognition estimation scenarios are accurately modeled asymptotically.
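The posterior-entropy computation can be sketched numerically as below: a discretized posterior over the rotation angle is formed from a data likelihood and a uniform prior, and its entropy is evaluated as the SNR grows. The likelihood model here is a simplified stand-in, not the paper's sensor model.

```python
import numpy as np

def posterior_entropy(snr, theta_true=0.3, n_grid=720):
    theta = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    # Simplified log-likelihood: match quality falls off with angular error,
    # scaled by SNR (the prior on theta is uniform, so it drops out).
    log_like = -0.5 * snr * (1.0 - np.cos(theta - theta_true))
    log_like -= log_like.max()
    post = np.exp(log_like)
    post /= post.sum() * (2 * np.pi / n_grid)          # normalize the density
    p = post * (2 * np.pi / n_grid)                    # probability mass per bin
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

for snr in [1, 10, 100, 1000]:
    print(snr, posterior_entropy(snr))   # entropy shrinks roughly like -0.5*log(SNR)
```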
Estimating the pose and location of a detected target is an integral part of the target recognition process. In this paper we address the problem of estimating these parameters using images collected via stationary or moving sensors. Taking a Bayesian approach, we define a posterior on the special Euclidean group, which models the target orientation and position, and an optimal estimator in the minimum mean squared error sense. In addition, we derive an achievable lower bound on the estimation errors, independent of any algorithm, and analyze this bound by varying the sensor noise. This bound provides a tool for studying the algorithmic performance versus resources trade-off in multi-sensor, multi-frame applications.
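A minimal sketch of the MMSE-on-SE(2) idea is given below: from posterior samples of orientation and 2D position, the translation part is averaged in the usual Euclidean sense, while rotations are averaged extrinsically in the space of 2 × 2 matrices and projected back to SO(2). The posterior samples are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
thetas = rng.normal(0.4, 0.1, 1000)                    # posterior samples of orientation
positions = rng.normal([5.0, -2.0], 0.3, (1000, 2))    # posterior samples of position
weights = np.full(1000, 1e-3)                          # equal weights for this sketch

# Translation part: ordinary weighted mean (Euclidean MMSE estimate)
t_hat = (weights[:, None] * positions).sum(axis=0)

# Rotation part: average the rotation matrices, then project onto SO(2)
R_samples = np.stack([[[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]] for t in thetas])
R_bar = (weights[:, None, None] * R_samples).sum(axis=0)
U, _, Vt = np.linalg.svd(R_bar)
R_hat = U @ np.diag([1.0, np.linalg.det(U @ Vt)]) @ Vt   # nearest rotation matrix

print(t_hat, np.degrees(np.arctan2(R_hat[1, 0], R_hat[0, 0])))
```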
In this paper, we describe the automatic damage assessment process we developed to assess the extent of battle damage based on laser radar (Ladar) images taken before and after a strike. The process is composed of three modules: the image registration module, the damage isolation module, and the damage assessment module. In the image registration module, distortion in the raw range data was first compensated to obtain the actual heights of the objects. Then, Ladar images taken before and after the strike were aligned according to their pixel intensities. In the damage isolation module, changes between the two sets of images were compared to isolate the locations of the actual damage. Factors such as sensor noise, sensor perspective difference, debris, and movement of vehicles, which all contribute to changes in the images, were automatically discounted in this module. The approximate location of the damage, if it existed, was passed to the damage assessment module to determine the extent of the damage using a region-growing technique. This process enables fast and accurate evaluation of a strike with as little human supervision as possible.
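The region-growing step of the damage assessment module can be sketched as below: starting from the approximate damage location, the region grows over pixels whose before/after height change exceeds a threshold. The difference image is synthetic, and the threshold and 4-connectivity are illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(diff, seed, threshold):
    """4-connected region growing on a 2D height-difference image."""
    h, w = diff.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c] or diff[r, c] < threshold:
            continue
        mask[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# Synthetic before/after height maps with a "crater" near (32, 32)
rng = np.random.default_rng(7)
before = rng.normal(0.0, 0.05, (64, 64))
after = before.copy()
rr, cc = np.ogrid[:64, :64]
after[(rr - 32) ** 2 + (cc - 32) ** 2 < 60] -= 2.0   # 2 m height loss
damage = region_grow(before - after, seed=(32, 32), threshold=1.0)
print(damage.sum(), "pixels flagged as damaged")
```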
In the work described herein, Bayesian Pattern Theory is used to formulate the overall ATR problem as the optimization of a single objective function over the parameters to be estimated. Thus, all image understanding operations are realized naturally, automatically, and consistently as byproducts of a large-scale stochastic optimization process. The work begins with a derivation of the Bayesian cost function by deriving a posterior probability distribution on the space of pose parameters and solves the optimization problem with respect to this posterior. Two noise models were considered in the derivation of the cost function: the first is the commonly used Gaussian model, and the second, considering that a SAR image is complex, is a Rician model. In order to test the robustness of the algorithm with respect to target types and adverse background conditions, four cases were constructed: Case (1), Gaussian noise was used and a Gaussian noise model was used in classification; Case (2), Rician noise was used and a Gaussian noise model was used in classification; Case (3), Rician noise was used and a Rician noise model was used in classification; and Case (4), MSTAR clutter was used. For each case, we compute the probability of detection as a function of SNR. We obtained very good results for Case (1); however, the results at very low SNR may be unrealistic because the Gaussian noise assumptions are not accurate. As expected, the results for Case (2) were poor, while the results for Case (3) were good. Compared to Case (1), the Case (3) results are more reliable because of the representative Rician noise model. The results for Case (4) were also good. These results were also independently confirmed by Bayes error analysis.
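The two pixel noise models contrasted in the cases above can be sketched as log-likelihoods: a Gaussian model and a Rician model, the latter appropriate for SAR magnitude data. The parameter values below are illustrative, and the functions stand in for the per-pixel terms of the full cost function.

```python
import numpy as np
from scipy.special import i0e   # exponentially scaled Bessel I0, for numerical stability

def gaussian_loglike(x, mean, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mean) / sigma) ** 2

def rician_loglike(x, nu, sigma):
    """log of the Rician density (x/sigma^2) exp(-(x^2+nu^2)/(2 sigma^2)) I0(x nu / sigma^2)."""
    z = x * nu / sigma**2
    return (np.log(x / sigma**2) - (x**2 + nu**2) / (2 * sigma**2)
            + np.log(i0e(z)) + z)      # log I0(z) = log(i0e(z)) + z

# Compare per-pixel log-likelihoods of a bright SAR return under both models
x = np.linspace(0.1, 5.0, 5)
print(gaussian_loglike(x, mean=2.0, sigma=0.5))
print(rician_loglike(x, nu=2.0, sigma=0.5))
```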
There are two basic unknowns in ATR: (1) target types, and (2) parametric representations of their occurrence (such as pose, location, thermal profile, etc.). This paper addresses the question of what metrics can be used (1) for optimization in parameter space, and (2) for analyzing target recognition performance.
Our work has focused on deformable template representations of geometric variability in automatic target recognition (ATR). Within this framework we have proposed the generation of conditional mean estimates of the pose of ground-based targets remotely sensed via forward-looking infrared (FLIR) systems. Using the rotation group parameterization of the orientation space and a Bayesian estimation framework, conditional mean estimators are defined on the rotation group, with minimum mean squared error (MMSE) performance bounds calculated accordingly. This paper focuses on the accommodation of thermodynamic variation. Our new approach relaxes assumptions on the target's underlying thermodynamic state, representing the thermodynamic state as a scalar field. Estimation within the deformable template setting poses geometric and thermodynamic variation as a joint inference. MMSE pose estimators for geometric variation are derived, demonstrating the 'cost' of accommodating thermodynamic variability. Performance is quantitatively examined, and simulations are presented.
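One way to sketch the joint geometric-thermodynamic inference is shown below: for each candidate orientation, the (assumed linear) thermal-state coefficients are profiled out by least squares, the residuals define a posterior over pose, and the conditional mean of the angle is taken. The forward model here is a synthetic stand-in for a FLIR projection, not the paper's renderer, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n_pose, n_pix, n_coef = 360, 200, 4
thetas = np.linspace(-np.pi, np.pi, n_pose, endpoint=False)

# Stand-in for the pose-dependent linear maps from thermal-state coefficients
# to FLIR pixel intensities (one matrix per candidate orientation)
B_all = rng.standard_normal((n_pose, n_pix, n_coef))

# Simulate one observation at a true pose with unknown thermal coefficients
i_true, alpha_true, sigma = 100, rng.standard_normal(n_coef), 0.1
img = B_all[i_true] @ alpha_true + sigma * rng.standard_normal(n_pix)

# Profile out the thermal coefficients at every candidate pose by least squares
log_post = np.empty(n_pose)
for i in range(n_pose):
    alpha_hat, *_ = np.linalg.lstsq(B_all[i], img, rcond=None)
    log_post[i] = -0.5 / sigma**2 * np.sum((img - B_all[i] @ alpha_hat) ** 2)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Conditional mean (MMSE) estimate of the orientation via its circular mean
theta_hat = np.arctan2(np.sum(post * np.sin(thetas)), np.sum(post * np.cos(thetas)))
print(thetas[i_true], theta_hat)
```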
Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It comprises both the infinitesimal, local search, via a sample-path-continuous diffusion transform, and the larger, global steps, through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to the sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
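The FFT-based detection step can be sketched as below: a matched-filter correlation of a target signature with the sensor array data is evaluated over all shifts of a discretized lattice via the fast Fourier transform, and the peak of the statistic would seed a new track in the jump move. The scene and template are synthetic placeholders, not the paper's array models.

```python
import numpy as np

rng = np.random.default_rng(9)
scene = rng.normal(0.0, 1.0, (256, 256))       # sensor array data (noise)
template = np.ones((8, 8))                     # crude target signature
scene[120:128, 40:48] += 3.0                   # embed a target at (120, 40)

# Correlation over all lattice shifts via FFT (circular cross-correlation)
pad = np.zeros_like(scene)
pad[:8, :8] = template
stat = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))))

# A jump move would add a track seed at the lattice cell with the largest statistic
peak = np.unravel_index(np.argmax(stat), stat.shape)
print(peak)    # expected near (120, 40)
```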