Medical image registration has for many years been dominated by techniques that rely on expert annotations while leaving widely available unlabelled data unused. Deep unsupervised architectures exploit this unlabelled data to model the anatomically induced patterns in a dataset. The Deformable Auto-encoder (DAE), an unsupervised group-wise registration technique, generates a deformed reconstruction of an input image and subsequently a global template that captures the deformations in a medical dataset. DAEs, however, have a significant weakness in propagating global information over long-range dependencies, which can degrade registration performance both quantitatively and qualitatively. Our proposed method captures valuable knowledge across the whole spatial dimension using an attention mechanism. We present the Deformable Auto-encoder Attention Relu Network (DA-AR-Net), an integration of Attention Relu (Arelu), an attention-based activation function, into the DAE framework. The template image is detached from the deformation field by encoding the spatial information into two separate latent code representations. Each latent code is followed by a separate decoder network, while a single encoder is used for feature encoding. Our DA-AR-Net is formalized after an extensive and systematic search across various hyper-parameters: the initial setting of the learnable parameters of Arelu, the appropriate positioning of Arelu, the latent code dimensions, and the batch size. Our best architecture shows a significant improvement of 42% in MSE score over previous DAEs, and a 32% reduction is attained while generating visually sharp global templates.
KEYWORDS: Image registration, Medical imaging, Data modeling, Computer programming, 3D modeling, Neuroimaging, Magnetic resonance imaging, Image restoration, Brain, 3D scanning
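The Arelu activation mentioned above can be sketched as follows. This is a minimal numpy sketch assuming the published AReLU formulation, in which a clamped learnable scalar α scales negative inputs and positive inputs are amplified by 1 + sigmoid(β); the variant used inside DA-AR-Net may differ in detail.

```python
import numpy as np

def arelu(x, alpha=0.9, beta=2.0):
    """Attention-based ReLU (AReLU)-style activation (sketch).

    alpha (clamped to [0.01, 0.99]) scales negative inputs;
    positive inputs are amplified by (1 + sigmoid(beta)).
    In a network, alpha and beta would be learnable scalars.
    """
    alpha = np.clip(alpha, 0.01, 0.99)
    gain = 1.0 + 1.0 / (1.0 + np.exp(-beta))  # 1 + sigmoid(beta)
    return np.where(x >= 0, gain * x, alpha * x)
```

With alpha=0.5 and beta=0.0, an input of -1 maps to -0.5 and an input of 1 maps to 1.5, so the negative branch attenuates while the positive branch attends and amplifies.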
Robust groupwise registration methods are important for the analysis of large medical image datasets. We build upon the concept of deforming autoencoders, which decouple shape and appearance to represent anatomical variability in a robust and plausible manner. In this work we propose a deep learning model that is trained to generate templates and deformation fields. It employs a joint encoder block, which provides latent representations for both shape and appearance, followed by two independent shape and appearance decoder paths. The model achieves image reconstruction by warping the template provided by the appearance decoder with the warping field estimated by the shape decoder. By restricting the embedding to a low-dimensional latent code, we are able to obtain meaningful deformable templates. Our objective function ensures smooth and realistic deformation fields. It contains an invertibility loss term, which is novel for deforming autoencoders and induces backward consistency: warping the reconstructed image with the deformation field should ideally result in the template, and warping the template with the reversed deformation field should ideally produce the reconstructed image. We demonstrate the potential of our approach for two- and three-dimensional medical image data by training and evaluating it on labeled MRI brain scans. We show that adding the inverse-consistency penalty to the objective function leads to improved and more robust registration results. When evaluated on unseen data with expert labels for accuracy estimation, our three-dimensional model achieves substantially increased Dice scores, by 5 percentage points.
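The backward-consistency idea behind the invertibility loss can be illustrated on one-dimensional displacement fields. This is a toy sketch under our own simplifications (1D fields, linear interpolation, function names ours); the actual model penalizes 2D/3D fields inside the network graph.

```python
import numpy as np

def inverse_consistency_loss(u, v):
    """Backward-consistency penalty for 1D displacement fields (sketch).

    u: forward displacement field, v: backward displacement field.
    If v truly inverts u, then u(x) + v(x + u(x)) == 0 everywhere,
    so the mean squared composition measures how far v is from
    undoing u.
    """
    x = np.arange(len(u), dtype=float)
    # sample v at the warped positions x + u(x) by linear interpolation
    v_warped = np.interp(x + u, x, v)
    return np.mean((u + v_warped) ** 2)
```

A constant forward shift of +0.5 paired with a constant backward shift of -0.5 yields zero loss, while a backward field of zeros leaves the full forward displacement unpenalized in the composition and the loss grows accordingly.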
Dynamic Contrast Enhanced MRI (DCE-MRI) is increasingly used as a method for studying the tumor vasculature. It also serves as a biomarker to evaluate the response to anti-angiogenic therapies and the efficacy of a therapy. The uptake of contrast in the tissue is analyzed using pharmacokinetic models to understand the perfusion characteristics and cell structure, which are indicative of tumor proliferation. However, in most of these 4D acquisitions the time required for the complete scan is quite long, as sufficient time must be allowed for the passage of contrast medium from the vasculature to the tumor interstitium and its subsequent extraction. Patient motion during such long scans is one of the major challenges that hamper automated and robust quantification. A system that could automatically detect whether motion has occurred during the acquisition would be extremely beneficial. Patient motion observed during such 4D acquisitions often consists of rapid shifts, typically due to involuntary actions such as coughing, sneezing, peristalsis, or a jerk due to discomfort. Detecting such abrupt motion would help decide on a course of action for motion correction, such as eliminating time frames affected by motion from the analysis, employing a registration algorithm, or even considering the exam unanalyzable. In this paper a new technique is proposed for the effective detection of motion in 4D medical scans by determining the variation in signal characteristics from multiple regions of interest across time. This approach offers a robust, powerful, yet simple technique to detect motion.
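The ROI-signal idea above can be sketched as follows. The pooling of frame-to-frame changes over ROIs follows the abstract, but the robust z-score thresholding and the `z_thresh` value are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def detect_abrupt_motion(roi_signals, z_thresh=4.0):
    """Flag time frames with abrupt motion in a 4D series (sketch).

    roi_signals: array of shape (n_rois, n_frames) holding the mean
    intensity of each region of interest over time. Frames whose
    frame-to-frame change, pooled over ROIs, is an outlier relative
    to the typical temporal variation are flagged.
    """
    diffs = np.abs(np.diff(roi_signals, axis=1))  # (n_rois, n_frames-1)
    score = diffs.mean(axis=0)                    # pooled change per transition
    med = np.median(score)
    mad = np.median(np.abs(score - med)) + 1e-9   # robust spread estimate
    z = (score - med) / (1.4826 * mad)            # approx. standardized score
    return np.where(z > z_thresh)[0] + 1          # indices of affected frames
```

On a smoothly enhancing signal with a sudden 5-unit jump at frame 10 in every ROI, the function flags exactly that frame, while the gradual contrast uptake itself stays below threshold.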
This paper presents a feasibility and evaluation study for using 2D ultrasound in conjunction with our statistical deformable bone model in the scope of computer-assisted surgery (CAS). The final aim is to provide the surgeon with an enhanced 3D visualization for surgical navigation in orthopaedic surgery without the need for preoperative CT or MRI scans. We unified our earlier work to combine several automatic methods for statistical bone shape prediction from a sparse set of surface points, and ultrasound segmentation and calibration to provide the intended rapid and accurate visualization. We compared the use of a tracked digitizing pointer to ultrasound to acquire landmarks and bone surface points for the estimation of two cast proximal femurs, where two users performed the experiments 5-6 times per scenario. The concept of CT-based error introduced in the paper is used to give an approximate quantitative value to the best hoped-for prediction error, or lower-bound error, for a given anatomy. The conclusions of this work were that the pointer-based approach produced good results, and although the ultrasound-based approach performed considerably worse on average, there were several cases where the results were comparable to the pointer-based approach. It was determined that the primary factor for poor ultrasound performance was the inaccurate localization of the three initial landmarks, which are used for the statistical shape model.
KEYWORDS: Error analysis, Data modeling, 3D modeling, Reconstruction algorithms, Surgery, 3D image enhancement, Bone, Mahalanobis distance, 3D image processing, Databases
Constructing anatomical shape from sparse information is a challenging task. A priori information is often required to handle this otherwise ill-posed problem. In this paper, the problem is formulated as a three-stage optimal estimation process using an a priori dense surface point distribution model (DS-PDM). The DS-PDM itself is constructed from an already-aligned training shape set using Loop subdivision and provides a dense and smooth description of all a priori training shapes. Its application to anatomical shape reconstruction facilitates all three stages as follows. The first stage, registration, iteratively estimates the scale and the six-dimensional (6D) rigid transformation between the mean shape of the DS-PDM and the input points using the iterative closest point (ICP) algorithm. Owing to the dense description of the mean shape, a simple point-to-point distance is used to speed up the search for closest point pairs. The second stage, morphing, optimally and robustly estimates a dense patient-specific template surface from the DS-PDM using Mahalanobis-distance-based regularization. The estimated dense patient-specific template surface is then fed to the third stage, deformation, which uses a newly formulated kernel-based regularization to further reduce the reconstruction error. The proposed method is especially useful for accurate and stable surface reconstruction from sparse information when only a small number of a priori training shapes are available. It has been successfully tested on anatomical shape reconstruction of femoral heads using only dozens of sparse points, yielding very promising results.
KEYWORDS: 3D modeling, Data modeling, Surgery, Statistical modeling, Principal component analysis, Bone, Mahalanobis distance, 3D visualizations, Visualization, Medical imaging
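The first, registration, stage described above alternates closest-point matching with a rigid refit. This is a toy numpy sketch of ICP with a Kabsch (SVD-based) rigid solve; the scale estimation and the DS-PDM machinery of the paper are omitted, and the brute-force matching stands in for whatever search structure an implementation would use.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation/translation mapping paired src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (no reflection)
    return R, cd - R @ cs

def icp(src, model, iters=20):
    """Toy ICP: alternate closest-point matching and rigid refitting.

    src: sparse digitized points, model: dense mean-shape vertices.
    A dense model is what makes plain point-to-point matching viable.
    """
    cur = src.copy()
    for _ in range(iters):
        # brute-force point-to-point closest pairs
        d = ((cur[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d.argmin(1)]
        R, t = rigid_fit(cur, matched)
        cur = cur @ R.T + t
    return cur
```

For a small rigid perturbation of a well-separated point set, the first matching step already recovers the true correspondences, after which the Kabsch solve aligns the points exactly.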
The use of three-dimensional models in planning and navigating computer-assisted surgeries is now well established. These models provide intuitive visualization to the surgeons, contributing to significantly better surgical outcomes. Models obtained from specifically acquired CT scans have the disadvantage that they expose the patient to a high radiation dose. In this paper we propose a novel and stable method to construct a patient-specific model that provides an appropriate intra-operative 3D visualization without the need for pre- or intra-operative imaging. Patient-specific data consist of digitized landmarks and surface points that are obtained intra-operatively. The 3D model is reconstructed by fitting a statistical deformable model to the minimal sparse digitized data. The statistical model is constructed using Principal Component Analysis from training objects. Our morphing scheme efficiently and accurately computes a Mahalanobis-distance-weighted least-squares fit of the deformable model to the 3D data by solving a linear equation system. Relaxing the Mahalanobis distance term as additional points are incorporated enables our method to handle small and large sets of digitized points efficiently. Our novel incorporation of M-estimator-based weighting of the digitized points enables us to effectively reject outliers and compute stable models. Normalization of the input model data and the digitized points makes our method size invariant and hence directly applicable to any anatomical shape. The method also allows the incorporation of non-spatial data such as patient height and weight. The predominant applications are hip and knee surgeries.
KEYWORDS: Bone, Principal component analysis, 3D modeling, Statistical modeling, Visual process modeling, Visualization, Shape analysis, Data modeling, Surgery, Space operations
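The Mahalanobis-weighted least-squares fit described above reduces to one linear solve for the PCA mode coefficients. This sketch assumes a flattened-coordinate shape representation and a single relaxation weight `rho` standing in for the relaxable Mahalanobis term; the M-estimator point weights and normalization of the paper are omitted.

```python
import numpy as np

def fit_pca_shape(mean, modes, eigvals, obs_idx, obs_pts, rho=1.0):
    """Mahalanobis-regularized least-squares fit of a PCA shape model (sketch).

    Solves for mode coefficients b minimizing
        ||A b - y||^2 + rho * sum_k b_k^2 / eigvals_k,
    where A stacks the eigenmode rows at the digitized (observed)
    coordinates and y is the residual of the observations to the
    mean shape. Shrinking rho as more points arrive relaxes the
    statistical prior in favor of the data.
    """
    A = modes[obs_idx]                        # (n_obs, n_modes)
    y = obs_pts - mean[obs_idx]               # residual to the mean shape
    lhs = A.T @ A + rho * np.diag(1.0 / eigvals)
    b = np.linalg.solve(lhs, A.T @ y)         # one linear system, as in the paper
    return mean + modes @ b                   # full reconstructed shape
```

Given observations generated from the model itself and a tiny `rho`, the solve recovers the true coefficients and hence extrapolates the unobserved coordinates of the full shape.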
This paper addresses the problem of extrapolating an extremely sparse three-dimensional set of digitized landmarks and bone surface points to obtain a complete surface representation. The extrapolation is done using a statistical principal component analysis (PCA) shape model similar to earlier approaches by Fleute et al. This extrapolation procedure, called Bone-Morphing, is highly useful for intra-operative visualization of bone structures in image-free surgeries. We developed a novel morphing scheme that operates directly in the PCA shape space and incorporates the full set of possible variations, including additional information such as patient height, weight, and age. Shape information coded by the digitized points is iteratively removed from the PCA model. The extrapolated surface is computed as the most probable surface in the shape space given the data. Interactivity is enhanced, as additional bone surface points can be incorporated in real time. The expected accuracy can be visualized at any stage of the procedure. In a feasibility study, we applied the proposed scheme to the proximal femur structure. 14 CT scans were segmented, and a sequence of correspondence-establishing methods was employed to compute the optimal PCA model. Three anatomical landmarks, the femoral notch and the upper and lower trochanters, are digitized to register the model to the patient anatomy. Our experiments show that the overall shape information can be captured fairly accurately by a small number of control points. Added advantages are that the approach is fast, highly interactive, and needs only a small number of points to be digitized intra-operatively.