We present a novel approach for handling the complex information arising from lesion segmentation in CT follow-up studies. The backbone of our approach is the computation of a longitudinal tumor tree. We perform deep-learning-based segmentation of all lesions at each time point of a CT follow-up series. Subsequently, the follow-up images are registered to establish correspondence between the studies and to trace tumors across time points, yielding tree-like relations. The tumor tree encodes the complexity of the individual disease progression. In addition, we present novel descriptive statistics and tools for correlating tumor volumes and RECIST diameters in order to analyze the significance of various markers.
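A minimal sketch of how such a longitudinal tumor tree might be represented, assuming lesions have already been segmented per time point and matched across registered studies; all class and field names are hypothetical illustrations, not the authors' implementation:

```python
# Hypothetical tumor-tree node: one lesion observed at one time point,
# linked to its corresponding lesion(s) at the next time point.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LesionNode:
    study_date: str   # acquisition date of the CT study
    volume_ml: float  # segmented lesion volume in milliliters
    recist_mm: float  # longest axial diameter (RECIST) in millimeters
    children: List["LesionNode"] = field(default_factory=list)

    def add_successor(self, node: "LesionNode") -> None:
        """Link a corresponding lesion from the next time point.

        A node with several children encodes a lesion split; modeling
        merges (several parents) would require a DAG instead of a tree.
        """
        self.children.append(node)

    def volume_trajectory(self) -> List[float]:
        """Volumes along the longest root-to-leaf path, e.g. for a
        per-lesion growth curve."""
        trajectory = [self.volume_ml]
        if self.children:
            trajectory += max(
                (c.volume_trajectory() for c in self.children), key=len
            )
        return trajectory
```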
Inflammatory white matter brain lesions are a key pathological finding in patients suffering from multiple sclerosis (MS). Image-based quantification of different characteristics of these lesions has become an essential biomarker in both the diagnosis and the therapy monitoring of these patients. While it has been shown that the lesion load at a single point in time is of only limited value for explaining clinical symptoms, a more robust estimate of disease activity can be obtained by analyzing the evolution of lesions over time. Here, we propose a system for the automated monitoring of temporal lesion evolution in MS. We describe a model of lesion correspondence, along with a pipeline for its fully automated computation. The pipeline consists of a U-Net-based lesion segmentation, a non-linear image registration between multiple studies, the computation of temporal lesion correspondences, and finally an analysis module for extracting and visualizing quantitative parameters from the model.
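As a sketch of the correspondence step, lesion matches can be derived from the overlap of connected components once the follow-up mask has been warped into the baseline frame by the registration. The function below is an illustrative assumption about how this might look, not the authors' code; the overlap threshold is hypothetical:

```python
# Pair up lesions between two time points via overlap of connected
# components, given binary masks in a common (registered) frame.
import numpy as np
from scipy import ndimage


def lesion_correspondences(mask_baseline: np.ndarray,
                           mask_followup_warped: np.ndarray,
                           min_overlap_voxels: int = 1):
    """Return (baseline_label, followup_label) pairs of overlapping lesions.

    One-to-many pairs indicate lesion splits, many-to-one pairs merges,
    and unmatched labels newly appearing or vanished lesions.
    """
    labels_b, n_b = ndimage.label(mask_baseline > 0)
    labels_f, _ = ndimage.label(mask_followup_warped > 0)

    pairs = []
    for b in range(1, n_b + 1):
        # labels of follow-up lesions overlapping baseline lesion b
        for f in np.unique(labels_f[labels_b == b]):
            if f == 0:
                continue  # background
            overlap = np.logical_and(labels_b == b, labels_f == f).sum()
            if overlap >= min_overlap_voxels:
                pairs.append((b, int(f)))
    return pairs
```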
While deep-learning-based methods for deformable medical image registration have recently shown significant advances in both speed and accuracy, methods for use in radiotherapy are still rarely proposed, owing to challenges such as the low contrast and artifacts of cone-beam CT (CBCT) images and extreme deformations. The aim of image registration in radiotherapy is to align a baseline CT with low-dose CBCT images, which allows contours to be propagated and applied doses to be tracked over time. To this end, we present a novel deep learning method for multi-modal deformable CT-CBCT registration. We train a CNN in a weakly supervised manner, optimizing an edge-based image similarity together with a deformation regularizer that includes a penalty for local changes of topology and foldings. Additionally, we measure the alignment of given segmentations to address the problem of extreme deformations. Our method receives only a CT and a CBCT image as input and uses ground-truth segmentations exclusively during training. Furthermore, it does not depend on the availability of hard-to-obtain ground-truth deformation vector fields. We train and evaluate our method on follow-up image pairs of the pelvis and compare our results to conventional iterative registration algorithms. Our experiments show that the registration accuracy of our deep-learning-based approach is superior to that of iterative registration without additional guidance by segmentations, and nearly as good as that of iterative structure-guided registration, which requires ground-truth segmentations. Furthermore, our deep-learning-based method runs approximately 100 times faster than the iterative methods.
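A minimal PyTorch sketch of such a weakly supervised training objective: the abstract names an edge-based similarity, a regularizer penalizing topology changes and foldings, and a segmentation-alignment term, but not their concrete forms. The code below instantiates these as a normalized-gradient-field distance, a diffusion regularizer plus a negative-Jacobian-determinant penalty, and a soft Dice loss; these choices, as well as all shapes, weights, and helper names, are assumptions for illustration:

```python
import torch
import torch.nn.functional as F


def ngf_loss(fixed, warped, eps=1e-2):
    """Edge-based similarity: misalignment of normalized image gradients.
    Volumes have shape (N, 1, D, H, W)."""
    gf = torch.stack(torch.gradient(fixed, dim=(2, 3, 4)), dim=0)
    gw = torch.stack(torch.gradient(warped, dim=(2, 3, 4)), dim=0)
    nf = torch.sqrt((gf ** 2).sum(0) + eps ** 2)
    nw = torch.sqrt((gw ** 2).sum(0) + eps ** 2)
    cos = (gf * gw).sum(0) / (nf * nw)
    return (1.0 - cos ** 2).mean()


def folding_penalty(disp):
    """Penalize local foldings: negative Jacobian determinant of id + u,
    for a displacement field disp of shape (N, 3, D, H, W) in voxel units."""
    rows = []
    for i in range(3):  # spatial derivatives of each displacement component
        rows.append(torch.stack(torch.gradient(disp[:, i], dim=(1, 2, 3)), dim=1))
    J = torch.stack(rows, dim=1)  # (N, 3, 3, D, H, W): J[:, i, j] = du_i/dx_j
    J = J + torch.eye(3, device=disp.device).view(1, 3, 3, 1, 1, 1)
    det = (J[:, 0, 0] * (J[:, 1, 1] * J[:, 2, 2] - J[:, 1, 2] * J[:, 2, 1])
           - J[:, 0, 1] * (J[:, 1, 0] * J[:, 2, 2] - J[:, 1, 2] * J[:, 2, 0])
           + J[:, 0, 2] * (J[:, 1, 0] * J[:, 2, 1] - J[:, 1, 1] * J[:, 2, 0]))
    return F.relu(-det).mean()


def dice_loss(seg_fixed, seg_warped, eps=1e-6):
    """Soft Dice on warped training segmentations (weak supervision only)."""
    inter = (seg_fixed * seg_warped).sum()
    return 1.0 - (2.0 * inter + eps) / (seg_fixed.sum() + seg_warped.sum() + eps)


def training_loss(fixed_ct, warped_cbct, disp, seg_fixed, seg_warped,
                  w_reg=1.0, w_seg=1.0):
    """Weighted sum of the three terms; segmentations are needed at
    training time only, matching the weakly supervised setup."""
    smooth = sum((g ** 2).mean() for g in torch.gradient(disp, dim=(2, 3, 4)))
    return (ngf_loss(fixed_ct, warped_cbct)
            + w_reg * (smooth + folding_penalty(disp))
            + w_seg * dice_loss(seg_fixed, seg_warped))
```

At inference, only the CT and CBCT volumes are fed to the CNN; the Dice term simply drops out because no segmentations are required.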