Purpose: To accurately segment organs from 3D CT image volumes using a 2D, multi-channel SegNet model built on a deep Convolutional Neural Network (CNN) encoder-decoder architecture.

Method: We trained a SegNet model on the extended cardiac-torso (XCAT) dataset, which was previously constructed from Chest-Abdomen-Pelvis (CAP) Computed Tomography (CT) studies of 50 Duke patients. Each study consists of one low-resolution (5-mm section thickness) 3D CT image volume and its corresponding manually labeled 3D volume. To improve modeling in this small-sample-size regime, we applied median frequency class balancing to weight the SegNet loss function, data normalization to adjust for the intensity range of CT volumes, data transformation to harmonize voxel resolution, CT section extrapolation to virtually increase the number of transverse sections available as inputs to the 2D multi-channel model, and data augmentation to simulate mildly rotated volumes. To assess model performance, we calculated Dice coefficients on a held-out test set and qualitatively evaluated segmentations of high-resolution CTs. Further, we incorporated 50 patients' high-resolution CTs with manually labeled kidney segmentation masks to quantitatively evaluate the performance of our XCAT-trained segmentation model. The entire study was conducted on raw, identifiable data within the Duke Protected Analytics Computing Environment (PACE).

Result: We achieved median Dice coefficients above 0.8 for most organs and structures on XCAT test instances and observed good performance on additional images without manual segmentation labels, as qualitatively evaluated by Duke Radiology experts. Moreover, we achieved a median Dice coefficient of 0.89 for kidneys on high-resolution CTs.

Conclusion: 2D, multi-channel models such as SegNet are effective for organ segmentation of 3D CT image volumes, achieving high segmentation accuracy.
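Both median frequency class balancing and the Dice coefficient are standard computations; the following minimal NumPy sketch shows one common way to implement them, assuming integer-labeled volumes with 0 as background. All names here are illustrative placeholders, not the authors' code.

# Sketch of median frequency class balancing and the Dice coefficient,
# assuming arrays of integer class IDs. Illustrative only.
import numpy as np

def median_frequency_weights(labels, n_classes):
    # Per-class loss weight = median class frequency / class frequency,
    # so rare organs are up-weighted relative to abundant background.
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    present = counts > 0
    freqs = np.zeros(n_classes)
    freqs[present] = counts[present] / counts.sum()
    weights = np.zeros(n_classes)  # absent classes keep weight 0
    weights[present] = np.median(freqs[present]) / freqs[present]
    return weights

def dice_coefficient(pred, target, class_id):
    # Dice overlap for one class: 2|P intersect T| / (|P| + |T|).
    p = (pred == class_id)
    t = (target == class_id)
    denom = p.sum() + t.sum()
    # Convention: score 1.0 when the class is absent from both volumes.
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

In a setup like this, the returned weights would multiply the per-class cross-entropy terms of the SegNet loss, so that small structures contribute as much gradient signal as the large background class.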
Many researchers in the field of machine learning have addressed the problem of detecting anomalies within Computed Tomography (CT) scans. Training these machine learning algorithms requires a dataset of CT scans with identified anomalies (labels), usually in specific organs. This is a problem because it requires experts to review thousands of images to create the labels. We aim to decrease the human burden of labeling CT scans by developing a model that identifies anomalies within plain-text reports, which could then be used to create labels for models based on the CT scans themselves. This study includes more than 4,800 CT reports from the Duke Health System, in which we aim to identify organ-specific abnormalities. We propose an iterative active learning approach that consists of building a machine learning model to classify CT reports by abnormalities in different organs and then improving it by sequentially adding actively selected reports. At each iteration, clinical experts review the report that provides the model with the highest expected information gain; this is done in real time through a web interface. The newly labeled report is then used to improve the model's performance. We evaluated our method for abnormalities in the kidneys and lungs. Starting from a model trained on 99 reports, the model achieves an Area Under the Curve (AUC) of 0.93 on the test set after 130 actively labeled reports are added from an unlabeled pool of 4,000. This suggests that a set of labeled CT scans can be obtained with significantly reduced human work by combining machine learning techniques with clinical experts' knowledge.
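The core query loop might look like the sketch below, which uses scikit-learn-style text classification and predictive entropy as a common proxy for expected information gain. The classifier choice, the TF-IDF features, and the ask_expert oracle (standing in for the web interface where clinicians label reports) are all assumptions for illustration, not the authors' implementation.

# Sketch of the iterative active-learning loop over report text.
# ask_expert is a hypothetical oracle standing in for the web UI.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(labeled_texts, labels, pool_texts, ask_expert,
                         n_queries=130):
    texts, y, pool = list(labeled_texts), list(labels), list(pool_texts)

    # Fix the vocabulary over every available report up front.
    vectorizer = TfidfVectorizer(max_features=5000)
    vectorizer.fit(texts + pool)

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        clf.fit(vectorizer.transform(texts), y)
        probs = clf.predict_proba(vectorizer.transform(pool))
        # Entropy of the predicted label distribution: a common proxy
        # for the expected information gain of labeling each report.
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        i = int(entropy.argmax())
        report = pool.pop(i)          # most informative unlabeled report
        texts.append(report)
        y.append(ask_expert(report))  # clinician labels it in real time
    clf.fit(vectorizer.transform(texts), y)
    return clf, vectorizer

Selecting the maximum-entropy report is the simplest uncertainty-sampling criterion; a model estimating expected information gain directly would rank the pool by expected reduction in predictive uncertainty, but the loop structure is the same.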