This study investigates the effect of radiation dose reduction in a renal perfusion CT protocol on quantitative imaging features for patients of different sizes. Our findings indicate that the impact of dose reduction differs significantly with patient size for standard deviation, entropy, and GLCM joint average at all dose levels evaluated, and for the mean at the lowest dose level evaluated (p < .001). These results suggest that a size-based scanning protocol may be needed to provide quantitative results that are robust with respect to patient size.
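To make the features named above concrete, here is a minimal sketch of how they could be computed for a 2D ROI from a perfusion CT slice, using scikit-image (≥0.19) for the co-occurrence matrix. The quantization to 32 gray levels and the single (distance=1, angle=0) GLCM are illustrative assumptions, not the study's actual feature-extraction protocol.

```python
import numpy as np
from skimage.feature import graycomatrix

def texture_features(roi_hu, levels=32):
    # First-order statistics computed directly on HU values
    mean_hu = roi_hu.mean()
    sd_hu = roi_hu.std()

    # Quantize HU values into discrete gray levels for the GLCM
    lo, hi = roi_hu.min(), roi_hu.max()
    q = np.clip((roi_hu - lo) / (hi - lo + 1e-9) * levels, 0, levels - 1).astype(np.uint8)

    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    p = glcm[:, :, 0, 0]  # normalized co-occurrence probabilities p(i, j)

    # GLCM (joint) entropy: -sum p(i,j) * log2 p(i,j)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # GLCM joint average: sum over i, j of i * p(i, j)
    i_idx = np.arange(levels).reshape(-1, 1)
    joint_average = np.sum(i_idx * p)

    return mean_hu, sd_hu, entropy, joint_average
```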
Purpose: Convolutional neural networks (CNNs) are frequently used for organ segmentation in medical imaging. Many CNNs, however, struggle to gain acceptance in clinical practice because of mis-segmentations that are obvious to radiologists and other experts. We propose using a Cognitive AI framework that applies anatomical knowledge and machine reasoning to check and improve CNN segmentations and avoid obvious mis-segmentations. Methods: We used the open-source SimpleMind framework, which allows post-processing and machine reasoning to be applied to CNN segmentation results. Within this framework, a 3D-UNet CNN was trained on 212 contrast-enhanced kidney CT scans. From an anatomical knowledge base, SimpleMind derived the following Cognitive AI post-processing steps: (1) identification of the abdomen; (2) segmentation of the spine by searching for bone in the posterior region of the abdomen; (3) separation of the CNN output into right and left kidneys using the spine as a reference; (4) refinement of individual kidney segmentations using thresholds on volume, HU, and voxel count, designed to eliminate disconnected false positives; (5) morphological opening to reduce connected false positives bordering the kidney segmentations. On a test set of 53 scans with reference annotations, the Dice Coefficient (DC), Hausdorff Distance (HD), and Average Symmetric Surface Distance (ASSD) were computed and compared for the CNN and post-processed outputs. Results: Post-processing with Cognitive AI reduced the kidney segmentation HD from 46.4±55.8 mm to 30.7±17.6 mm, with the decrease in variance being statistically significant (p = 0.0296). The DC and ASSD metrics were also improved. Conclusions: This initial work demonstrates that incorporating anatomical knowledge using Cognitive AI techniques can improve the segmentation accuracy of CNNs. The CNN provides very good overall kidney segmentation performance, and in cases where segmentation errors occurred, the post-processing was able to improve performance.
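A minimal sketch of what steps (3)-(5) could look like, assuming a boolean (z, y, x) CNN mask and a spine centroid column `spine_x` from step (2). This is not the SimpleMind implementation: the orientation convention, the voxel-count threshold, and the structuring element are illustrative assumptions, and the paper's additional HU and volume criteria in step (4) are omitted here.

```python
import numpy as np
from scipy import ndimage

def postprocess_kidneys(cnn_mask, spine_x, min_voxels=5000):
    refined = np.zeros(cnn_mask.shape, dtype=bool)

    # Step (3): split candidates into the halves on either side of the spine
    for side in ("right", "left"):
        half = np.zeros_like(refined)
        if side == "right":  # assumes patient right lies at lower x indices
            half[..., :spine_x] = cnn_mask[..., :spine_x]
        else:
            half[..., spine_x:] = cnn_mask[..., spine_x:]

        # Step (4): keep only the largest connected component per side,
        # rejecting it if smaller than a plausible kidney voxel count
        labels, n = ndimage.label(half)
        if n == 0:
            continue
        sizes = ndimage.sum(half, labels, index=range(1, n + 1))
        largest = int(np.argmax(sizes)) + 1
        if sizes[largest - 1] >= min_voxels:
            refined |= labels == largest

    # Step (5): morphological opening to shave connected false positives
    # bordering the kidney surfaces
    return ndimage.binary_opening(refined, structure=np.ones((3, 3, 3)))
```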
Kidneys are most easily segmented by convolutional neural networks (CNNs) on contrast-enhanced CT (CECT) images, but segmentation accuracy may be reduced when only non-contrast CT (NCCT) images are available. The purpose of this work was to investigate the improvement in segmentation accuracy when implementing a generative adversarial network (GAN) to create virtual contrast-enhanced (vCECT) images from non-contrast inputs. A 2D cycleGAN model, incorporating an additional idempotent loss function that restricts the GAN from making unnecessary modifications to data already in the translated domain, was trained to generate virtual contrast-enhanced images from 286 paired non-contrast and contrast-enhanced inputs. A 3D CNN trained on contrast-enhanced images was applied to segment the kidneys in a test set of 20 paired non-contrast and contrast-enhanced images. The non-contrast images were converted to virtual contrast-enhanced images, and the kidneys in both image conditions were then segmented by the CNN. Segmentation results were compared to analyst annotations on non-contrast images, both visually and by Dice Coefficient (DC). Segmentations on virtual contrast-enhanced images were more complete, with fewer extraneous detections, than those on non-contrast images in 16/20 cases. Mean (±SD) DC was 0.88 (±0.80), 0.90 (±0.03), and 0.95 (±0.05) for non-contrast, virtual contrast-enhanced, and real contrast-enhanced images, respectively. Virtual contrast enhancement visually improved segmentation quality; poorly performing cases improved, reducing the overall variation in DC, and the minimum DC increased from 0.65 to 0.85. This work provides preliminary results demonstrating the potential effectiveness of using a GAN for virtual contrast enhancement to improve CNN-based kidney segmentation on non-contrast images.
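The idempotent loss can be read as: a generator mapping non-contrast (NC) to contrast-enhanced (CE) images should leave a real CE image essentially unchanged. Below is a minimal PyTorch-style sketch of this term alongside the standard cycle-consistency loss; the generator names, loss weights, and use of L1 distance are assumptions, and the adversarial terms of the full cycleGAN objective are omitted for brevity.

```python
import torch.nn.functional as F

def cyclegan_losses(G_nc2ce, G_ce2nc, real_nc, real_ce,
                    lambda_cycle=10.0, lambda_idem=5.0):
    fake_ce = G_nc2ce(real_nc)  # NC -> virtual CE
    fake_nc = G_ce2nc(real_ce)  # CE -> virtual NC

    # Cycle-consistency: translating there and back recovers the input
    cycle_loss = (F.l1_loss(G_ce2nc(fake_ce), real_nc) +
                  F.l1_loss(G_nc2ce(fake_nc), real_ce))

    # Idempotent loss: inputs already in the translated domain should
    # pass through (approximately) unmodified
    idem_loss = (F.l1_loss(G_nc2ce(real_ce), real_ce) +
                 F.l1_loss(G_ce2nc(real_nc), real_nc))

    return lambda_cycle * cycle_loss + lambda_idem * idem_loss
```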
We propose an AI-human interactive pipeline to accelerate medical image annotation of large data sets. The pipeline continuously iterates over three steps. First, an AI system provides initial automated annotations to image analysts. Second, the analysts edit the annotations. Third, the AI system is upgraded with the analysts' feedback, enabling more efficient annotation. To develop this pipeline, we propose an AI system and an upgraded workflow focused on reducing annotation time while maintaining accuracy. We demonstrated the ability of the feedback loop to accelerate the task of prostate MRI segmentation: over the initial iterations on small batch sizes, annotation time was reduced substantially.
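A schematic sketch of the three-step loop is below. The callables `predict`, `expert_edit`, and `finetune` are hypothetical placeholders for the CNN inference, the analyst editing session, and the model upgrade, respectively; they are not the authors' implementation.

```python
from typing import Callable, List, Tuple

def annotation_loop(
    predict: Callable,      # AI: image -> proposed annotation
    expert_edit: Callable,  # analyst: (image, proposal) -> corrected annotation
    finetune: Callable,     # training: (predict, labeled data) -> upgraded predict
    unlabeled_batches: List[list],
) -> Tuple[Callable, list]:
    annotated: list = []
    for batch in unlabeled_batches:
        # Step 1: the AI system provides initial automated annotations
        proposals = [predict(image) for image in batch]
        # Step 2: analysts edit the annotations (faster than contouring
        # each case from scratch)
        corrected = [expert_edit(img, p) for img, p in zip(batch, proposals)]
        annotated.extend(zip(batch, corrected))
        # Step 3: the AI is upgraded with the analysts' feedback, so the
        # next batch should require fewer edits
        predict = finetune(predict, annotated)
    return predict, annotated
```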
KEYWORDS: Performance modeling, Data modeling, Principal component analysis, Prostate, Magnetic resonance imaging, Statistical modeling, Biopsy, Statistical analysis, Data centers, Cancer
Purpose: Prostate cancer (PCa) is the most common solid organ cancer and the second leading cause of cancer death in men. Multiparametric magnetic resonance imaging (mpMRI) enables detection of the most aggressive, clinically significant PCa (csPCa) tumors that require further treatment. A suspicious region of interest (ROI) detected on mpMRI is now assigned a Prostate Imaging-Reporting and Data System (PIRADS) score to standardize interpretation of mpMRI for PCa detection. However, there is significant inter-reader variability among radiologists in PIRADS score assignment, and a minimal-input, semi-automated artificial intelligence (AI) system is proposed to harmonize PIRADS scores with mpMRI data.
Approach: The proposed deep learning model (the seed point model) uses a simulated single-click seed point as input to annotate the lesion on mpMRI, in contrast to typical medical AI approaches that require annotation of the complete lesion. The mpMRI data from 617 patients used in this study were prospectively collected at a major tertiary U.S. medical center. The model was trained and validated to classify whether an mpMRI image contained a lesion with a PIRADS score of 4 or greater.
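One common way to feed a single-click seed point to a CNN is as an extra input channel containing a Gaussian heatmap centered on the click, stacked with the mpMRI channels. The abstract does not specify the encoding used by the seed point model, so the heatmap representation and sigma below are assumptions for illustration.

```python
import numpy as np

def add_seed_channel(mpmri: np.ndarray, seed_yx: tuple, sigma: float = 5.0) -> np.ndarray:
    """mpmri: (C, H, W) array of co-registered mpMRI channels."""
    _, h, w = mpmri.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sy, sx = seed_yx
    # Gaussian bump centered on the simulated single-click seed point
    heatmap = np.exp(-((yy - sy) ** 2 + (xx - sx) ** 2) / (2 * sigma ** 2))
    # Stack as an additional channel: (C + 1, H, W)
    return np.concatenate([mpmri, heatmap[None]], axis=0)
```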
Results: The model yielded an average receiver operating characteristic (ROC) area under the curve (ROC-AUC) of 0.704 over 10-fold cross-validation, which is significantly higher than the previously published benchmark.
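For reference, averaging ROC-AUC over 10 folds could be computed as in the sketch below, using scikit-learn; `train_model` and the arrays `X`, `y` (binary labels for PIRADS >= 4) are hypothetical placeholders, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cv_auc(X, y, train_model, n_splits=10, seed=0):
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        model = train_model(X[tr], y[tr])
        scores = model.predict_proba(X[te])[:, 1]  # P(PIRADS >= 4)
        aucs.append(roc_auc_score(y[te], scores))
    return float(np.mean(aucs))  # average ROC-AUC across the folds
```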
Conclusions: The proposed model could aid in PIRADS scoring of mpMRI, providing second reads to promote quality as well as offering expertise in environments that lack a radiologist with training in prostate mpMRI interpretation. The model could help identify tumors with a higher PIRADS for better clinical management and treatment of PCa patients at an early stage.