Data scarcity and data imbalance are two major challenges in training deep learning models on medical images, such as brain tumor MRI data. Recent advancements in generative artificial intelligence have opened new possibilities for synthetically generating MRI data, including brain tumor MRI scans, offering a potential solution to the data scarcity problem and a way to enhance training data availability. This work focused on adapting 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data with a tumor mask as the condition. The framework comprises two components: a 3D autoencoder for perceptual compression and a conditional 3D Diffusion Probabilistic Model (DPM) for generating high-quality and diverse multi-contrast brain tumor MRI samples guided by a conditional tumor mask. Unlike existing works that generate either 2D multi-contrast or 3D single-contrast MRI samples, our models generate 3D multi-contrast MRI samples. We also integrated a conditioning module within the U-Net backbone of the DPM to capture the class-dependent data distribution imposed by the provided tumor mask, so that brain tumor MRI samples are generated for a specific tumor mask. We trained our models using two brain tumor datasets: The Cancer Genome Atlas (TCGA) public dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models were able to generate high-quality 3D multi-contrast brain tumor MRI samples with the tumor location aligned with the input condition mask. The quality of the generated images was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models involving brain tumor MRI data.
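As a rough illustration of the mask-conditioned generation stage, the sketch below shows a single training step of an epsilon-prediction diffusion objective on autoencoder latents, with the tumor mask supplied by channel concatenation. The `denoiser` stand-in, latent channel count, noise schedule, and conditioning mechanism are illustrative assumptions, not the authors' implementation; the timestep embedding and the actual conditional 3D U-Net are omitted for brevity.

```python
# Minimal sketch (not the authors' code) of a mask-conditioned DPM training step,
# assuming latents from a pretrained 3D autoencoder and a placeholder `denoiser`.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                   # diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Placeholder denoiser: in the paper this is a conditional 3D U-Net backbone.
# Timestep embedding is omitted here to keep the sketch short.
denoiser = nn.Conv3d(4 + 1, 4, kernel_size=3, padding=1)  # 4 latent channels + 1 mask channel

def diffusion_loss(z0, mask):
    """z0: clean latent volume (B, 4, D, H, W); mask: tumor mask (B, 1, D, H, W)."""
    t = torch.randint(0, T, (z0.shape[0],))
    a = alphas_bar[t].view(-1, 1, 1, 1, 1)
    noise = torch.randn_like(z0)
    zt = a.sqrt() * z0 + (1.0 - a).sqrt() * noise      # forward (noising) process
    pred = denoiser(torch.cat([zt, mask], dim=1))      # condition by channel concatenation
    return F.mse_loss(pred, noise)                      # epsilon-prediction objective

# Example: one training step on random latents and a random mask.
z0 = torch.randn(2, 4, 16, 16, 16)
mask = (torch.rand(2, 1, 16, 16, 16) > 0.9).float()
loss = diffusion_loss(z0, mask)
loss.backward()
```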
The quality of brain MRI volumes is often compromised by motion artifacts arising from intricate respiratory patterns and involuntary head movements, manifesting as blurring and ghosting that markedly degrade image quality. In this study, we introduce an innovative approach employing a 3D deep learning framework to restore brain MR volumes afflicted by motion artifacts. The framework integrates a densely connected 3D U-net architecture augmented by generative adversarial network (GAN)-informed training and a novel volumetric reconstruction loss function tailored to the 3D GAN to enhance the quality of the volumes. Our methodology is substantiated through comprehensive experimentation involving a diverse set of motion-artifact-affected MR volumes. After motion correction, the generated high-quality MR volumes have volumetric signatures comparable to those of motion-free MR volumes. This underscores the significant potential of harnessing this 3D deep learning system to aid in the rectification of motion artifacts in brain MR volumes, highlighting a promising avenue for advanced clinical applications.
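For illustration, a minimal sketch of a GAN-informed restoration objective is given below. The exact volumetric reconstruction loss is not detailed in this summary, so a voxel-wise L1 term is used as a stand-in; `generator`, `discriminator`, and the weighting `lambda_rec` are toy placeholders, not the paper's architecture or settings.

```python
# Hedged sketch of a GAN-informed training objective for motion correction.
import torch
import torch.nn as nn

generator = nn.Conv3d(1, 1, 3, padding=1)      # placeholder for the dense 3D U-net
discriminator = nn.Sequential(nn.Conv3d(1, 1, 3, padding=1), nn.AdaptiveAvgPool3d(1))
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_rec = 100.0                              # weight of the reconstruction term (assumed)

def generator_loss(corrupted, clean):
    restored = generator(corrupted)
    # Adversarial term: push the discriminator to label restored volumes as real.
    adv = bce(discriminator(restored).flatten(1), torch.ones(corrupted.shape[0], 1))
    # Volumetric reconstruction term (voxel-wise L1 used as a stand-in here).
    rec = l1(restored, clean)
    return adv + lambda_rec * rec

corrupted = torch.randn(2, 1, 32, 32, 32)
clean = torch.randn(2, 1, 32, 32, 32)
generator_loss(corrupted, clean).backward()
```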
Magnetic resonance imaging (MRI) has potential benefits in understanding fetal and placental complications in pregnancy. An accurate segmentation of the uterine cavity and placenta can help facilitate fast and automated analyses of placenta accreta spectrum and other pregnancy complications. In this study, we trained a deep neural network for fully automatic segmentation of the uterine cavity and placenta from MR images of pregnant women with and without placental abnormalities. The two datasets consisted of axial MRI data from 241 pregnant women, 101 of whom also had sagittal MRI data. Our trained model was able to perform fully automatic 3D segmentation of MR image volumes and achieved an average Dice similarity coefficient (DSC) of 92% for the uterine cavity and 82% for the placenta on the sagittal dataset, and an average DSC of 87% for the uterine cavity and 82% for the placenta on the axial dataset. Our automatic segmentation method is the first step in designing an analytic tool to assess the risk of pregnant women with placenta accreta spectrum.
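For reference, the Dice similarity coefficient reported above can be computed from binary segmentation masks as in this small sketch (toy masks, not study data).

```python
# Minimal sketch of the Dice similarity coefficient (DSC) used to evaluate segmentations.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

# Example with toy 3D masks.
a = np.zeros((8, 8, 8)); a[2:6, 2:6, 2:6] = 1
b = np.zeros((8, 8, 8)); b[3:6, 2:6, 2:6] = 1
print(f"DSC = {dice(a, b):.2f}")
```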
Magnetic resonance imaging (MRI) has gained popularity in the field of prenatal imaging due to its ability to provide high-quality images of soft tissue. In this paper, we presented a novel method for extracting different textural and morphological features of the placenta from MRI volumes using topographical mapping. We proposed polar and planar topographical mapping methods to produce common placental features from a unique point of observation. The features extracted from the images included the entire placenta surface, as well as thickness, intensity, and entropy maps displayed in a convenient two-dimensional format. The topography-based images may be useful for clinical placental assessments as well as computer-assisted diagnosis and prediction of potential pregnancy complications.
In severe cases, placenta accreta spectrum (PAS) requires emergency hysterectomy, endangering the life of both mother and fetus. Early prediction may reduce complications and aid in management decisions in these high-risk pregnancies. In this work, we developed a novel convolutional network architecture to combine MRI volumes, radiomic features, and custom feature maps to predict PAS severe enough to result in hysterectomy after fetal delivery in pregnant women. We trained, optimized, and evaluated the networks using data from 241 patients, in groups of 157, 24, and 60 for training, validation, and testing, respectively. We found that the network using all three paths produced the best performance, with an AUC of 87.8, an accuracy of 83.3%, a sensitivity of 85.0, and a specificity of 82.5. This deep learning algorithm, deployed in clinical settings, may identify women at risk before birth, resulting in improved patient outcomes.
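The sketch below illustrates a three-path fusion architecture in the spirit of the one described: a 3D CNN path for MRI volumes, a 2D CNN path for custom feature maps, and an MLP path for radiomic features, fused before a hysterectomy classifier. All layer sizes, channel counts, and the fusion strategy are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of a three-path fusion network (toy sizes, not the paper's model).
import torch
import torch.nn as nn

class ThreePathNet(nn.Module):
    def __init__(self, n_radiomics=100):
        super().__init__()
        self.vol_path = nn.Sequential(               # 3D MRI volume path
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())    # -> 8 features
        self.map_path = nn.Sequential(               # custom 2D feature-map path
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())    # -> 8 features
        self.rad_path = nn.Sequential(nn.Linear(n_radiomics, 16), nn.ReLU())
        self.classifier = nn.Linear(8 + 8 + 16, 1)    # hysterectomy logit

    def forward(self, volume, feature_maps, radiomics):
        fused = torch.cat([self.vol_path(volume),
                           self.map_path(feature_maps),
                           self.rad_path(radiomics)], dim=1)
        return self.classifier(fused)

net = ThreePathNet()
logit = net(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 3, 64, 64), torch.randn(2, 100))
```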
Magnetic resonance imaging (MRI) is useful for the detection of abnormalities affecting maternal and fetal health. In this study, we used a fully convolutional neural network for simultaneous segmentation of the uterine cavity and placenta on MR images. We trained the network with MR images of 181 patients, with 157 for training and 24 for validation. The segmentation performance of the algorithm was evaluated using MR images of 60 additional patients that were not involved in training. The average Dice similarity coefficients achieved for the uterine cavity and placenta were 92% and 80%, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of less than 1.1% compared to manual estimations. Automated segmentation, when incorporated into clinical use, has the potential to quantify, standardize, and improve placental assessment, resulting in improved outcomes for mothers and fetuses.
In women with placenta accreta spectrum (PAS), patient management may involve cesarean hysterectomy at delivery. Magnetic resonance imaging (MRI) has been used for further evaluation of PAS and surgical planning. This work tackles two prediction problems: predicting the presence of PAS and predicting hysterectomy using MR images of pregnant patients. First, we extracted approximately 2,500 radiomic features from MR images with two regions of interest: the placenta and the uterus. In addition to analyzing the two regions of interest, we dilated the placenta and uterus masks by 5, 10, 15, and 20 mm to gain insights from the myometrium, where the uterus and placenta overlap in the case of PAS. The study cohort includes 241 pregnant women. Of these women, 89 underwent hysterectomy while 152 did not; 141 had suspected PAS and 100 did not. We obtained an accuracy of 0.88 for predicting hysterectomy and an accuracy of 0.92 for classifying suspected PAS. Once further validated, the radiomic analysis tool can be useful for aiding clinicians in decision making on the care of pregnant women.
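A minimal sketch of the mask-dilation step is shown below: it converts a dilation distance in millimeters into a number of voxels using the image spacing (the spacing values here are assumed) and applies morphological dilation so that radiomic features also sample the myometrial interface.

```python
# Sketch of growing the placenta/uterus masks by 5-20 mm before radiomic extraction.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def dilate_mask(mask, dilation_mm, spacing_mm):
    """Dilate a binary 3D mask by roughly `dilation_mm` (isotropic approximation
    based on the smallest voxel dimension)."""
    n_voxels = int(round(dilation_mm / min(spacing_mm)))
    structure = generate_binary_structure(3, 1)
    return binary_dilation(mask, structure, iterations=n_voxels)

mask = np.zeros((20, 20, 20), dtype=bool); mask[8:12, 8:12, 8:12] = True
for mm in (5, 10, 15, 20):
    dilated = dilate_mask(mask, mm, spacing_mm=(3.0, 1.0, 1.0))   # spacing assumed
    print(mm, dilated.sum())
```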
Purpose: Deep learning has shown promise for predicting the molecular profiles of gliomas using MR images. Prior to clinical implementation, ensuring robustness to real-world problems, such as patient motion, is crucial. The purpose of this study is to perform a preliminary evaluation of the effects of simulated motion artifact on glioma marker classifier performance and determine whether motion correction can restore classification accuracies.
Approach: T2w images and molecular information were retrieved from the TCIA and TCGA databases. Simulated motion was added in the k-space domain along the phase-encoding direction. Classifier performance for IDH mutation, 1p/19q co-deletion, and MGMT methylation was assessed over the range of 0% to 100% corrupted k-space lines. Rudimentary motion correction networks were trained on the motion-corrupted images. The performance of the three glioma marker classifiers was then evaluated on the motion-corrected images.
Results: Glioma marker classifier performance decreased markedly with increasing motion corruption. Applying motion correction effectively restored classification accuracy for even the most motion-corrupted images. Motion correction of uncorrupted images exceeded the original performance of the network.
Conclusions: Robust motion correction can facilitate highly accurate deep learning MRI-based molecular marker classification, rivaling invasive tissue-based characterization methods. Motion correction may be able to increase classification accuracy even in the absence of a visible artifact, representing a new strategy for boosting classifier performance.
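The sketch below illustrates one simple way to corrupt a chosen fraction of k-space lines along the phase-encoding direction by mixing in lines from a shifted copy of the image. The shift model and its magnitude are assumptions for illustration, not the study's exact corruption procedure.

```python
# Hedged sketch of simulated motion corruption in k-space (2D T2w slice).
import numpy as np

def corrupt_kspace(image, corrupted_fraction, shift_pixels=3, phase_axis=0):
    """Replace a fraction of k-space lines with lines from a translated image."""
    k_clean = np.fft.fftshift(np.fft.fft2(image))
    moved = np.roll(image, shift_pixels, axis=phase_axis)      # simulated head motion
    k_moved = np.fft.fftshift(np.fft.fft2(moved))
    n_lines = image.shape[phase_axis]
    corrupted_lines = np.random.choice(
        n_lines, size=int(corrupted_fraction * n_lines), replace=False)
    k_mixed = k_clean.copy()
    k_mixed[corrupted_lines, :] = k_moved[corrupted_lines, :]  # phase-encode axis = 0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_mixed)))

slice_ = np.random.rand(128, 128)
corrupted = corrupt_kspace(slice_, corrupted_fraction=0.4)
```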
KEYWORDS: Image segmentation, Magnetic resonance imaging, Uterus, 3D image processing, 3D modeling, Data modeling, Solids, Image processing algorithms and systems, Fetus, Convolutional neural networks
Purpose: Magnetic resonance imaging has recently been used to examine abnormalities of the placenta during pregnancy. Segmentation of the placenta and uterine cavity allows quantitative measures and further analyses of the organs. The objective of this study is to develop a segmentation method with minimal user interaction.
Approach: We developed a fully convolutional neural network (CNN) for simultaneous segmentation of the uterine cavity and placenta in three dimensions (3D), while minimal operator interaction was incorporated for training and testing of the network. The user interaction guided the network to localize the placenta more accurately. In the experiments, we trained two CNNs, one using 70 normal training cases and the other using 129 training cases that included normal cases as well as cases with suspected placenta accreta spectrum (PAS). We evaluated the performance of the segmentation algorithms on two test sets: one with 20 normal cases and the other with 50 images from both normal women and women with suspected PAS.
Results: For the normal test data, the average Dice similarity coefficient (DSC) was 92% and 82% for the uterine cavity and placenta, respectively. For the combination of normal and abnormal cases, the DSC was 88% and 83% for the uterine cavity and placenta, respectively. The 3D segmentation algorithm estimated the volume of the normal and abnormal uterine cavity and placenta with average volume estimation errors of 4% and 9%, respectively.
Conclusions: The deep learning-based segmentation method provides a useful tool for volume estimation and analysis of the placenta and uterine cavity in human placental imaging.
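As a small illustration of the volume estimation referenced above, the sketch below computes organ volume from a binary mask and the voxel spacing and derives a relative volume error against a manual mask; all masks and spacing values are toy assumptions.

```python
# Sketch of volume estimation and relative volume error from binary segmentation masks.
import numpy as np

def volume_ml(mask, spacing_mm):
    """Volume of a binary mask in milliliters (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def volume_error_pct(pred_mask, manual_mask, spacing_mm):
    v_pred = volume_ml(pred_mask, spacing_mm)
    v_manual = volume_ml(manual_mask, spacing_mm)
    return 100.0 * abs(v_pred - v_manual) / v_manual

pred = np.zeros((40, 40, 40), dtype=bool); pred[5:30, 5:30, 5:30] = True
manual = np.zeros((40, 40, 40), dtype=bool); manual[5:31, 5:30, 5:30] = True
print(f"{volume_error_pct(pred, manual, (4.0, 1.5, 1.5)):.1f}% volume error")
```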
A deep-learning (DL) based segmentation tool was applied to a new magnetic resonance imaging dataset of pregnant women with suspected placenta accreta spectrum (PAS). Radiomic features from DL segmentation were compared to those from expert manual segmentation via intraclass correlation coefficients (ICC) to assess reproducibility. An additional imaging marker quantifying the placental location within the uterus (PLU) was included. Features with an ICC < 0.7 were used to build logistic regression models to predict hysterectomy. Of 2059 features, 781 (37.9%) had ICC < 0.7. The AUC was 0.69 (95% CI 0.63-0.74) for manually segmented data and 0.78 (95% CI 0.73-0.83) for DL segmented data.
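The abstract does not specify which ICC formulation was used; the sketch below implements one common choice, ICC(2,1) (two-way random effects, absolute agreement, single measurement), applied to a toy manual-versus-DL comparison of a single feature. The data here are placeholders, not study values.

```python
# Hedged sketch of an ICC(2,1) reproducibility estimate for one radiomic feature.
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `ratings` has shape (n_subjects, n_raters); here the two 'raters' are the
    manual-segmentation and DL-segmentation versions of one radiomic feature."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-subject means
    col_means = ratings.mean(axis=0)          # per-rater means
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_err = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
manual = rng.normal(size=100)                    # one feature measured on 100 patients
dl = manual + rng.normal(scale=0.2, size=100)    # same feature from DL segmentation
print(f"ICC(2,1) = {icc_2_1(np.column_stack([manual, dl])):.2f}")
```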
KEYWORDS: Image segmentation, Magnetic resonance imaging, Uterus, 3D image processing, Convolutional neural networks, Fetus, 3D modeling, Image processing algorithms and systems
Segmentation of the uterine cavity and placenta in fetal magnetic resonance (MR) imaging is useful for the detection of abnormalities that affect maternal and fetal health. In this study, we used a fully convolutional neural network for 3D segmentation of the uterine cavity and placenta, while minimal operator interaction was incorporated for training and testing the network. The user interaction guided the network to localize the placenta more accurately. We trained the network with 70 training and 10 validation MRI cases and evaluated the algorithm's segmentation performance using 20 cases. The average Dice similarity coefficient was 92% and 82% for the uterine cavity and placenta, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of 2% and 9%, respectively. The results demonstrate that deep learning-based segmentation and volume estimation are possible and can potentially be useful for clinical applications of human placental imaging.
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects, using fivefold cross-validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. A mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. A test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
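The subject-separated randomization described above can be enforced with a grouped split, as in this sketch; the slice counts, subject IDs, and labels below are toy stand-ins rather than the study's data pipeline.

```python
# Sketch of subject-level splitting so no subject contributes slices to both sets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_slices = 1000
subject_ids = np.random.randint(0, 260, size=n_slices)   # one subject ID per axial slice
X = np.random.rand(n_slices, 64, 64)                      # toy T2w slices
y = np.random.randint(0, 3, size=n_slices)                # no tumor / IDH mutated / IDH wild type

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=subject_ids))
# No subject appears on both sides of the split, preventing slice-level leakage.
assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])
```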