This study demonstrates that a variant of a Siamese neural network architecture is more effective at classifying high-dimensional radiomic features (extracted from T2 MRI images) than traditional models such as a support vector machine or discriminant analysis. Ninety-nine female patients between the ages of 20 and 48 were imaged with T2 MRI. Using biopsy pathology, the patients were separated into two groups: those with breast cancer (N=55) and those with GLM (N=44). Lesions were segmented by a trained radiologist, and the resulting ROIs were used for radiomic feature extraction. The radiomic features comprised 536 published features from Aerts et al. along with 20 recurrence quantification analysis features. A Student's t-test was used to select features that differed significantly between the two patient groups, and these features were then used to train a Siamese neural network. A test sample was assigned the label of the class whose training samples had the highest percentile similarity to it. On the two highest-dimensional feature sets, the Siamese network produced AUCs of 0.853 and 0.894, respectively, compared to the best non-Siamese model, discriminant analysis, which produced AUCs of 0.823 and 0.836 on the same feature sets. However, on the lower-dimensional recurrence features and the top 20 most significant features from Aerts et al., the Siamese network performed on par with or worse than the competing models. The proposed Siamese neural network architecture can outperform competing models on tabular data in high-dimensional, low-sample-size settings.
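The similarity-based labeling rule described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a plain Euclidean similarity stands in for the learned Siamese embedding, and a nearest-member rule stands in for the percentile-similarity criterion; the `assign_label` function and toy arrays are hypothetical.

```python
import numpy as np

def assign_label(test_vec, train_feats, train_labels):
    """Similarity-based labeling (sketch): the test sample takes the label
    of the training class containing its most similar member. Similarity
    here is negative Euclidean distance in feature space; the paper's
    Siamese network would instead supply a learned embedding and its own
    percentile-similarity criterion."""
    dists = np.linalg.norm(train_feats - test_vec, axis=1)
    return int(train_labels[np.argmin(dists)])

# Toy example with two well-separated classes (hypothetical data).
train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
labels = np.array([0, 0, 1, 1])
print(assign_label(np.array([0.2, 0.1]), train, labels))  # -> 0
print(assign_label(np.array([4.8, 5.2]), train, labels))  # -> 1
```

In the paper's setting the embedding is trained on pairs so that same-class samples map close together, after which this nearest-similarity decision rule is applied in the embedded space.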
Breast cancer in young women is commonly aggressive, in part because the proportion of high-grade, triple-negative (TN) tumors is high. Biopsies and surgical specimens have inherent limitations: they sample only part of the tumor tissue and may miss tumor heterogeneity. In clinical practice, MRI is used for the diagnosis of breast cancer. MRI-based radiomics is a developing approach that may provide not only diagnostic value for breast cancer but also predictive or prognostic associations between the images and biological characteristics. In this work, we used radiomics methods to analyze MR images of breast cancer in 53 young women and correlated the radiomics data with molecular subtypes. The results indicated a significant difference between TN and non-TN breast cancer in young women in radiomic features based on T2-weighted MR images. This may be helpful for identifying the TN type and guiding therapeutic strategies.
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland shape is usually globular with a smoothly curved surface that can be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between the prostate surface points to model the prostate shape variation with a statistical point distribution model. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned prostate shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training and 14 images for testing the algorithm performance. We compared the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC), respectively. The average measured differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm demonstrated fast, accurate, and robust performance for 3D prostate segmentation. The accuracy of the algorithm is within the inter-expert variability observed in manual segmentation and comparable to the best performance results reported in the literature.
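The Dice similarity coefficient (DSC) reported above is a standard overlap metric between a predicted and a reference segmentation mask. A minimal sketch, assuming binary NumPy masks (the `dice_coefficient` function and the toy arrays are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks: 3 voxels each, 2 shared -> DSC = 4/6 ≈ 0.667
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # -> 0.667
```

For 3D segmentations the same formula applies voxel-wise; the boundary-distance metrics (MAD, Hausdorff) complement it by measuring surface disagreement rather than volume overlap.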
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, using prostate MRI and the corresponding ground truths as inputs. The learned CNN model can then be used to infer a pixel-wise segmentation. Experiments were performed on three data sets containing prostate MRI of 140 patients. The proposed prostate segmentation CNN (PSNet) obtained a mean Dice similarity coefficient of 85.0 ± 3.8% compared to the manually labeled ground truth. Experimental results show that the proposed model can yield satisfactory segmentation of the prostate on MRI.