Molecular ultrasound imaging visualizes the expression of specific proteins on the surface of blood vessels using conjugated microbubbles (MBs) that bind to the targeted proteins, which makes MBs well suited for imaging proteins expressed on blood vessels. However, how to optimally apply MBs in an ultrasound imaging system to detect and quantify the targeted protein expression needs further investigation. To address this issue, the objective of this study is to investigate the feasibility of developing and applying a new quantitative imaging marker to quantify the expression of protein markers on the surface of cancer cells. To obtain a numeric value proportional to the amount of MBs bound to the target protein, a standard quantification method applies a destructive pulse that bursts most of the bubbles in the region of interest; the difference between the signal intensity before and after destruction measures the differential targeted enhancement (dTE). In addition, a dynamic kinetic model is applied to fit the time-intensity curves, and a structural similarity model with three metrics is used to detect the differences between images. Study results show that the elevated dTE signals in images acquired using targeted (MBTar) and isotype control (MBIso) microbubbles are significantly different (p<0.05). Quantitative image features are also successfully computed from the kinetic model and the structural similarity model, which provides the potential to identify new quantitative image markers that more accurately differentiate targeted microbubble status.
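The dTE computation described above reduces to a simple intensity difference over the region of interest. The sketch below illustrates the idea with numpy on synthetic data; the array names, intensity levels, and ROI size are hypothetical, not taken from the study.

```python
import numpy as np

def differential_targeted_enhancement(pre, post):
    """Differential targeted enhancement (dTE): mean signal intensity in the
    ROI before the destructive pulse minus the mean intensity after it.
    `pre` and `post` are 2-D arrays of contrast intensities over the same
    region of interest (hypothetical inputs for illustration)."""
    return float(np.mean(pre) - np.mean(post))

# Toy example: bound targeted MBs raise the pre-destruction signal, and the
# destructive pulse removes most of that signal.
rng = np.random.default_rng(0)
roi_pre = 12.0 + rng.normal(0, 0.5, (64, 64))   # bound + freely circulating MBs
roi_post = 3.0 + rng.normal(0, 0.5, (64, 64))   # residual signal after burst
dte = differential_targeted_enhancement(roi_pre, roi_post)
```

A higher dTE for MBTar than for MBIso images would indicate specific binding, since the isotype control contributes only freely circulating bubbles.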
Since the sensitivity of mammography in detecting subtle cancers is limited, we propose to develop a new computer-aided detection (CAD) scheme that generates a quantitative image marker to predict the risk of harboring mammography-occult cancer detectable by breast MRI. The study is based on the hypothesis that overall breast density and bilateral asymmetry of density between the left and right breasts are associated with higher short-term risk of developing breast cancer. Thus, a new CAD scheme is developed to process images and analyze bilateral asymmetry of mammographic density and tissue structure. From the computed image features, a machine learning model is trained to generate an image marker, or likelihood score, to predict the risk of harboring mammography-occult tumors. In this presentation, we report two cases in which screening mammograms were rated BI-RADS 2 by radiologists. Neither woman qualified for breast MRI screening because of the low risk scores predicted by existing epidemiologic risk models. The CAD scheme analyzed the mammograms of these two cases and produced high risk scores of harboring mammography-occult tumors. After breast MRI screening was applied, two mammography-occult tumors were detected. Biopsy results confirmed one invasive ductal carcinoma (grade 3) and one high-risk solitary breast papilloma, which required surgical removal. This study demonstrates the potential advantage of applying a CAD-generated image marker to detect abnormalities or predict cancer risk that radiologists miss or overlook, which can increase the efficacy of using MRI as an adjunct to mammography to detect more subtle cancers.
Applying computer-aided detection (CAD)-generated quantitative image markers has demonstrated significant advantages over subjective qualitative assessment in supporting translational clinical research. However, although many advanced CAD schemes have been developed, achieving high scientific rigor with "black-box" CAD schemes trained on small datasets remains a major challenge due to the heterogeneity of medical images. To support and facilitate the research efforts of physician researchers using quantitative imaging markers, we investigated and tested an interactive approach by developing CAD schemes with interactive functions and visual-aid tools. Unlike fully automated CAD schemes, our interactive CAD (ICAD) tools allow users to visually inspect image segmentation results and provide instructions to correct segmentation errors if needed. Based on the users' instructions, the CAD scheme automatically corrects segmentation errors, recomputes image features, and generates machine learning-based prediction scores. To date, we have installed three ICAD tools in clinical image reading facilities, which help oncologists acquire image markers to predict progression-free survival of ovarian cancer patients undergoing angiogenesis chemotherapies, and help neurologists compute image markers and prediction scores to assess the prognosis of patients diagnosed with aneurysmal subarachnoid hemorrhage or acute ischemic stroke. Using these ICAD tools, clinical researchers have conducted several translational clinical studies analyzing diverse study cohorts, resulting in seven peer-reviewed papers published in clinical journals in the last three years. Additionally, feedback from physician researchers indicates increased confidence in using the new quantitative image markers and helps medical imaging researchers further improve and optimize the interactive CAD tools.
Radiomics and deep transfer learning have attracted broad research interest in developing and optimizing CAD schemes for medical images. However, these two technologies are typically applied in different studies using different image datasets, and their advantages and potential limitations in CAD applications have not been well investigated. This study aims to compare and assess these two technologies in classifying breast lesions. A retrospective dataset including 2,778 digital mammograms is assembled, in which 1,452 images depict malignant lesions and 1,326 images depict benign lesions. Two CAD schemes are developed to classify breast lesions. First, one scheme is applied to segment lesions and compute radiomics features, while the other applies a pre-trained residual network architecture (ResNet50) as a transfer learning model to extract automated features. Next, the same principal component analysis (PCA) algorithm is applied to both the initially computed radiomics features and the automated features to create optimal feature vectors by eliminating redundant features. Then, several support vector machine (SVM)-based classifiers are built using the optimized radiomics or automated features. Each SVM model is trained and tested using a 10-fold cross-validation method, and classification performance is evaluated using the area under the ROC curve (AUC). The two SVMs trained using radiomics and automated features yield AUCs of 0.77±0.02 and 0.85±0.02, respectively. In addition, an SVM trained using the fused radiomics and automated features does not yield a significantly higher AUC. This study indicates that (1) using deep transfer learning yields higher classification performance, and (2) radiomics and automated features contain highly correlated information for lesion classification.
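The PCA step above eliminates redundant features before classifier training. A minimal numpy sketch of that redundancy-elimination step is shown below on synthetic feature matrices; the variance threshold and the toy data are illustrative assumptions, since the abstract does not specify the PCA settings used.

```python
import numpy as np

def pca_reduce(features, var_keep=0.95):
    """Reduce a (cases x features) matrix by PCA, keeping the smallest number
    of components that explain `var_keep` of the total variance. The 0.95
    threshold is an assumed value for illustration."""
    X = features - features.mean(axis=0)          # center each feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var_ratio = (S ** 2) / np.sum(S ** 2)         # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(var_ratio), var_keep) + 1)
    return X @ Vt[:k].T                           # project onto top-k components

# Toy run: 100 cases with 50 highly correlated features (driven by 5 latent
# factors) collapse to only a few principal components.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 5))                # 5 true underlying factors
X = latent @ rng.normal(size=(5, 50)) + 0.01 * rng.normal(size=(100, 50))
Z = pca_reduce(X)
```

Applying one shared projection like this to both the radiomics and the automated feature pools keeps the two resulting feature vectors directly comparable, as in the study design.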
Computer-aided detection and/or diagnosis (CAD) schemes typically include machine learning classifiers trained using handcrafted features. The objective of this study is to investigate the feasibility of identifying and applying a new quantitative imaging marker to predict the survival of gastric cancer patients. A retrospective dataset including CT images of 403 patients is assembled, of whom 162 survived more than 5 years. A CAD scheme is applied to segment gastric tumors depicted in multiple CT image slices. After gray-level normalization of each segmented tumor region to reduce image value fluctuation, we use the publicly available Pyradiomics software package to compute 103 radiomics features. To identify an optimal approach to predicting patient survival, we investigate two logistic regression model (LRM)-generated imaging markers: the first fuses image features computed from one CT slice, and the second fuses the weighted-average image features computed from multiple CT slices. The two LRMs are trained and tested using a leave-one-case-out cross-validation method. Using the LRM-generated prediction scores, receiver operating characteristic (ROC) curves are computed, and the area under the ROC curve (AUC) is used as an index to evaluate performance in predicting patient survival. Study results show that the case prediction-based AUC values are 0.70 and 0.72 for the two LRM-generated image markers fused with image features computed from a single CT slice and multiple CT slices, respectively. This study demonstrates that (1) radiomics features computed from CT images carry valuable discriminatory information to predict the survival of gastric cancer patients, and (2) fusion of quasi-3D image features yields higher prediction accuracy than using simple 2D image features.
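The quasi-3D marker above fuses per-slice feature vectors by a weighted average across CT slices. The sketch below illustrates one plausible weighting, by segmented tumor area per slice; this weight choice is an illustrative assumption, since the abstract does not state which weights were used.

```python
import numpy as np

def fuse_slice_features(slice_features, tumor_areas):
    """Fuse per-slice radiomics features into one quasi-3D feature vector by
    averaging across CT slices, weighting each slice by its segmented tumor
    area (an assumed weighting for illustration)."""
    w = np.asarray(tumor_areas, dtype=float)
    w /= w.sum()                            # normalize weights to sum to 1
    return w @ np.asarray(slice_features)   # (slices,) @ (slices, features)

# Toy case: 4 slices, 3 features; larger slices dominate the fused vector.
feats = np.array([[1.0, 10.0, 5.0],
                  [2.0, 20.0, 5.0],
                  [3.0, 30.0, 5.0],
                  [4.0, 40.0, 5.0]])
areas = [10, 20, 30, 40]
fused = fuse_slice_features(feats, areas)
```

The fused vector then feeds the second LRM in place of the single-slice features, which is why it can exploit tumor heterogeneity along the axial direction.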
Computer-aided detection and/or diagnosis schemes typically include machine learning classifiers trained using either handcrafted features or deep learning model-generated automated features. The objective of this study is to investigate a new method to effectively select optimal feature vectors from an extremely large automated feature pool, and the feasibility of improving the performance of a machine learning classifier trained using fused handcrafted and automated feature sets. We assembled a retrospective image dataset of 1,535 mammograms in which 740 and 795 images depict malignant and benign lesions, respectively. For each image, a region of interest (ROI) around the center of the lesion is extracted. First, 40 handcrafted features are computed. Two automated feature sets are then extracted from a VGG16 network pretrained on the ImageNet dataset. The first automated feature set is extracted using pseudo-color images created by stacking the original image, a bilateral-filtered image, and a histogram-equalized image; the second is created by stacking the original image in three channels. Two fused feature sets are then created by fusing the handcrafted feature set with each automated feature set, respectively. Five linear support vector machines (SVMs) are then trained using a 10-fold cross-validation method. The classification accuracy and AUC of the SVMs trained using the fused feature sets are significantly better than those using handcrafted or automated features alone (p<0.05). Study results demonstrate that handcrafted and automated features contain complementary information, so that fusing them creates classifiers with improved performance in classifying breast lesions as malignant or benign.
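The pseudo-color construction above turns one grayscale ROI into the 3-channel input a pretrained VGG16 expects. A dependency-free numpy sketch is shown below; note that a 3x3 mean filter stands in for the bilateral filter used in the study, purely to keep the example self-contained, and the ROI data are synthetic.

```python
import numpy as np

def histogram_equalize(img, bins=256):
    """Histogram-equalize a grayscale image in the 0-255 range via its CDF."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    cdf = hist.cumsum().astype(float)
    cdf = 255 * cdf / cdf[-1]                     # rescale CDF to 0-255
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def pseudo_color_stack(roi):
    """Build a 3-channel pseudo-color image from one grayscale ROI: original,
    smoothed, and histogram-equalized channels. A 3x3 mean filter stands in
    for the study's bilateral filter to avoid external dependencies."""
    pad = np.pad(roi, 1, mode='edge')
    smoothed = sum(pad[i:i + roi.shape[0], j:j + roi.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    return np.stack([roi, smoothed, histogram_equalize(roi)], axis=-1)

# Toy ROI: the stacked tensor has shape (H, W, 3), ready for a CNN backbone.
roi = np.random.default_rng(2).integers(0, 256, (32, 32)).astype(float)
stacked = pseudo_color_stack(roi)
```

Feeding filtered and equalized variants in the extra channels, rather than repeating the original image three times, is what gives the first automated feature set additional information to extract.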