Elbow fractures are among the most common fracture types. Diagnosing an elbow fracture typically requires radiographic imaging that is read and analyzed by a radiologist with years of specialized training. Thanks to recent advances in deep learning, a model that classifies and detects different types of bone fractures can be trained in hours and has shown promising results. However, most existing deep learning models are purely data-driven and do not incorporate domain knowledge from human experts. In this work, we propose a novel deep learning method that diagnoses elbow fractures from elbow X-ray images by integrating domain-specific medical knowledge into a curriculum learning framework. In our method, the training data are permuted by sampling without replacement at the beginning of each training epoch. The sampling probability of each training sample is guided by a scoring criterion constructed from clinically established expert knowledge, where the score reflects the diagnostic difficulty of the different elbow fracture subtypes. We also propose an algorithm that updates the sampling probabilities at each epoch and is applicable to other sampling-based curriculum learning frameworks. We design an experiment with 1865 elbow X-ray images for a fracture/normal binary classification task and compare the proposed method to a baseline method and a previous method using multiple metrics. Our results show that the proposed method achieves the highest classification performance, and that the proposed probability update algorithm also boosts the performance of the previous method.
KEYWORDS: Digital breast tomosynthesis, 3D modeling, Tumor growth modeling, Performance modeling, Breast cancer, Data modeling, Breast, Algorithm development, Binary data, 3D image processing
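Below is a minimal sketch of the kind of knowledge-guided curriculum sampling the abstract above describes: each training image receives a difficulty score derived from its fracture subtype, initial sampling probabilities favor easier examples, the training set is permuted each epoch by sampling without replacement, and the probabilities are updated after every epoch. The example scores, the inverse-difficulty mapping, and the geometric decay toward uniform sampling are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def initial_probabilities(difficulty):
    """Map clinician-assigned difficulty scores (higher = harder) to
    initial sampling probabilities that favor easier examples."""
    easiness = difficulty.max() - difficulty + 1e-6
    return easiness / easiness.sum()

def update_probabilities(probs, decay=0.9):
    """Move the sampling distribution toward uniform after each epoch
    (hypothetical geometric-decay rule; the paper's update may differ)."""
    uniform = np.full_like(probs, 1.0 / len(probs))
    new_probs = decay * probs + (1.0 - decay) * uniform
    return new_probs / new_probs.sum()

def epoch_order(probs, rng):
    """Permute the training set by sampling indices without replacement
    according to the current probabilities (easy samples tend to come first)."""
    return rng.choice(len(probs), size=len(probs), replace=False, p=probs)

# Example: six training images with difficulty scores assigned per fracture subtype
rng = np.random.default_rng(0)
difficulty = np.array([1.0, 1.0, 2.0, 3.0, 3.0, 5.0])
probs = initial_probabilities(difficulty)
for epoch in range(3):
    order = epoch_order(probs, rng)
    # ... train one epoch over the samples in `order` ...
    probs = update_probabilities(probs)
```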
Artificial intelligence (AI) algorithms, especially deep learning methods, have proven successful in many medical imaging applications, and computerized breast cancer image analysis can improve diagnostic accuracy. Digital breast tomosynthesis (DBT) is a newer imaging modality with advantages over classical digital mammography (DM). Developing deep learning algorithms tailored to DBT therefore has the potential to improve reading-time efficiency and increase the accuracy of breast cancer diagnosis when used as an additional tool for radiologists. In this work, we aimed to build a 3D deep learning model that distinguishes malignant from benign breasts using DBT images, and we investigated the effects of different loss functions on our models. We implemented and evaluated our method on a large dataset of 546 patients (205 malignant and 341 benign). Our results showed that the choice of loss function influences model performance in our classification task, and that a specific loss function may be selected or customized to optimize a particular performance metric for a concrete application.
KEYWORDS: Digital breast tomosynthesis, 3D modeling, Tumor growth modeling, Breast cancer, Data modeling, 3D image processing, Artificial intelligence, Radiology, Tumors, Performance modeling
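As an illustration of comparing loss functions for this kind of 3D DBT classification task, the sketch below pairs a toy 3D CNN with standard cross-entropy, class-weighted cross-entropy (weights chosen to offset the 205 malignant vs. 341 benign imbalance), and a focal loss. The architecture and the specific losses are assumptions for illustration, not the models or losses used in the paper.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Hypothetical 3D classifier for DBT volumes (not the paper's architecture)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        return self.classifier(self.features(x).flatten(1))

def focal_loss(logits, targets, gamma=2.0):
    """Illustrative focal loss that down-weights easy examples."""
    ce = nn.functional.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)
    return ((1 - pt) ** gamma * ce).mean()

# Class-weighted cross-entropy to offset the 205 malignant vs. 341 benign imbalance
class_weights = torch.tensor([1.0, 341.0 / 205.0])
losses = {
    "cross_entropy": nn.CrossEntropyLoss(),
    "weighted_ce": nn.CrossEntropyLoss(weight=class_weights),
    "focal": focal_loss,
}

model = Small3DCNN()
volumes = torch.randn(2, 1, 16, 64, 64)  # toy DBT sub-volumes
labels = torch.tensor([0, 1])            # 0 = benign, 1 = malignant
logits = model(volumes)
for name, loss_fn in losses.items():
    print(name, loss_fn(logits, labels).item())
```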
Digital mammography (DM) was, until recently, the most common image-guided diagnostic tool for breast cancer detection. However, digital breast tomosynthesis (DBT) imaging, which yields more accurate results than DM, is poised to replace DM in clinical practice. As in many medical image processing applications, artificial intelligence (AI) has shown promise in reducing radiologists' reading time while enhancing cancer diagnostic accuracy. In this paper, we implemented a 3D deep learning network to detect breast cancer malignancy using DBT craniocaudal (CC) view images. We developed a multi-sub-volume approach in which the most representative slice (MRS) of each malignant scan is manually selected by expert radiologists. We specifically compared how different MRS selections by two radiologists affected model performance. The results indicate that our scheme is relatively robust across all three experiments.
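A minimal sketch of how a multi-sub-volume input around the radiologist-selected most representative slice (MRS) might be constructed is shown below. The slab depth, number of slabs, and stride are hypothetical parameters chosen for illustration, not values from the paper.

```python
import numpy as np

def extract_sub_volumes(volume, mrs_index, slab_depth=8, num_slabs=3, stride=4):
    """Cut overlapping slabs of slices centered on the radiologist-selected MRS
    from a DBT CC-view volume of shape (num_slices, height, width).
    Slab depth, number of slabs, and stride are illustrative assumptions."""
    num_slices = volume.shape[0]
    sub_volumes = []
    for k in range(num_slabs):
        center = mrs_index + stride * (k - num_slabs // 2)
        start = int(np.clip(center - slab_depth // 2, 0, num_slices - slab_depth))
        sub_volumes.append(volume[start:start + slab_depth])
    return np.stack(sub_volumes)  # (num_slabs, slab_depth, height, width)

# Example: a toy 64-slice volume with the MRS marked at slice 30
volume = np.random.rand(64, 256, 256).astype(np.float32)
slabs = extract_sub_volumes(volume, mrs_index=30)
print(slabs.shape)  # (3, 8, 256, 256)
```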