KEYWORDS: Digital breast tomosynthesis, CT reconstruction, Breast, Convolutional neural networks, X-rays, X-ray computed tomography, Neural networks, Monte Carlo methods, Mammography
Iodine contrast-enhanced spectral mammography (CEM) combines an iodinated contrast agent, such as one used for a typical CT scan, with mammography imaging. The contrast enhancement improves the ability to visualize some cancers, and so it has been proposed as a cost-effective and robust alternative to magnetic resonance imaging (MRI) for breast cancer imaging, especially in dense breasts. However, one drawback is poor quantification of contrast agent due to the two-dimensional projection in mammogram images. Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional (3D) imaging modality that uses limited-angle tomography. DBT typically exhibits high in-plane resolution but poor out-of-plane resolution. This out-of-plane blur in DBT distorts the reconstructed lesion and can degrade lesion quantification and volume estimation. This work explores whether convolutional neural networks (CNN) can be trained to predict a full-angle CT reconstruction of a lesion from a limited-angle DBT input image of the lesion. Various networks were trained to perform this image restoration using a large number of Monte Carlo simulated lesion volumes-of-interest (VOI) from DBT and breast CT reconstructions. Our preliminary results show that the output images from the trained neural networks yield more accurate values for lesion quantification and volume estimation than those estimated from their DBT counterparts.
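The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of voxel-to-voxel restoration it describes: a small 3D convolutional encoder-decoder (in PyTorch) that maps a limited-angle DBT lesion VOI to an estimate of the matched full-angle breast-CT VOI. The layer sizes, the 32³ VOI shape, and the MSE loss are all illustrative assumptions, not the authors' design.

```python
# Illustrative sketch only: the paper does not publish its architecture or training details.
# A minimal 3D encoder-decoder that restores a full-angle CT VOI from a DBT VOI.
import torch
import torch.nn as nn

class Dbt2CtNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv3d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (batch, 1, depth, height, width) DBT lesion VOI with out-of-plane blur
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = Dbt2CtNet()
    dbt_voi = torch.randn(4, 1, 32, 32, 32)   # stand-in for simulated DBT lesion VOIs
    ct_voi = torch.randn(4, 1, 32, 32, 32)    # stand-in for matched full-angle CT VOIs
    loss = nn.functional.mse_loss(net(dbt_voi), ct_voi)
    loss.backward()                           # one restoration training step (optimizer omitted)
```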
The presence of round cystic and solid mass lesions identified at mammogram screenings accounts for a large number of recalls. These recalls can cause undue patient anxiety and increased healthcare costs. Since cystic masses are nearly always benign, accurate classification of these lesions would allow a significant reduction in recalls. This classification is very difficult using conventional mammogram screening data, but this study explores the possibility of performing the task on dual-energy full field digital mammography (FFDM) data. Since clinical data of this type is not readily available, realistic simulated data with different sources of variation are used. With this data, a deep convolutional neural network (CNN) was trained and evaluated. It achieved an AUC of 0.980 and 42% specificity at the 99% sensitivity level. These promising results should motivate further development of such imaging systems.
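For readers unfamiliar with the reported operating point, the sketch below shows how AUC and specificity at a fixed 99% sensitivity are typically read off an ROC curve. The scores here are random placeholders standing in for the CNN's outputs on the simulated dual-energy FFDM lesions; the label coding and score model are assumptions for illustration.

```python
# Illustrative evaluation sketch: AUC and specificity at the 99% sensitivity level.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)               # assumed coding: 1 = solid mass, 0 = cyst
scores = labels + rng.normal(scale=0.8, size=1000)   # placeholder classifier scores

auc = roc_auc_score(labels, scores)

# roc_curve returns false-positive rate (1 - specificity) and true-positive rate (sensitivity)
fpr, tpr, _ = roc_curve(labels, scores)
spec_at_99_sens = 1.0 - np.min(fpr[tpr >= 0.99])     # best specificity while keeping sensitivity >= 99%

print(f"AUC = {auc:.3f}, specificity at 99% sensitivity = {spec_at_99_sens:.1%}")
```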