Dual-energy CT (DECT) has become a valuable tool in diagnostic imaging, with a wide range of clinical applications. Contrast DECT (C-DECT) is particularly useful in clinical practice because it generates iodine density maps, which benefit radiation oncologists in the treatment planning process. However, DECT scanners are not widely available in radiation therapy centers. Moreover, side effects of iodine agents preclude DECT iodine contrast imaging for some patients. The purpose of this work is to generate synthetic C-DECT images from non-contrast single-energy CT (SECT) via a deep learning (DL) method. Images from 108 head-and-neck cancer patients were retrospectively investigated in this work. All patients were scanned with both non-contrast SECT and contrast DECT protocols. A conditional denoising diffusion probabilistic model (DDPM) was implemented to generate synthetic high-energy CT (H-CT) and low-energy CT (L-CT) images. The training and application datasets were kept strictly separate: 100 patients' data were used for training and the remaining eight patients' data for application. The performance of the proposed method was evaluated with three quantitative metrics: mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). For H-CT and L-CT respectively, MAE was 19.15±2.23 HU and 23.34±3.45 HU, SSIM was 0.74±0.13 and 0.75±0.19, and PSNR was 28.13±2.83 dB and 28.18±3.55 dB. This approach holds potential significance for radiation therapy facilities lacking DECT scanners, as well as for patients who are not suitable candidates for iodine agent injection.
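A minimal PyTorch sketch of the conditional reverse-diffusion sampling that such a conditional DDPM performs is shown below. The noise schedule, the concatenation-based conditioning, and the `eps_model` interface are illustrative assumptions, not the authors' actual implementation.

```python
# Conditional DDPM sampling sketch: denoise from pure Gaussian noise,
# conditioning every step on the non-contrast SECT image. Assumes a
# trained `eps_model(x_and_condition, t)` that predicts the added noise.
import torch

T = 1000  # number of diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_synthetic_ct(eps_model, sect, shape):
    """Reverse diffusion: `sect` is a (N,1,H,W) SECT batch matching
    `shape`; returns a synthetic H-CT or L-CT batch of that shape."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        # SECT slice concatenated on the channel axis as the condition.
        eps = eps_model(torch.cat([x, sect], dim=1), t_batch)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # posterior sample
    return x
```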
KEYWORDS: Iodine, Diffusion, Computed tomography, Education and training, Tissues, Visual process modeling, Denoising, Deep learning, Data modeling, Systems modeling
Iodine maps can be obtained from contrast-enhanced dual-energy computed tomography (DECT) scans to emphasize iodine contrast agent uptake in cancer patients' tissues, which benefits radiation oncologists in the treatment planning process. However, DECT scanners are not widely available in radiation therapy centers. Furthermore, certain patients, e.g., those with iodine allergies or renal dysfunction, are not suitable for iodine contrast DECT scans. The purpose of this work is to generate synthetic iodine maps from non-contrast single-energy CT (SECT) images via a deep learning (DL) method. Images from 130 head-and-neck patients were retrospectively investigated in this work. All patients were scanned with both non-contrast SECT and contrast DECT protocols. The ground-truth iodine maps were generated from the contrast DECT scans using vendor software. A denoising diffusion probabilistic model (DDPM) was implemented to generate synthetic iodine maps. The training and application datasets were kept strictly separate, containing data from 100 and 8 patients, respectively. A CycleGAN was implemented as a reference method against which the proposed DDPM was assessed. The accuracy of the proposed DDPM was evaluated using three quantitative metrics: mean absolute error (MAE, 19.31±3.38 HU), structural similarity index (SSIM, 0.79±0.13), and peak signal-to-noise ratio (PSNR, 22.25±4.23 dB). Across these metrics, the proposed method outperformed the reference method, which was further corroborated by paired two-tailed t-tests. To the best of our knowledge, this work is the first to demonstrate the capability of providing synthetic iodine maps from SECT via a DDPM.
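A sketch of the evaluation protocol described above, using scikit-image for SSIM/PSNR and SciPy for the paired two-tailed t-test. The array names and placeholder data are illustrative; per-patient metric values would come from the actual synthetic and ground-truth maps.

```python
# Quantitative evaluation sketch: MAE, SSIM, PSNR per patient, then a
# paired two-tailed t-test comparing DDPM against the CycleGAN reference.
import numpy as np
from scipy import stats
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_patient(pred: np.ndarray, gt: np.ndarray):
    """Compute MAE, SSIM, and PSNR between a synthetic map and its
    ground-truth iodine map (2-D or 3-D arrays of identical shape)."""
    data_range = gt.max() - gt.min()
    mae = np.mean(np.abs(pred - gt))
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    return mae, ssim, psnr

# Placeholder per-patient MAE values for the two methods (8 patients).
mae_ddpm = np.random.rand(8)      # placeholder, not study data
mae_cyclegan = np.random.rand(8)  # placeholder, not study data
t_stat, p_value = stats.ttest_rel(mae_ddpm, mae_cyclegan)
```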
A typical radiation therapy course for head-and-neck (HN) cancer patients lasts for more than a month. Anatomical variations often occur over the treatment course due to tumor shrinkage and weight loss, particularly for HN cancer patients. To maintain the accuracy of radiotherapy beam delivery, weekly quality assurance (QA) CT scans are sometimes acquired to monitor patients' anatomical changes and re-plan the treatment if needed. However, re-planning is a labor-intensive and time-consuming process; thus, re-planning decisions are made cautiously. In this study, we aim to develop a deep learning-based method for automated multi-organ segmentation from HN CT images to rapidly evaluate anatomical variations. Our proposed method, named the detecting and boosting network, consists of one pre-trained fully convolutional one-stage object detector (FCOS) and two learnable subnetworks, i.e., a hierarchical block and a mask head. The FCOS extracts informative features from CT and locates the volumes-of-interest (VOIs) of multiple organs. The hierarchical block enhances the feature contrast around organ boundaries and thus improves organ classification. The mask head then segments each organ from the refined feature map within the VOIs. We conducted five-fold cross-validation on 35 patients who had multiple weekly CT scans (over 100 QA CTs) during their radiotherapy. Eleven organs were segmented and compared with manual contours using several segmentation measurements. Mean Dice similarity coefficient (DSC) values of 0.82, 0.82, and 0.81 were achieved along the treatment course for all organs. These results demonstrate the feasibility and efficacy of our proposed method for multi-OAR segmentation from HN CT, which can be used to rapidly evaluate anatomical variations in HN radiation therapy.
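For reference, the Dice similarity coefficient used to compare predicted segmentations with manual contours is straightforward to compute; a minimal sketch follows, assuming boolean NumPy masks of identical shape.

```python
# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """DSC between a predicted and a manual binary organ mask."""
    intersection = np.logical_and(pred_mask, manual_mask).sum()
    total = pred_mask.sum() + manual_mask.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total > 0 else 1.0
```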
This work presents a learning-based method to synthesize dual-energy CT (DECT) images from conventional single-energy CT (SECT). The proposed method uses a residual attention generative adversarial network. Residual blocks with attention gates were used to force the model to focus on the difference between DECT maps and SECT images. To evaluate the accuracy of the method, we retrospectively investigated 20 head-and-neck cancer patients with both DECT and SECT scans available. The high- and low-energy CT images acquired from DECT served as learning targets for the SECT datasets in the training process and were used to evaluate the results of the proposed method with a leave-one-out cross-validation strategy. The synthesized DECT images showed an average mean absolute error of around 30 Hounsfield units (HU) across the whole-body volume. These results strongly indicate the high accuracy of the DECT images synthesized by our machine-learning-based method.
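One plausible form of the residual block with an attention gate mentioned above is sketched here in PyTorch. The specific gate design (a 1x1 convolution producing a sigmoid spatial mask over the residual) and all layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Residual block with an additive attention-gated skip connection.
import torch
import torch.nn as nn

class AttentionResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )
        # Attention gate: a 1x1 conv + sigmoid yields a spatial mask that
        # weights the residual, emphasizing DECT-vs-SECT differences.
        self.gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        residual = self.body(x)
        attn = self.gate(residual)        # (N,1,H,W), broadcasts over C
        return x + attn * residual
```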
Proton radiation therapy achieves a highly conformal distribution of the prescribed dose in the target with outstanding normal tissue sparing, owing to the steep dose gradient at the distal end of the beam. However, uncertainty in day-to-day patient setup can lead to a discrepancy between the delivered dose distribution and the planned dose distribution. Cone-beam CT (CBCT) can be acquired daily before treatment to evaluate such inter-fraction setup errors, but a further evaluation of the resulting dose distribution error is currently not available. In this study, we developed a novel deep learning-based method to predict relative stopping power (RSP) maps from daily CBCT images to allow online dose calculation, as a step towards adaptive proton radiation therapy. Twenty head-and-neck patients with CT and CBCT images were included for training and testing. Our CBCT-based RSP results were evaluated against RSP maps created from CT images as the ground truth. Among all 20 patients, the averaged mean absolute error between CT-based and CBCT-based RSP was 0.04±0.02, the averaged mean error was -0.01±0.03, and the averaged normalized correlation coefficient was 0.97±0.01. The proposed method provides sufficiently accurate RSP map generation from CBCT images, potentially allowing CBCT-guided adaptive treatment planning for proton radiation therapy.
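A short sketch of the three reported metrics, assuming the normalized correlation coefficient is the standard zero-mean cross-correlation; array names are illustrative.

```python
# MAE, ME, and normalized correlation coefficient (NCC) between
# CBCT-derived and CT-derived RSP maps of identical shape.
import numpy as np

def rsp_metrics(rsp_cbct: np.ndarray, rsp_ct: np.ndarray):
    diff = rsp_cbct - rsp_ct
    mae = np.mean(np.abs(diff))   # mean absolute error
    me = np.mean(diff)            # mean (signed) error
    a = rsp_cbct - rsp_cbct.mean()
    b = rsp_ct - rsp_ct.mean()
    ncc = np.sum(a * b) / (np.sqrt(np.sum(a**2)) * np.sqrt(np.sum(b**2)))
    return mae, me, ncc
```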
By exploiting the energy dependence of photoelectric and Compton interactions, dual-energy CT (DECT) can be used to derive a number of parametric maps based on physical properties, such as the relative stopping power map (RSPM). The accuracy of DECT-derived parametric maps depends on image noise levels and the severity of artifacts. Suboptimal image quality may degrade the accuracy of physics-based mapping techniques and affect subsequent processing for clinical applications. In this study, we propose a deep-learning-based method to accurately generate the RSPM from virtual monoenergetic images as an alternative to physics-based dual-energy approaches. For the training target of our deep-learning model, we manually segmented head-and-neck DECT images into brain, bone, fat, soft tissue, lung, and air, and then assigned RSP values to the corresponding tissue types to generate a reference RSPM. We integrated a residual block concept into a cycle-consistent generative adversarial network (CycleGAN) framework to learn the nonlinear mapping between DECT 70 keV/140 keV monoenergetic image pairs and the reference RSPM. We evaluated the proposed method on 18 head-and-neck cancer patients. Mean absolute error (MAE) and mean error (ME) were used to quantify the differences between the generated and reference RSPM. The average MAE between generated and reference RSPM was 3.1±0.4% and the average ME was 1.5±0.5% across all patients. Compared to the physics-based method, the proposed method significantly improved RSPM accuracy and had comparable computational efficiency after training.
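The reference-RSPM construction described above amounts to a per-voxel label-to-value lookup; a minimal sketch follows. The label codes and RSP values are illustrative placeholders, not the values used in the study.

```python
# Build a reference RSPM from a manually segmented tissue-label volume
# by assigning each tissue class a representative RSP value.
import numpy as np

RSP_BY_TISSUE = {   # placeholder values for illustration only
    0: 0.001,  # air
    1: 0.20,   # lung
    2: 0.97,   # fat
    3: 1.04,   # soft tissue
    4: 1.05,   # brain
    5: 1.60,   # bone
}

def labels_to_rspm(label_volume: np.ndarray) -> np.ndarray:
    """Map an integer tissue-label volume to a float32 RSPM."""
    rspm = np.zeros(label_volume.shape, dtype=np.float32)
    for label, rsp in RSP_BY_TISSUE.items():
        rspm[label_volume == label] = rsp
    return rspm
```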