Tuning the parameters in a reconstruction model is of central importance to iterative CT reconstruction, since they critically affect the resulting image quality. Manual parameter tuning is not only tedious, but becomes impractical when many parameters are involved. In this paper, we develop a novel deep reinforcement learning (DRL) framework to train a parameter-tuning policy network (PTPN) that automatically adjusts parameters in a human-like manner. A quality assessment network (QAN) is trained together with the PTPN to learn how to judge CT image quality, serving as a reward function to guide the reinforcement learning. We demonstrate our idea on an iterative CT reconstruction problem with pixel-wise total-variation regularization. Experimental results demonstrate the effectiveness of both the PTPN and the QAN, in terms of parameter tuning and image-quality evaluation, respectively.
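As a rough illustration of the tuning loop described in this abstract, the sketch below shows one plausible realization in PyTorch: a small policy network observes a patch of the current reconstruction and selects a discrete action that rescales the local TV weight. The architecture, patch size, action set, and update factors are all assumptions made for illustration; the paper's actual PTPN, QAN, and training procedure are not reproduced here.

```python
# Minimal sketch of a parameter-tuning policy (hypothetical architecture;
# the abstract does not specify these details).
import torch
import torch.nn as nn

N_ACTIONS = 3  # assumed action set: decrease, keep, or increase the TV weight

class PTPN(nn.Module):
    """Policy network: maps an image patch to action scores."""
    def __init__(self, patch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * patch * patch, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

ACTION_SCALE = {0: 0.5, 1: 1.0, 2: 2.0}  # illustrative multiplicative updates

def tune_step(ptpn, patch, lam):
    """One tuning step: act greedily and rescale the pixel-wise TV weight."""
    with torch.no_grad():
        a = ptpn(patch).argmax(dim=1).item()
    return lam * ACTION_SCALE[a]

# Usage: start from an initial TV weight and let the policy adjust it.
ptpn = PTPN()
patch = torch.randn(1, 1, 32, 32)  # a reconstructed-image patch (dummy data)
lam = tune_step(ptpn, patch, lam=0.01)
```

During training, the reward driving these actions would come from a learned image-quality score such as the QAN's output, rather than a hand-crafted metric.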
Deformation-driven CBCT reconstruction techniques can generate accurate, high-quality CBCTs from deforming prior CTs using sparse-view cone-beam projections. The solved deformation vector fields (DVFs) also propagate tumor contours from the prior CTs, allowing automatic localization of low-contrast liver tumors on CBCTs. To solve the DVFs, deformation-driven techniques generate digitally reconstructed radiographs (DRRs) from the deformed image, compare them with the acquired cone-beam projections, and use the intensity mismatch as a metric to evaluate and optimize the DVFs. To boost deformation accuracy at low-contrast liver tumor regions, where limited intensity information exists, we incorporated biomechanical modeling into the deformation-driven CBCT reconstruction process. Biomechanical modeling solves the deformation on the basis of material geometric and elastic properties, enabling accurate deformation in a low-contrast context. Moreover, real clinical cone-beam projections contain more scatter and noise than DRRs. These degrading signals are complex and non-linear in nature, and can reduce the accuracy of deformation-driven CBCT reconstruction. Conventional correction methods for these signals, such as linear fitting, lead to over-simplification and sub-optimal results. To address this issue, this study applied deep learning to derive an intensity mapping scheme between cone-beam projections and DRRs, correcting the cone-beam projection intensities prior to CBCT reconstruction. Evaluated on 10 liver imaging sets, the proposed technique achieved accurate liver CBCT reconstruction and localized the tumors to an accuracy of ~1 mm, with an average DICE coefficient above 0.8. By incorporating biomechanical modeling and deep learning, the deformation-driven technique allows accurate liver CBCT reconstruction from sparse-view projections, and accurate deformation of low-contrast areas for automatic tumor localization.
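To make the DVF-evaluation metric concrete, the following self-contained sketch implements the intensity-mismatch data term in a deliberately simplified form: the cone-beam projector is replaced by a toy parallel-beam axis sum, and both the biomechanical model and the learned intensity correction are omitted. The function and variable names are illustrative, not the authors' implementation.

```python
# Simplified sketch of the data-fidelity term driving DVF optimization:
# DRRs of the deformed prior CT are compared against acquired projections.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, dvf):
    """Deform a volume by a dense DVF of shape (3, Z, Y, X) (trilinear)."""
    grid = np.indices(volume.shape).astype(np.float64)
    return map_coordinates(volume, grid + dvf, order=1, mode="nearest")

def drr(volume, axis=0):
    """Toy projector: line integrals as a sum along one axis
    (a real system would use cone-beam geometry)."""
    return volume.sum(axis=axis)

def data_fidelity(prior_ct, dvf, projections):
    """Sum of squared intensity mismatches between DRRs and projections."""
    deformed = warp(prior_ct, dvf)
    return sum(np.sum((drr(deformed, ax) - p) ** 2) for ax, p in projections)

# Usage with dummy data: a zero DVF reproduces the prior-CT projections.
ct = np.random.rand(16, 16, 16)
projs = [(0, drr(ct, 0)), (1, drr(ct, 1))]
print(data_fidelity(ct, np.zeros((3, 16, 16, 16)), projs))  # ~0.0
```

In the actual method, this mismatch would be evaluated against scatter- and noise-corrected projections (via the learned intensity mapping), and the DVF update would be constrained by the biomechanical model.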
Cervical tumor segmentation on 3D 18FDG PET images is a challenging task due to the proximity between the cervix and the bladder. Since the bladder accumulates a high concentration of 18FDG tracer, its intensity is similar to that of the cervical tumor in the PET image. This prevents traditional segmentation methods, which rely on the intensity variation of the image, from achieving high accuracy. We propose a supervised machine learning method that integrates a convolutional neural network (CNN) with prior information about the cervical tumor. In the proposed prior-information-constrained CNN (PIC-CNN) algorithm, we first construct a CNN to weaken the bladder intensity values in the image. Based on the roundness of the cervical tumor and the relative positioning of the bladder and cervix, we then obtain the final segmentation result from the network output with an auto-thresholding method. We evaluate the performance of the proposed PIC-CNN method on PET images from 50 cervical cancer patients in whom the cervix and bladder are abutting. The PIC-CNN method achieves a mean DSC of 0.84, while a transfer-learning method based on fully convolutional networks (FCN) achieves 0.77. Traditional segmentation methods such as automatic thresholding and region growing achieve only 0.59 and 0.52 DSC, respectively. The proposed method provides a more accurate way to segment cervical tumors in 3D PET images.
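The auto-thresholding step can be sketched as below. This is one plausible realization under stated assumptions: the threshold search range is arbitrary, the bounding-box fill ratio is only a crude stand-in for the roundness prior, and the bladder-cervix relative-position constraint is omitted.

```python
# Hedged sketch of post-processing: threshold the network output, then keep
# the connected component that best matches a (crude) roundness criterion.
import numpy as np
from scipy import ndimage

def roundness(comp):
    """Crude roundness proxy: fraction of the bounding box that is filled."""
    idx = np.nonzero(comp)
    box = np.prod([np.ptp(ax) + 1 for ax in idx])
    return comp.sum() / box

def segment_tumor(prob, thresholds=np.linspace(0.3, 0.7, 9)):
    """Search thresholds; return the roundest connected component found."""
    best, best_score = None, -1.0
    for t in thresholds:
        labels, n = ndimage.label(prob > t)
        for i in range(1, n + 1):
            comp = labels == i
            score = roundness(comp)
            if score > best_score:
                best, best_score = comp, score
    return best

# Usage with a dummy probability map containing one bright blob.
prob = np.zeros((32, 32, 32))
prob[12:20, 12:20, 12:20] = 0.9
mask = segment_tumor(prob)
```

In the full pipeline, candidate components would additionally be filtered by their position relative to the suppressed bladder region before the final mask is chosen.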