Deep learning technology has been utilized in computed tomography, but it requires centralized datasets to train the neural networks. To address this, federated learning has been proposed, which allows different local medical institutions to collaborate through a privacy-preserving decentralized strategy. However, large amounts of unpaired data are excluded from local model training, and directly aggregating the parameters degrades the performance of the updated global model. To deal with these issues, we present a semi-supervised and semi-centralized federated learning method to promote the performance of the learned global model. Specifically, each local model is trained locally with an unsupervised strategy for a fixed number of rounds. After that, the parameters of the local models are shared and aggregated on the server to update the global model. Then, the global model is further trained with a standard dataset, which contains well-paired training samples, to stabilize and standardize the global model. Finally, the global model is distributed to the local models for the next training step. For brevity, we refer to the presented federated learning method as "3SC-FL". Experiments demonstrate that the presented 3SC-FL outperforms the compared methods, both qualitatively and quantitatively.
Federated learning shows great potential in the computed tomography imaging field by utilizing a decentralized, privacy-preserving strategy for local medical institutions. However, directly aggregating the parameters of each local model degrades the generalization performance of the updated global model. In addition, well-paired centralized training datasets can be collected in the real world, yet they are not exploited by current federated learning methods. To address these issues, we present a semi-centralized federated learning method to promote the generalization performance of the learned global model. Specifically, each local model is first trained locally for a fixed number of rounds; then, the parameters are aggregated on the server to initialize the global model. After that, the global model is further trained on the server with a standard dataset, which contains well-paired training samples, to stabilize and standardize the global model. For brevity, we refer to the presented semi-centralized federated learning method as "SC-FL". Experimental results on different local datasets demonstrate that the presented SC-FL outperforms the competing methods.
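The server-side aggregation step these federated methods share can be sketched as a weighted parameter average followed by an optional fine-tuning pass on the server's standard dataset. The sketch below is a minimal illustration under assumed conventions (models represented as dicts of numpy arrays, uniform client weights); it is not the authors' implementation.

```python
import numpy as np

def aggregate(local_states, weights=None):
    """Federated averaging: element-wise weighted mean of the local
    models' parameters (each state is a dict of numpy arrays)."""
    n = len(local_states)
    if weights is None:
        weights = [1.0 / n] * n  # uniform client weighting (assumption)
    return {k: sum(w * s[k] for w, s in zip(weights, local_states))
            for k in local_states[0]}

def semi_centralized_round(local_states, finetune_fn):
    """One round: aggregate local parameters, then refine the global
    model on the server's paired standard dataset via `finetune_fn`
    (a hypothetical callable standing in for server-side training)."""
    global_state = aggregate(local_states)
    return finetune_fn(global_state)
```

In a real system `finetune_fn` would run a few epochs of supervised training on the server's well-paired samples before the global state is broadcast back to the clients.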
Dual-energy CT (DECT) has the ability to characterize different materials and quantify the densities or proportions of different contrast agents. However, basis-image decomposition is an ill-posed problem, and the traditional model-based and image-domain direct-inversion methods always suffer from serious degradation of the signal-to-noise ratio (SNR). To address this issue, we propose a new strategy that combines model-based and learning-based methods to suppress noise in the material images after direct inversion, and design a semi-supervised framework, the Adaptive Semi-supervised Learning Material Estimation Network (ASLME-Net), to balance detail preservation and noise suppression when only a small amount of paired data is available in the training stage. Specifically, the ASLME-Net contains two sub-networks, i.e., a supervised sub-network and an unsupervised sub-network. The supervised sub-network aims at capturing key features learned from the labeled data, and the unsupervised sub-network adaptively learns the feature distribution transferred from the supervised sub-network via the Kullback-Leibler (KL) divergence. Experiments show that the presented method can suppress noise propagation in the decomposition and yield qualitatively and quantitatively accurate material decomposition results.
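The distribution-transfer term described above can be illustrated with a minimal KL-divergence loss between the two branches' softened outputs. The logit layout and the direction KL(p_sup || p_unsup) are illustrative assumptions, not the published ASLME-Net formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_transfer_loss(sup_logits, unsup_logits, eps=1e-12):
    """KL(p_sup || p_unsup): penalizes the unsupervised branch for
    deviating from the supervised branch's predictive distribution,
    letting unlabeled data inherit structure learned from labels."""
    p = softmax(sup_logits)
    q = softmax(unsup_logits)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

During training this term would be minimized alongside the supervised loss, with only the unsupervised branch receiving its gradient.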
Inspired by deep learning techniques, data-driven methods have been developed to promote image quality and material decomposition accuracy in dual-energy computed tomography (DECT) imaging. Most of these data-driven DECT imaging methods exploit the image priors within large amounts of training data to learn the mapping from noisy DECT images to the desired high-quality material images in a supervised manner. However, these supervised DECT imaging methods estimate the multiple material images directly from the network without incorporating the material decomposition mechanism, and they fail to exploit unlabeled noisy DECT images to further improve performance. In this work, to address these issues, we propose a novel Weakly-supervised Multi-material Decomposition Network with a self-attention mechanism (WMD-Net) to estimate multiple material images accurately and effectively from a combination of labeled and unlabeled DECT images. Specifically, in the proposed WMD-Net, the labeled DECT images are used to estimate the three material images in a supervised sub-network, and the unlabeled DECT images are used to construct an unsupervised sub-network that benefits from the material decomposition mechanism. Finally, the two sub-networks are jointly integrated into the proposed WMD-Net. The proposed WMD-Net method is validated and evaluated on synthesized clinical data, and the experimental results demonstrate that it estimates more accurate material images than the other competing methods in terms of noise-induced artifact reduction and structure detail preservation.
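In the image domain, the material decomposition mechanism that such networks build on reduces to solving a small linear system per pixel: the dual-energy measurements are modeled as a mixing matrix applied to the basis-material coefficients. The 2x2 matrix below uses made-up values for illustration only; real coefficients come from calibrated mass-attenuation data.

```python
import numpy as np

# Hypothetical mixing matrix: rows = low/high-kVp measurements,
# columns = two basis materials (values are illustrative, not calibrated).
A = np.array([[0.28, 0.45],
              [0.19, 0.21]])

def direct_inversion(dect_pixel, A):
    """Image-domain direct inversion: solve A @ m = y for the material
    coefficients m at one pixel. Noise in y is amplified by A^{-1},
    which is why a learned denoising stage typically follows."""
    return np.linalg.solve(A, dect_pixel)
```

The poor conditioning of typical mixing matrices is what makes this step noise-amplifying, motivating the learned post-processing described above.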
Supervised deep learning (DL) methods have been widely developed to remove noise-induced artifacts and promote image quality in low-dose CT imaging owing to their strong mapping capabilities. These supervised DL methods are usually trained on a large number of low- and normal-dose sinogram/image pairs, and their reconstruction performance heavily depends on the quality of the reference training images. In CT imaging, it is challenging to collect many high-quality reference training images in practice due to the risk of high radiation dose to patients. Moreover, medium- or even low-quality reference training images (i.e., low-quality labeled data with some noise-induced artifacts) might be collected for supervised DL network training, which would degrade the reconstruction performance of the network. To address this issue, in this work we propose an effective noise-conscious explicit weighting network (NEW-Net) for low-dose CT imaging, wherein CT images with noise-induced artifacts are treated as labeled data in the network training. Specifically, the proposed NEW-Net consists of two sub-networks, i.e., a noise estimation sub-network and a noise-conscious weighting sub-network. The noise estimation sub-network produces a noise map from the low-quality training data to estimate the noise-conscious weights, which determine the contribution of the label data: small weights accompany low-quality label data with severe noise-induced artifacts, and large weights accompany high-quality label data with few noise-induced artifacts. The estimated weights are then used to condition the training data for the noise-conscious weighting sub-network, eliminating the effects of low-quality label data and promoting the reconstruction performance and stability of the proposed NEW-Net method.
The Mayo clinic data are utilized to validate and evaluate the reconstruction performance of the proposed NEW-Net method. The experimental results demonstrate that the proposed NEW-Net method outperforms the other competing methods in the case of low-quality training data, in terms of noise-induced artifact reduction and structure detail preservation, both qualitatively and quantitatively.
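The noise-conscious weighting idea can be sketched as follows: estimate a per-label noise level, map it to a weight that shrinks as noise grows, and use the weights in the training loss. The exponential weight mapping and the helper names are illustrative assumptions, not the published NEW-Net design.

```python
import numpy as np

def noise_conscious_weights(noise_maps, beta=1.0):
    """Map per-label noise estimates to normalized training weights:
    noisier labels receive smaller weights (exponential form is an
    illustrative choice; `beta` controls how sharply noise is punished)."""
    sigma = np.array([m.std() for m in noise_maps])
    w = np.exp(-beta * sigma)
    return w / w.sum()

def weighted_l2_loss(preds, labels, weights):
    """Per-sample MSE combined with the noise-conscious weights, so
    low-quality labels contribute less to the training objective."""
    per_sample = np.array([np.mean((p - l) ** 2)
                           for p, l in zip(preds, labels)])
    return float(np.dot(weights, per_sample))
```

In the full method the noise maps would come from a learned estimation sub-network rather than a simple standard-deviation statistic.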
Computed tomography (CT) is a widely used medical imaging modality capable of displaying the fine details of the human body. In clinics, CT images need to highlight different desired details or structures with different filter kernels and different display windows. To achieve this goal, in this work we propose a deep learning based "All-in-One" (DAIO) combined visualization strategy for high-performance disease screening. Specifically, the presented DAIO method takes both kernel conversion and display window mapping into consideration in the deep learning network. First, the sharp-kernel and smooth-kernel reconstructed images and the lung mask are collected for network training. Then, the structures are adaptively transferred to the appropriate kernel style through local kernel conversion to give the image higher diagnostic value. Finally, the dynamic range of the image is compressed to a limited number of gray levels by a mapping operator based on traditional window settings. Moreover, to promote structure detail enhancement, we introduce a weighted mean filtering loss function. In the experiments, nine of the ten full-dose patient cases from the Mayo clinic dataset are utilized to train the presented DAIO method, and the remaining patient case is used for testing. Results show that the proposed DAIO method can merge multiple kernels and multiple window settings into a single image for disease screening.
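The display-window mapping that DAIO learns to replace is standard CT practice: values inside a chosen center/width interval of Hounsfield units are linearly compressed to 8-bit gray levels, and everything outside is clipped. The function below implements this conventional mapping (the name `apply_window` is ours).

```python
import numpy as np

def apply_window(hu, center, width):
    """Conventional CT display windowing: linearly map HU values in
    [center - width/2, center + width/2] to [0, 255], clipping outside."""
    lo = center - width / 2.0
    img = (hu - lo) / width          # normalize window interval to [0, 1]
    return np.clip(img, 0.0, 1.0) * 255.0
```

For example, a typical lung window uses roughly center = -600 HU and width = 1500 HU, so soft tissue saturates to white while aerated lung spans the gray scale.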