MR-to-CT image synthesis plays an important role in medical image analysis, with applications including, but not limited to, PET-MR attenuation correction and MR-only radiation therapy planning. Recently, deep learning-based image synthesis techniques have achieved considerable success. However, most current methods require large amounts of paired data from the two modalities, which greatly limits their applicability, since paired data are infeasible to obtain in many situations. Some efforts have been made to relax this constraint, such as cycle-consistent adversarial networks (Cycle-GAN). However, the cycle consistency loss is only an indirect constraint on the structural similarity between the input and synthesized images, and it can lead to inferior synthesis results. To overcome this challenge, a novel correlation coefficient loss is proposed to directly enforce the structural similarity between the MR and synthesized CT images, which not only improves the representation capability of the network but also guarantees structural consistency between the MR and synthesized CT images. In addition, to address the large variance in whole-body mapping, we use a multi-view adversarial learning scheme that combines complementary information from different view directions to produce more robust synthesis results. Experimental results demonstrate that, given unpaired MR and CT images, our method achieves better MR-to-CT synthesis results than state-of-the-art methods, both qualitatively and quantitatively.
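
The abstract does not state the exact form of the correlation coefficient loss; one plausible sketch, assuming the standard Pearson correlation computed over the voxels of the input MR image $x$ and the synthesized CT image $G(x)$, is

$$\mathcal{L}_{cc} = 1 - \frac{\sum_i \left(x_i - \bar{x}\right)\left(G(x)_i - \overline{G(x)}\right)}{\sqrt{\sum_i \left(x_i - \bar{x}\right)^2}\,\sqrt{\sum_i \left(G(x)_i - \overline{G(x)}\right)^2}},$$

where $i$ ranges over voxels and bars denote mean intensities. Driving the correlation toward 1 rewards a consistent intensity relationship between corresponding locations, and hence aligned anatomical structure, without requiring a paired CT ground truth for $x$; whether the paper uses $\rho$, $|\rho|$, or $\rho^2$ inside the loss is an assumption here.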