FunSyn-Net: enhanced residual variational auto-encoder and image-to-image translation network for fundus image synthesis
10 March 2020
Abstract
Medical imaging datasets typically contain few training images and are usually not sufficient for training deep learning networks. We propose an approach based on a deep residual variational auto-encoder and a generative adversarial network that can generate a synthetic retinal fundus image dataset with corresponding blood vessel annotations. In a comparison of structural statistics between real and artificial images, our model performed better than existing methods. The generated blood vessel structures achieved a structural similarity value of 0.74, and the artificial dataset achieved a sensitivity of 0.84 and a specificity of 0.97 on the blood vessel segmentation task. The successful application of generative models to the synthesis of medical data will not only help to mitigate the small-dataset problem but will also address the privacy concerns associated with such medical datasets.
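The abstract describes a two-stage pipeline: a residual variational auto-encoder that produces synthetic blood vessel annotations, followed by a GAN-style image-to-image translation network that renders the corresponding fundus image. The sketch below illustrates that idea in PyTorch; the layer sizes, module names (VesselVAE, VesselToFundus), 64x64 working resolution, and 128-dimensional latent space are illustrative assumptions only, not the architecture reported in the paper.

```python
# Hedged sketch of a two-stage fundus synthesis pipeline (illustrative, not
# the paper's exact architecture):
#   stage 1: a residual VAE samples synthetic vessel maps from the prior
#   stage 2: an image-to-image translation generator maps vessel map -> fundus
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity skip connection

class VesselVAE(nn.Module):
    """Residual VAE over binary vessel maps (1 x 64 x 64), assumed sizes."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(32),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(64), nn.Flatten())
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            ResidualBlock(64),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(32),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())
    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 16, 16)
        return self.dec(h)
    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        return self.decode(self.reparameterize(mu, logvar)), mu, logvar

class VesselToFundus(nn.Module):
    """Simple convolutional generator: vessel map -> RGB fundus image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(64), ResidualBlock(64),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    def forward(self, v):
        return self.net(v)

# Sampling a synthetic (vessel annotation, fundus image) pair from the prior.
vae, translator = VesselVAE(), VesselToFundus()
z = torch.randn(1, 128)                 # latent sample from N(0, I)
vessel_map = vae.decode(z)              # synthetic vessel annotation
fundus = translator(vessel_map)         # corresponding synthetic fundus image
print(vessel_map.shape, fundus.shape)   # (1, 1, 64, 64), (1, 3, 64, 64)
```

Because the vessel map is itself the input to the translation network, every synthetic fundus image comes with a pixel-aligned vessel annotation for free, which is what makes the generated dataset usable for segmentation training.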
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Sourya Sengupta, Akshay Athwale, Tanmay Gulati, John Zelek, and Vasudevan Lakshminarayanan "FunSyn-Net: enhanced residual variational auto-encoder and image-to-image translation network for fundus image synthesis", Proc. SPIE 11313, Medical Imaging 2020: Image Processing, 113132M (10 March 2020); https://doi.org/10.1117/12.2549869
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Blood vessels, Image segmentation, Medical imaging, Retina