In this study, deep learning was used to grade the severity of glaucoma depicted on color fundus images. We retrospectively collected a dataset of 5,978 fundus images acquired from different subjects, and their glaucoma severities were annotated as none, mild, moderate, or severe by the consensus of two experienced ophthalmologists. These images were preprocessed to generate global and local regions of interest (ROIs), namely the global field-of-view images and the local optic disc region images. These ROIs were separately fed into eight classical convolutional neural networks (CNNs) (i.e., VGG16, VGG19, ResNet, DenseNet, InceptionV3, InceptionResNet, Xception, and NASNetMobile) for classification. Experimental results demonstrated that all of these CNNs except VGG16 and VGG19 achieved average quadratic kappa scores of 80.36% and 78.22% when trained from scratch on the global and local ROIs, respectively, and 85.29% and 82.72% when fine-tuned with ImageNet weights. VGG16 and VGG19 achieved reasonable accuracy when trained from scratch but failed when initialized with ImageNet weights for both global and local ROIs. Among these CNNs, DenseNet achieved the highest classification accuracy (75.50%) with pre-trained weights on global images, compared with 65.50% on local optic disc images.
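As a rough illustration of the training setup described above, the following Python sketch fine-tunes one of the listed backbones (DenseNet121) for the four-class grading task and scores predictions with the quadratic-weighted kappa. The use of Keras/TensorFlow, the input size, the classification head, and the optimizer settings are all assumptions made for illustration; the abstract does not specify them.

    # Minimal sketch, assuming a Keras/TensorFlow implementation; the head
    # architecture and hyperparameters below are illustrative, not the
    # authors' actual configuration.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import DenseNet121
    from sklearn.metrics import cohen_kappa_score

    NUM_CLASSES = 4  # none, mild, moderate, severe

    def build_classifier(input_shape=(224, 224, 3), use_imagenet=True):
        # Backbone initialized with ImageNet weights, or randomly when
        # training from scratch (the two regimes compared in the study).
        weights = "imagenet" if use_imagenet else None
        backbone = DenseNet121(include_top=False, weights=weights,
                               input_shape=input_shape, pooling="avg")
        x = layers.Dropout(0.5)(backbone.output)
        out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
        model = models.Model(backbone.input, out)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_classifier()
    # model.fit(train_ds, validation_data=val_ds, epochs=...)
    # After prediction, severity agreement can be scored with the
    # quadratic-weighted kappa reported in the abstract:
    # cohen_kappa_score(y_true, y_pred, weights="quadratic")

Passing use_imagenet=False gives the train-from-scratch variant, so the same function covers both initialization regimes compared in the experiments.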
Accurate segmentation of the optic disc (OD) depicted on color fundus images plays an important role in the early detection and quantitative diagnosis of retinal diseases, such as glaucoma and optic atrophy. In this study, we proposed a coarse-to-fine deep learning framework based on a classical convolutional neural network (CNN), the U-Net model, for extracting the optic disc from fundus images. The network was trained separately on the fundus images and on their vessel density maps, yielding two coarse segmentation results for the entire image. We combined these results using an overlap strategy to identify a local image patch (the disc candidate region), which was then fed into the U-Net model for fine segmentation. Our experiments demonstrated that the developed framework achieved an average intersection over union (IoU) of 89.1% and a Dice similarity coefficient (DSC) of 93.9% on a total of 2,978 test images from our collected dataset and six public datasets, compared with 87.4% and 92.5% obtained with the U-Net model alone. This suggests that the proposed method provides better segmentation performance and has potential for population-based disease screening.
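The reported metrics and the overlap step lend themselves to a short sketch. The functions below compute IoU and DSC for binary masks; disc_candidate_box is a hypothetical illustration of one way the two coarse segmentations could be combined, since the abstract does not detail the overlap strategy.

    # Minimal sketch in NumPy; disc_candidate_box is an assumption about
    # the overlap strategy, not the authors' documented method.
    import numpy as np

    def iou(pred, truth):
        # Intersection over union of two binary masks.
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return inter / union if union else 1.0

    def dsc(pred, truth):
        # Dice similarity coefficient: 2*|A & B| / (|A| + |B|).
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        total = pred.sum() + truth.sum()
        return 2.0 * inter / total if total else 1.0

    def disc_candidate_box(mask_img, mask_vessel, margin=20):
        # One plausible reading of the overlap strategy: keep pixels where
        # the two coarse segmentations agree, then take a padded bounding
        # box as the disc candidate region for the fine-stage U-Net.
        overlap = np.logical_and(mask_img.astype(bool),
                                 mask_vessel.astype(bool))
        ys, xs = np.nonzero(overlap)
        if ys.size == 0:
            return None  # the two coarse masks do not agree anywhere
        y0, y1 = max(ys.min() - margin, 0), ys.max() + margin
        x0, x1 = max(xs.min() - margin, 0), xs.max() + margin
        return y0, y1, x0, x1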