This paper discusses quantum machine learning, which exploits superposition and entanglement, for disease categorization using OCT images. To the best of our knowledge, this is the first application of a quantum computing element in a neural network model for classifying ophthalmological disease. The model was built and tested with PennyLane (pennylane.ai), an open-source software library based on the concept of quantum differentiable programming. The training circuit was tested on an IBM 5-qubit system, "ibmq_belem", and a 32-qubit simulator, "ibmq_qasm_simulator". A hybrid quantum-classical model was used in which a 2-qubit QNode was converted into a quantum layer (qlayer); the internal operations of the qlayer were angle embedding (AngleEmbedding), entangling layers (BasicEntanglerLayers), and measurements. OCT images of drusen, choroidal neovascularization (CNV), and diabetic macular edema (DME) formed the abnormal/disease class. The model was trained on 414 normal and 504 abnormal labelled OCT scans and validated on 97 normal and 205 abnormal scans. The resulting preliminary 2-class classifier achieved an accuracy of 0.95. This study aims to develop a 4-class classifier with 4 qubits and to explore the potential of quantum computing for disease categorization. A preliminary performance analysis of quantum machine learning, the steps involved, and operational details are discussed.
Diabetic retinopathy (DR) is one of the leading causes of irreversible vision loss. The International Clinical Diabetic Retinopathy scale (ICDRS) provides grading criteria for DR. Deep convolutional neural networks (DCNNs) achieve high performance in DR grading in terms of classification evaluation metrics; however, these metrics alone are not sufficient for clinical evaluation. Explainable artificial intelligence (XAI) methods provide insight into a network's decisions by producing sparse, generic heat maps that highlight the most critical DR features, but they also fail to satisfy clinical criteria because they do not explain the number and types of lesions. Hence, we propose a computational toolbox that provides lesion-based explanations according to the grading-system criteria used to determine severity levels. According to the ICDRS, DR has 10 major lesions and 4 severity levels. Experienced clinicians annotated 143 DR fundus images, and we developed a toolbox containing 9 lesion-specific segmentation networks. The networks detect lesions at high annotation resolution and then compute the DR severity grade according to the ICDRS. The network employed in this study is an optimized version of the Holistically-Nested Edge Detection network (HEDNet). With this model, lesions such as hard exudates (Ex), cotton wool spots (CWS), microaneurysms (MA), intraretinal haemorrhages (IHE) and vitreous preretinal haemorrhages (VPHE) were properly detected, but the predictions for lesions such as venous beading (VB), neovascularization (NV), intraretinal microvascular abnormalities (IRMA) and fibrous proliferation (FP) had low specificity. Consequently, this affects the value of the grade, which is computed from the segmented masks of all contributing lesions.
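The low specificity reported for some lesion classes is a pixel-level quantity computed from the predicted and annotated masks. A minimal NumPy sketch of that computation is shown below; the masks are toy data, not results from the paper:

```python
import numpy as np

def specificity(pred_mask, true_mask):
    """Pixel-wise specificity TN / (TN + FP) for one lesion class."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tn = np.logical_and(~pred, ~true).sum()  # correctly rejected pixels
    fp = np.logical_and(pred, ~true).sum()   # falsely flagged pixels
    return tn / (tn + fp) if (tn + fp) > 0 else 1.0

# Toy 4x4 masks for a single lesion type (illustrative values only).
true_mask = np.zeros((4, 4), dtype=np.uint8)
true_mask[0, 0] = 1                 # one annotated lesion pixel
pred_mask = true_mask.copy()
pred_mask[3, 3] = 1                 # one false-positive pixel
spec = specificity(pred_mask, true_mask)
```

In the toolbox setting, a score like this would be computed per lesion network; classes with many false-positive pixels (e.g. VB, NV, IRMA, FP in the abstract) pull specificity down and propagate error into the final grade.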
Deep learning methods for ophthalmic diagnosis have shown considerable success in tasks like segmentation and classification. However, their widespread application is limited because the models are opaque and vulnerable to making wrong decisions in complicated cases. Explainability methods show the features a system used to make a prediction, while uncertainty awareness is the ability of a system to flag when it is not sure about a decision. This is one of the first studies to combine uncertainty and explanations for informed clinical decision making. We perform uncertainty analysis of a deep learning model for diagnosis of four retinal diseases: age-related macular degeneration (AMD), central serous retinopathy (CSR), diabetic retinopathy (DR), and macular hole (MH), using images from the publicly available OCTID dataset. Monte Carlo (MC) dropout is applied at test time to generate a distribution over parameters, and the resulting predictions approximate the predictive posterior of a Bayesian model. A threshold is computed from this distribution so that uncertain cases can be referred to an ophthalmologist, thus avoiding an erroneous diagnosis. The features learned by the model are visualized using a proven attribution method from a previous study. The effects of uncertainty on model performance and the relationship between uncertainty and explainability are discussed in terms of clinical significance. The uncertainty information, along with the heat maps, makes the system more trustworthy for use in clinical settings.
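The MC-dropout procedure described above can be sketched in a few lines: keep dropout active at test time, run several stochastic forward passes, and use the spread of the predictions as an uncertainty score. The snippet below is a toy NumPy illustration with a random linear "network"; the weights, dropout rate, sample count, and referral threshold are all assumed values, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))  # toy weights: 16 features -> 4 disease classes

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_forward(features, p_drop=0.5):
    """One stochastic forward pass with dropout kept on at test time."""
    mask = rng.random(features.shape) > p_drop
    h = features * mask / (1.0 - p_drop)  # inverted-dropout scaling
    return softmax(h @ W)

features = rng.normal(size=16)
T = 100  # number of MC samples
samples = np.stack([mc_forward(features) for _ in range(T)])
mean_pred = samples.mean(axis=0)         # approximate predictive posterior mean
uncertainty = samples.std(axis=0).max()  # simple per-case uncertainty score

threshold = 0.2  # illustrative; in practice derived from the score distribution
refer_to_ophthalmologist = uncertainty > threshold
```

Cases whose uncertainty exceeds the threshold are referred rather than auto-diagnosed, which is the referral mechanism the abstract describes.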