1. Introduction

Optical coherence tomography (OCT) is a powerful noninvasive imaging technique that has become a standard clinical tool in ophthalmology, providing a direct and accurate visualization of the retina and its layered structure.1 This information helps to detect and monitor a variety of retinal diseases such as age-related macular degeneration, diabetic retinopathy, and retinal detachments. OCT shows particular promise in detecting glaucoma, the second leading cause of blindness worldwide, which is a progressive neuropathy characterized by thinning of the retinal nerve fiber layer and changes in the structural and functional integrity of the optic nerve head (ONH).2 It has recently been demonstrated that the tissue around the ONH is deformed during cardiac pulsations,3 and that such movement differs between healthy subjects and glaucoma patients.4 As the vast majority of OCT-based research projects rely on static images made up of several averaged frames, these findings underline the relevance of comprehensive dynamic studies, which in turn require the development of novel imaging strategies as well as analysis methods.

An essential part of OCT data processing is segmentation, and one of its most challenging problems is the design of a system that works properly in clinical applications, i.e., the development of robust methods capable of dealing with pathological cases, where the structure of the retina can change drastically. Several methods have been proposed to segment the vitreal-retinal boundary on macular OCT images;5–7 however, those models cannot simply be applied to the ONH region owing to significant anatomical differences. Nerve head geometries vary considerably across the patient population, and additional appearance variation is introduced by the angle of the scan through the ONH. This may explain why, to date, very little has been reported on the automatic analysis of 2-D ONH images. Volumetric scanning centered at the ONH is becoming more common; however, the segmentation method employed in these cases is based on a three-dimensional graph search that makes use of consecutive B-scans, and the algorithm is trained on planimetry from color stereo photographs to determine the rim and cup regions, which can then be colocalized on the fundus image and finally translated to the OCT scan.8,9 More recently, a study performed with OCTRIMA, a software package developed for the analysis of Stratus OCT macular images with a computer-aided grading methodology,10–12 has included the measurement of the cup-to-disc ratio as complementary information extracted from the central image of a volumetric scan.13 Notwithstanding such advances, there are currently no commercial OCT devices that allow image sequences of volumetric scans to be acquired and saved at the speed needed for dynamic studies. Active contour algorithms are a good choice for accurate border delimitation, from which the inner limiting membrane (ILM) profile can easily be obtained.14,15 Although it is possible to define a fixed rectangular initial contour that encloses the whole retina, and the main parameters can be optimized for a set of similar images, the number of iterations may vary depending on the size and shape of the ONH, with a computation time on the order of tens of seconds for a single image,16,17 making this kind of method impractical for large image sets.
In order to investigate the pulsatile dynamics of the ONH and surrounding tissue, an image acquisition rate several times higher than the heart rate is necessary. OCT images are inherently noisy, and since in this case only a few consecutive B-scans can be averaged for the final image, the result is a collection of images with poor quality, making most simple edge-detection algorithms unsuitable for accurate retinal identification. Furthermore, retinal deformation is on the order of tens of microns, so inaccurate delineations become a significant source of error in the assessment of the dynamic behavior of the tissue. To overcome such limitations, we have developed a simple method for automatic ILM segmentation in video-rate OCT ONH line-scan images, based on morphological operations. The resulting ILM profiles are then used to measure neuroretinal tissue displacements due to blood-flow pulsatility.

2. Methods

The proposed approach consists of two stages: segmentation and analysis. The first seeks to delineate the peripapillary retina and the ONH accurately in low-signal OCT images, while the second uses the segmented profiles to quantify retinal deformation. All routines described in the following have been coded in Matlab (The MathWorks, Inc.) and are available upon request.

2.1. Data Collection

In order to accurately measure displacements of the ocular fundus during the cardiac cycle in real time, we used an SD-OCT device (Spectralis OCT Plus, Heidelberg Engineering, Germany) whose software has been custom-modified to record image series in real time, allowing the export of raw images in linear scale. OCT line scans of the same region of the ONH were acquired at 20 Hz for 20 s, during which subjects were asked to look at a fixation light. Since the device is equipped with an active eye tracker that halts acquisition to avoid movement artifacts, subjects were allowed to blink if necessary. Each time series has 401 high-resolution images acquired over a 15 deg field of view (approximately 5 mm) that covers the cup, rim, and peripapillary retina, scanned at 45 deg with respect to the fovea-to-disc axis (Fig. 1). Volunteers covering the glaucoma spectrum, i.e., from glaucoma suspects to advanced glaucoma, were recruited at the Maisonneuve-Rosemont Hospital’s Ophthalmology Clinic. Institutional Review Board and Ethics Committee approval was obtained for the study, and all participants gave their informed consent. The research protocol followed the tenets of the Declaration of Helsinki. In order to optimize the parameters of our segmentation pipeline, a set of 25 images randomly selected from the database, including healthy eyes and several stages of glaucoma, with Spectralis quality scores ranging from 10 to 40, was presented to five observers. The images were segmented while varying the values of the main parameters of the algorithm described below; the raters, masked to the parameters and their values, were asked to choose the best segmentation for each unknown parameter, and the values receiving the most votes were kept fixed for all images.

2.2. Preprocessing

As a first step, a 2-D median filter is applied for rough speckle-noise removal; the images are then submitted to a two-step contrast enhancement technique to suppress features that may lead to erroneous boundary detection. First, intensity exponentiation is applied to eliminate floaters near the cup and attachment points in case of vitreopapillary detachments. Afterwards, exponentiation compensation is applied to correct inhomogeneities due to retinal blood vessels18 and to sharpen the edges (Fig. 2). In our particular case, since a median filter and intensity exponentiation have already been applied to the images, setting the compensation exponent to 3 does not produce the increase in speckle noise that could be expected according to Ref. 18, and this value proved to give the best results for edge detection.
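For illustration, a minimal Matlab sketch of this preprocessing stage is given below. The kernel size and the first exponent are placeholders (the rater-optimized values are not reproduced here), and the compensation step follows the general exponentiation-compensation scheme of Ref. 18 rather than our exact implementation.

% Preprocessing sketch (illustrative parameter values only).
I = double(imread('onh_bscan.tif'));        % raw linear-scale B-scan (hypothetical file name)
I = I ./ max(I(:));                         % normalize to [0, 1]
I = medfilt2(I, [5 5]);                     % rough speckle removal (kernel size is a placeholder)
I = I .^ 2;                                 % intensity exponentiation (exponent is a placeholder)

% Exponentiation compensation in the spirit of Ref. 18: each pixel is divided
% by the remaining signal energy below it in the same A-scan (rows = depth).
n = 3;                                      % compensation exponent reported in the text
En = I .^ n;
comp = En ./ (2 * flipud(cumsum(flipud(En), 1)) + eps);
comp = comp ./ max(comp(:));                % rescale before thresholding

The compensated image comp is the input to the segmentation stage described next.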
2.3. Inner Limiting Membrane Segmentation

In order to separate the inner layer of the retina from the vitreous, another median filter is applied. As the image has already been submitted to contrast enhancement, a threshold of 10% of the maximum intensity is enough to discriminate between the retina and the remaining noise near the edges and is used to generate the corresponding binary image. Increasing the threshold value results in the loss of vitreal-retinal boundary information in most cases. The next step is to remove rough edges and bridges by morphological closing with a square structuring element of 3 pixels width [Fig. 3(a)]. Pixel labeling of eight-connected objects is then used to remove the remaining spurious pixels in the vitreous, usually corresponding to highly reflective floaters, by keeping only objects with more than 1000 elements. The ILM is set to the most anterior foreground pixel of each A-scan. To exclude misallocated points at the edges of the cup, the axial distances between adjacent contour points are calculated and searched for jumps equal to or greater than a fixed threshold. When a negative jump is followed by a positive one within a maximum of 10 points in the transversal direction, the corresponding interval is removed and replaced with a spline-interpolating curve. Figure 3(b) shows the final profile.
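The following Matlab sketch condenses these operations. The second median-filter size and the jump threshold are assumptions (those values are not reproduced here); the remaining parameters follow the description above.

% ILM segmentation sketch, operating on the compensated image 'comp'.
S  = medfilt2(comp, [5 5]);                 % second median filter (size is a placeholder)
BW = S > 0.10 * max(S(:));                  % threshold at 10% of the maximum intensity
BW = imclose(BW, strel('square', 3));       % remove rough edges and bridges
BW = bwareaopen(BW, 1001, 8);               % keep 8-connected objects with > 1000 pixels

% ILM = most anterior (smallest depth index) foreground pixel of each A-scan
nCols = size(BW, 2);
ilm = nan(1, nCols);
for x = 1:nCols
    z = find(BW(:, x), 1, 'first');
    if ~isempty(z), ilm(x) = z; end
end

% Remove spikes at the cup edges: a large negative axial jump followed within
% 10 A-scans by a large positive jump is replaced by spline interpolation.
jump = 40;                                  % jump threshold in pixels (assumed value)
d    = diff(ilm);
neg  = find(d <= -jump);
pos  = find(d >=  jump);
bad  = false(1, nCols);
for k = neg
    nxt = pos(pos > k & pos <= k + 10);
    if ~isempty(nxt), bad(k+1:nxt(1)) = true; end
end
ilm(bad) = interp1(find(~bad), ilm(~bad), find(bad), 'spline');

In practice, the filter sizes and the jump threshold would be set to the rater-optimized values described in Sec. 2.1.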
2.4. Deformation Analysis

In addition to segmentation, an analysis algorithm has been developed to measure displacements of the neuroretinal tissue due to pulsatility: it tracks the position of specific retinal regions in time (frame to frame) and quantifies tissue deformation. For alignment, we used the Matlab intensity-based automatic image registration function with a mean-squared error metric and a one-plus-one evolutionary optimizer. The registration is limited to rigid transformations (translation and rotation), taking the first image of the series as reference. Once the segmentation has been performed, the result is a set of 401 ILM profiles per time series. To automatically discard curves where the ILM delineation has failed, all profiles are shifted vertically so that their minima coincide [Fig. 4(a)], and the average and standard deviation (S.D.) of the depth values at each A-scan position are calculated. Outliers are then tagged in two steps. First, we identify outlier pixels in each profile, defined as points located 3 S.D. or more from the average value at that A-scan. Next, the S.D. of the number of outlier pixels per curve is determined, and profiles with more outliers than this S.D. are discarded [Fig. 4(b)]. The number of discarded profiles per time series is usually between 0 and 10 and not higher than 15. The peripapillary retinal deformation can now be determined from the time series by quantifying changes in the distance measured from the prelaminar tissue to a selected region on the retina. Since the major blood vessels cross the optic nerve through the retina on the nasal side, we decided to investigate the temporal side, where vessels are less dense, selecting three regions starting at the Bruch’s membrane opening, as shown in Fig. 5(a). As the cup bottom of all profiles has already been brought to a common reference depth, the distance of interest is simply the depth of the ILM at each A-scan, and deformation is then calculated as the S.D. of such distances averaged within each region [Fig. 5(b)]. It is important to emphasize that although the deformation value obtained per time series can be on the order of the axial resolution of the device, the actual range of measured distances in the time series is between five and ten times larger. As a summary, Fig. 6 shows a flow diagram of the proposed segmentation and analysis algorithms.
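A condensed Matlab sketch of this analysis stage is shown below. The image stack frames, the profile matrix profiles, and the region limits x1 and x2 are hypothetical containers and placeholders; the registration objects correspond to the configuration described above.

% Analysis sketch: 'frames' is an H x W x T image stack and 'profiles' a
% T x W matrix of ILM depths (one segmented profile per registered frame).
optimizer = registration.optimizer.OnePlusOneEvolutionary;
metric    = registration.metric.MeanSquares;
for t = 2:size(frames, 3)                   % rigid registration to the first frame
    frames(:, :, t) = imregister(frames(:, :, t), frames(:, :, 1), ...
                                 'rigid', optimizer, metric);
end

% Align the ILM profiles on their minima (cup bottom at depth 0)
aligned = profiles - min(profiles, [], 2);

% Two-step outlier rejection of failed delineations
mu = mean(aligned, 1);                      % per A-scan mean depth
sd = std(aligned, 0, 1);                    % per A-scan standard deviation
outlierPix = abs(aligned - mu) >= 3 * sd;   % points 3 S.D. or more from the mean
nOut = sum(outlierPix, 2);                  % outlier pixels per profile
aligned = aligned(nOut <= std(nOut), :);    % discard profiles with more outliers
                                            %   than the S.D. of that count

% Deformation in one temporal region [x1, x2]: S.D. over time of the mean
% ILM depth within the region (x1, x2 stand in for the T1/T2/T3 limits).
regionMean  = mean(aligned(:, x1:x2), 2);
deformation = std(regionMean);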
3. Results

3.1. Assessment of the Segmentation Algorithm

To evaluate the performance of the segmentation algorithm, we compared our method with two other established methods, a modified Canny filter19 (MCF) and a hierarchic approach20 (HA), briefly described below. For each method, the parameters were chosen to give the best results.

MCF: the preprocessing steps include median filtering with a square mask of 13 pixels width and intensity normalization. The first derivative of a Gaussian multiplied by a Gaussian is then used to create two convolution masks that estimate the gradients in both directions, and the resulting gradient image is thresholded with a value defined from the maximum and minimum intensities of the image being processed. Next, linear interpolation is used in a local neighborhood to find two pixels that straddle the gradient direction; the central pixel is defined as an edge point if its gradient magnitude is greater than those of the interpolated points. Finally, the ILM is set as the first nonzero pixel per A-scan.

HA: intensity normalization, median filtering, and image decomposition to a lower resolution with a 16-pixel block using the median constitute the preprocessing. An auxiliary matrix containing the intensity differences between adjacent depth positions in each A-scan is then calculated and used to define two binary images. The first contains only the pixels with the highest difference value per A-scan, while a threshold arbitrarily set at 0.03 is applied to the auxiliary matrix to generate the second binary image. These two images are used to identify the borders of the retina, and a set of constraints must be fulfilled to correct erroneous recognition. The next steps consist of halving the decomposition area on each iteration to increase accuracy; the layer positions determined in the previous iteration are connected with linear interpolation and individual point matching. As final steps, the layers are rescaled, a median filter is applied to remove points misallocated in shadowed areas, and interpolation gives the resulting segmented profile.

These three approaches were applied to 25 ONH OCT images randomly chosen from the data set described in Sec. 2.1, with Spectralis quality scores ranging from 11 to 35. In order to validate our algorithm quantitatively, as there is no gold standard for ONH OCT images to compare with, five specialists were asked to delineate the ILM manually for the same set of images. Manual segmentation was performed using a tablet equipped with a stylus. For each image, the average manually segmented ILM was calculated and used to compare the accuracy of every specialist and automated method. The absolute deviation from the mean trace for every A-scan of all images is shown in Fig. 7 as violin plots. Our algorithm shows variability comparable to that of the specialists, highlighting its robust performance on low-signal images. MCF, on the other hand, failed to segment the vitreal-retinal boundary owing to the high speckle content of the images. Although HA was more robust to noise than MCF, its border detection was considerably affected by shadow effects that reduce the visibility of some retinal regions. While MCF and HA give good results for high-quality images, where significant contrast at the vitreal-retinal boundary is observed, their accuracy on low-signal images is not always satisfactory. Figure 8 shows some examples of the segmented results along with the original images. Analysis time, together with segmentation accuracy, is of crucial importance considering that each time series comprises hundreds of images. The average ILM segmentation time per image for MCF, HA, and our algorithm is 9.4 s, 2.4 s, and 0.40 s, respectively, calculated over a set of 100 images analyzed on a PC (3.30 GHz Intel Core i3 processor, 4 GB of RAM).

3.2. Peripapillary Retinal Deformation

Results from 29 healthy and 12 glaucoma eyes with functional trabeculectomy blebs were obtained using the proposed segmentation and analysis algorithms. In this case, two time series of the same eye were recorded per subject, at 45 deg (inferotemporal region) and 135 deg (superotemporal region) with respect to the fovea-to-disc axis. These particular angles were chosen because they correspond to the areas of major insult in glaucoma. All eyes were transformed to right-eye format. Recorded time series were excluded in cases of fixation problems, poor image quality (low Spectralis quality scores for most of the images in the series), or large blood vessels governing the movement of the neuroretinal tissue. The latter criterion is essential, because otherwise we could be measuring the pulsation of the vessel directly instead of the tissue displacement due to choroidal pulsatility; it includes time series where the ONH has a blood vessel located at the base of the cup. For the sake of simplicity, the peripapillary retinal deformation value per eye at each angle corresponds to the average calculated over the three regions defined along the temporal side (T1, T2, and T3) of the time series (see Fig. 5). The healthy group shows higher average deformation in both the inferotemporal and superotemporal regions than the glaucoma bleb group. However, a statistically significant difference was found only at the inferotemporal region, as can be seen in Fig. 9; group differences were analyzed with the Mann–Whitney test. Mean group deformation values are listed in Table 1.

Table 1. Mean peripapillary retinal deformation per group.
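As an illustration of the statistical comparison reported above, the Mann–Whitney test can be run directly in Matlab with ranksum; the input vectors below are hypothetical containers for the per-eye deformation values of each group.

% Mann-Whitney (Wilcoxon rank-sum) comparison of per-eye deformation values.
% 'healthyIT', 'glaucomaIT', 'healthyST', and 'glaucomaST' are hypothetical
% vectors holding the inferotemporal and superotemporal deformation per eye.
pIT = ranksum(healthyIT, glaucomaIT);
pST = ranksum(healthyST, glaucomaST);
fprintf('Inferotemporal: p = %.3f, superotemporal: p = %.3f\n', pIT, pST);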
This preliminary study shows that retinal displacement differs between healthy and treated diseased eyes. A more comprehensive characterization could help us understand how neuroretinal tissue deformation is related to glaucoma progression.

4. Discussion

Elevated intraocular pressure (IOP) remains the primary risk factor for glaucoma. By definition, IOP is the normal force per unit area exerted by the intraocular fluids on the tissues that contain them. The mechanical response is a function of the individual eye’s geometry, anatomy, and tissue mechanical properties, factors that contribute to determining the individual’s susceptibility to IOP. It is therefore natural to consider that biomechanics also plays an important role in glaucomatous neuropathy, and the key challenge is to understand how ocular biomechanics is transduced into tissue damage. The ONH is a region of special biomechanical interest because it is a discontinuity in the corneo-scleral shell, and such discontinuities typically give rise to stress or strain concentrations in mechanical systems.21 Furthermore, recent research based on finite element computational modeling has led to the conclusion that the highest magnitudes of all modes of strain (extension, compression, and shearing) occur within the neural tissue regions and not within the lamina cribrosa,22 as previously believed.

Although the results section presents only a preliminary study intended to show the potential of our algorithm, it reveals that the actual mechanical response of the neuroretinal tissue differs between healthy and treated diseased eyes. Two factors that may explain the finding that glaucomatous eyes with functional trabeculectomy blebs show less deformation are their significantly lower IOP and their thinner retinal nerve fiber layer compared with the normal group. Our results showed a statistically significant difference on the inferotemporal side only, which can be explained in part by the fact that the loss of neuroretinal rim starts specifically in this region, the most affected during the evolution of glaucomatous neuropathy.23 Furthermore, in glaucoma eyes with blebs the ocular pulse amplitude is significantly lower than in healthy eyes;24 thus the driving force for ONH deformation is reduced, which may contribute to the smaller observed deformation. However, a more comprehensive characterization needs to be done before reaching any major conclusion. On the other hand, since the elasticity of the sclera is one of the most important determinants of ONH stress and strain,25 we have recently developed a noninvasive technique to determine overall ocular rigidity using depth-enhanced high-frequency OCT macular imaging.26 This technique will be included in a further study characterizing retinal deformation at different stages of glaucomatous neuropathy, together with a full workup of the eye under study. The approach proposed in this paper has the potential to be used as a new biomechanical descriptor of the eye, helping to elucidate the relationship between the biomechanical properties of the neuroretinal tissue and the insult to the optic nerve, not only in glaucoma but, in principle, in many other eye pathologies.

5. Conclusion

We developed a fully automated algorithm to segment the peripapillary retina and the ONH in video-rate OCT images using morphological operations.
We have demonstrated that our approach is able to segment the ILM accurately even when images show the substantial speckle noise or shadowing typical of high acquisition rates, proving very robust when dealing with low-signal images. The experimental results showed that the algorithm detects the ILM accurately in healthy eyes as well as in several stages of glaucomatous pathology. While the focus of this work was on segmentation of the ONH, the proposed algorithm can be applied directly to macular OCT images without any modification. Further advantages include easy implementation and low computation time. We have also developed an analysis algorithm that uses the segmentation results to investigate the dynamics of the neuroretinal tissue. As a preliminary study, the peripapillary retinal deformation of healthy eyes and of glaucomatous eyes with functional trabeculectomy blebs has been characterized, with the first cohort showing larger deformation in the inferotemporal region. These findings could lead to a new way to identify patients with glaucoma and to characterize their risk of aggressive ONH damage using a standard clinical test. We strongly believe that the powerful combination of video-rate OCT imaging with the proposed segmentation and analysis algorithms is key to further understanding ocular biomechanics and how it is transduced into tissue damage.

Acknowledgments

The authors acknowledge support from the National Council of Science and Technology of Mexico (CONACyT), the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, the FRQS Research in Vision Network, and the Fonds de Recherche en Ophtalmologie de l’Université de Montréal. S.C. holds an FRQS Chercheur Junior 2 award.

References
1. D. Huang et al., “Optical coherence tomography,” Science 254, 1178 (1991). http://dx.doi.org/10.1126/science.1957169
2. H. A. Quigley, “Open-angle glaucoma,” N. Engl. J. Med. 328(15), 1097–1106 (1993). http://dx.doi.org/10.1056/NEJM199304153281507
3. K. Singh et al., “Measurement of ocular fundus pulsation in healthy subjects using a novel Fourier-domain optical coherence tomography,” Invest. Ophthalmol. Vision Sci. 52(12), 8927–8932 (2011). http://dx.doi.org/10.1167/iovs.11-7854
4. K. Singh et al., “Pulsatile movement of the optic nerve head and the peripapillary retina in normal subjects and in glaucoma,” Invest. Ophthalmol. Vision Sci. 53(12), 7819–7824 (2012). http://dx.doi.org/10.1167/iovs.12-9834
5. T. Fabritius et al., “Automated segmentation of the macula by optical coherence tomography,” Opt. Express 17(18), 15659–15669 (2009). http://dx.doi.org/10.1364/OE.17.015659
6. M. A. Mayer et al., “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express 1(5), 1358–1381 (2010). http://dx.doi.org/10.1364/BOE.1.001358
7. S. J. Chiu et al., “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). http://dx.doi.org/10.1364/OE.18.019413
8. M. D. Abramoff et al., “Automated segmentation of the cup and rim from spectral domain OCT of the optic nerve head,” Invest. Ophthalmol. Vision Sci. 50(12), 5778–5784 (2009). http://dx.doi.org/10.1167/iovs.09-3790
9. K. Lee et al., “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imaging 29(1), 159–168 (2010). http://dx.doi.org/10.1109/TMI.2009.2031324
10. D. C. Fernandez, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005). http://dx.doi.org/10.1364/OPEX.13.010200
11. D. C. DeBuc et al., “Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software,” J. Biomed. Opt. 14(6), 064023 (2009). http://dx.doi.org/10.1117/1.3268773
12. D. Cabrera DeBuc et al., “Improving image segmentation performance and quantitative analysis via computer-aided grading methodology for optical coherence tomography retinal image analysis,” J. Biomed. Opt. 15(4), 046015 (2010). http://dx.doi.org/10.1117/1.3470116
13. Y. Wang et al., “Quantitative analysis of the intraretinal layers and optic nerve head using ultra-high resolution optical coherence tomography,” J. Biomed. Opt. 17(6), 066013 (2012). http://dx.doi.org/10.1117/1.JBO.17.6.066013
14. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process. 10(2), 266–277 (2001). http://dx.doi.org/10.1109/83.902291
15. C. Li et al., “Level set evolution without re-initialization: a new variational formulation,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 430–436 (2005).
16. W. Yu et al., “Fast and robust active contours for image segmentation,” in Proc. IEEE Int. Conf. on Image Processing (ICIP) (2010).
17. R. Ksantini, B. Boufama, and S. Memar, “A new efficient active contour model without local initialization for salient object detection,” EURASIP J. Image Video Process. 2013, 40 (2013). http://dx.doi.org/10.1186/1687-5281-2013-40
18. M. J. A. Girard et al., “Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head,” Invest. Ophthalmol. Vision Sci. 52(10), 7738–7748 (2011). http://dx.doi.org/10.1167/iovs.10-6925
19. R. Koprowski and Z. Wrobel, “Identification of layers in a tomographic image of an eye based on the Canny edge detection,” in Information Technologies in Biomedicine, Vol. 47, pp. 232–239, Springer-Verlag, Berlin, Heidelberg (2008).
20. R. Koprowski and Z. Wrobel, “Hierarchic approach in the analysis of tomographic eye image,” in Computer Recognition Systems 3, Vol. 57, pp. 463–470, Springer-Verlag, Berlin, Heidelberg (2009).
21. A. J. Bellezza, R. T. Hart, and C. F. Burgoyne, “The optic nerve head as a biomechanical structure: initial finite element modeling,” Invest. Ophthalmol. Vision Sci. 41(10), 2991–3000 (2000).
22. I. A. Sigal et al., “Predicted extension, compression and shearing of the optic nerve head tissues,” Exp. Eye Res. 85(3), 312–322 (2007). http://dx.doi.org/10.1016/j.exer.2007.05.005
23. J. B. Jonas, M. C. Fernandez, and J. Sturmer, “Pattern of glaucomatous neuroretinal rim loss,” Ophthalmology 100(1), 63–68 (1993). http://dx.doi.org/10.1016/S0161-6420(13)31694-7
24. C. Breusegem et al., “The effect of trabeculectomy on ocular pulse amplitude,” Invest. Ophthalmol. Vision Sci. 51(1), 231–235 (2010). http://dx.doi.org/10.1167/iovs.09-3712
25. I. A. Sigal, J. G. Flanagan, and C. R. Ethier, “Factors influencing optic nerve head biomechanics,” Invest. Ophthalmol. Vision Sci. 46(11), 4189–4199 (2005). http://dx.doi.org/10.1167/iovs.05-0541
26. L. Beaton et al., “Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging,” Biomed. Opt. Express 6(5), 1694–1706 (2015). http://dx.doi.org/10.1364/BOE.6.001694
Biography

Maribel Hidalgo-Aguirre received her BSc and MSc degrees with honors in applied physics from the Benemerita Universidad Autonoma de Puebla, Mexico, in 2007 and 2010, respectively, with research experience in experimental physics in the area of nonlinear optics, for applications in fields such as microscopy and telecommunications. She is currently a PhD student at the Institut National de la Recherche Scientifique in Canada, working at Maisonneuve-Rosemont Hospital on image analysis for ophthalmology.

Julian Gitelman completed his MSc at McGill University studying the molecular mechanisms of learning and memory. He is currently a medical student at McGill.

Santiago Costantino received his PhD in ultrafast lasers from the Physics Department of the University of Buenos Aires in 2003. He moved to Canada for his postdoctoral training in microscopy and neuroscience at McGill University. He established his biophotonics lab at Maisonneuve-Rosemont Hospital, University of Montreal, in 2007. He is now an associate professor, and his current research spans microengineering, image analysis, and the development of optical tools for vision health.