
Deep Learning Estimation of 10-2 and 24-2 Visual Field Metrics based on Thickness Maps from Macula Optical Coherence Tomography

Evaluation of a diagnostic technology

Published April 27, 2021
### Purpose

To develop deep learning (DL) systems that estimate visual function from macula-centered spectral domain optical coherence tomography (SDOCT) images.

### Design

Evaluation of a diagnostic technology.

### Participants

2,408 10-2 visual field (VF)-SDOCT pairs and 2,999 24-2 VF-SDOCT pairs collected from 645 healthy and glaucoma participants (1,222 eyes).

### Methods

DL models were trained on macula thickness maps from Spectralis macula SDOCT to estimate 10-2 and 24-2 VF mean deviation (MD) and pattern standard deviation (PSD). Separate DL models were trained on thickness maps of the retinal nerve fiber layer (RNFL), ganglion cell layer (GCL), inner plexiform layer (IPL), combined ganglion cell and inner plexiform layers (GCIPL), ganglion cell complex (GCC), and full retina (RET). A combined DL model was trained using all layers. Linear regression models of mean layer thicknesses were used for comparison.

### Main Outcome Measures

DL model estimates were evaluated against measured 10-2 and 24-2 VF metrics using R² and mean absolute error (MAE) on independent test sets.

### Results

Combined DL models estimating 10-2 metrics achieved R² [95% CI] of 0.82 [0.68 – 0.89] for MD and 0.69 [0.55 – 0.81] for PSD, with MAE of 1.9 [1.6 – 2.4] dB for MD and 1.5 [1.2 – 1.9] dB for PSD. This was significantly better than mean thickness estimates for 10-2 MD (R² 0.61 [0.47 – 0.71], MAE 3.0 [2.5 – 3.5] dB) and 10-2 PSD (R² 0.46 [0.31 – 0.60], MAE 2.3 [1.8 – 2.7] dB). Combined DL models estimating 24-2 metrics achieved R² of 0.79 [0.72 – 0.84] for MD and 0.68 [0.53 – 0.79] for PSD, with MAE of 2.1 [1.8 – 2.5] dB for MD and 1.5 [1.3 – 1.9] dB for PSD. This was significantly better than mean thickness estimates for 24-2 MD (R² 0.41 [0.26 – 0.57], MAE 3.4 [2.7 – 4.5] dB) and 24-2 PSD (R² 0.38 [0.20 – 0.57], MAE 2.4 [2.0 – 2.8] dB). In estimating 10-2 MD, the GCIPL (R² = 0.79) and IPL (R² = 0.78) models had the highest individual performance. For 24-2 MD, the GCC (R² = 0.75) and RNFL (R² = 0.72) models had the highest individual performance.

### Conclusions

DL models improved estimates of functional loss from SDOCT imaging. Accurate functional estimates can help clinicians more effectively individualize VF testing to each patient.
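The main outcome measures above are R² and MAE with 95% confidence intervals. As a rough illustration only (the study does not publish its code, and the CI method, resampling scheme, and variable names below are assumptions), the sketch below shows how such metrics could be computed in Python for any estimator's output, using a simple percentile bootstrap over eyes:

```python
# Minimal sketch, not the authors' implementation: compute R^2 and mean
# absolute error (MAE) with percentile-bootstrap 95% CIs for estimated vs.
# measured visual field values. The bootstrap scheme (resampling eyes
# independently) and all names here are illustrative assumptions.
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error

def metrics_with_ci(measured, estimated, n_boot=2000, seed=0):
    """Return point estimates and percentile 95% CIs for R^2 and MAE."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(measured)
    r2s, maes = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample test eyes with replacement
        r2s.append(r2_score(measured[idx], estimated[idx]))
        maes.append(mean_absolute_error(measured[idx], estimated[idx]))
    ci = lambda v: tuple(np.percentile(v, [2.5, 97.5]))
    return {
        "R2": (r2_score(measured, estimated), ci(r2s)),
        "MAE_dB": (mean_absolute_error(measured, estimated), ci(maes)),
    }

# Synthetic example: hypothetical 24-2 MD values (dB) and estimates with
# roughly 2 dB error, comparable in scale to the MAE reported above.
rng = np.random.default_rng(1)
md_measured = rng.uniform(-20, 2, size=300)
md_estimated = md_measured + rng.normal(0, 2.0, size=300)
print(metrics_with_ci(md_measured, md_estimated))
```

In practice the resampling would likely need to respect clustering (multiple VF-SDOCT pairs per eye and per participant); the independent-eye bootstrap here is kept simple for illustration.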