Protocol for Deep Learning-Based Classification of Vitreomacular Adhesion in Diabetic Macular Edema using SD-OCT.
Brughanya Subramanian, A. Q. M Sala Uddin Pathan, Maitreyee Roy, Dhanashree Ratra, Salil S. Kanhere, Matthew P Simunovic, Rajiv Raman
Abstract
This study is a continuation of our previous work published in PLOS ONE [1]. This protocol provides a walk-through for designing a deep learning-based AI model that aids in classifying vitreomacular adhesion observed in patients with diabetic macular oedema on SD-OCT images.
Steps
Introduction
VMA describes residual adhesion between the vitreous and macula, occurring within the context of an incomplete posterior vitreous detachment. It might not lead to any retinal abnormality but may exert traction on the underlying macula, distorting the retinal architecture and leading to VMT.[2] VMA can be broadly classified as either focal (<1500 µm) or broad (≥1500 µm) based on the size of the adhesion.[3]
Researchers have so far concentrated mainly on the outer retina in OCT scans when searching for biomarkers to predict visual outcomes in patients with DME. The vitreomacular interface has remained largely unexplored as a potential biomarker. Recently, it was shown to be of some importance in DME: in the presence of VMA, OCT biomarkers such as hyperreflective dots, bridging processes and inner nuclear layer cysts were more likely to be associated with visual impairment than in its absence.[1] It is worthwhile to explore this further and define the role of the vitreomacular interface as a biomarker in DME.
The purpose of this protocol is to design an automated deep learning model that can effectively identify the presence of VMA and categorize OCT images into broad VMA, focal VMA, and control groups, which in turn can serve as a reliable and efficient diagnostic tool. This accurate classification has the potential to significantly benefit ophthalmologists in making well-informed decisions, thereby enhancing patient care.
Materials and methods
Study design: Prospective observational study
Sample: We will use the same sample from our previous study to design
the AI model.
Collection method followed
In our previous study, we categorized participants into two groups: cases in the presence of VMA (+) and controls in the absence of VMA (-). The presence of VMA was identified according to the International Vitreomacular Traction Study Group classification [4], in which VMA is defined by the following criteria: (1) evidence of perifoveal vitreous cortex detachment from the retinal surface, (2) macular attachment of the vitreous cortex within a 3 mm radius of the fovea, and (3) no detectable change in foveal contour or underlying retinal tissues.
We followed the same International Vitreomacular Traction Study Group classification to further subdivide the VMA (+) group in our study into two groups, either focal (<1500 µm) or broad (≥1500 µm), based on the size of the adhesion.[4]
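As a minimal illustration of this size-based grouping, the sketch below assumes the adhesion width has already been measured on the SD-OCT B-scan in micrometres; the function name and example values are ours and purely illustrative.

```python
def classify_vma(adhesion_width_um: float) -> str:
    """Assign the VMA subgroup from a measured adhesion width (micrometres),
    using the size cut-off stated in this protocol."""
    return "focal VMA" if adhesion_width_um < 1500 else "broad VMA"

print(classify_vma(900))   # focal VMA
print(classify_vma(2100))  # broad VMA
```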
The medical records of all individuals were reviewed for baseline demographics, including age, gender, duration of DM, DR severity, cardiovascular comorbidities, dyslipidaemia, slit-lamp biomicroscopy examination, intraocular pressure, the number and type of anti-VEGF injections given and dilated fundus evaluation. BCVA was measured with Snellen charts and converted to the logarithm of the minimum angle of resolution (logMAR).
Inclusion and Exclusion criteria
Inclusion and exclusion criteria will remain the same as in our prior work.
Inclusion:
Subjects were included in the study if they met the following criteria: (1) individuals (18 years or older) with type 2 diabetes mellitus and DME, (2) availability of SD-OCT scans of sufficient quality for grading, and (3) no confounding ocular condition other than DME that could decrease visual acuity.
Exclusion:
Subjects were excluded if they exhibited any of the following: (1) vitreomacular interface abnormalities besides VMA, such as ERM, VMT, proliferative membranes, tractional retinal detachment, hazy media, vitreous haemorrhage and lamellar or full-thickness macular hole, (2) pre-existing retinal or macular disease other than DR or DME, and (3) SD-OCT images of poor quality that were insufficient for assessment.
Methods
- Data processing
a. The dataset comprises high-quality SD-OCT TIFF images from broad VMA, focal VMA, and control group cases, ensuring a representative sample of each class.
b. Pre-processing steps involve noise reduction techniques, intensity normalization to address variations in brightness, and cropping to remove irrelevant regions (a minimal code sketch follows this list).
c. Anonymization and patient privacy measures are implemented to comply with ethical guidelines and ensure confidentiality.
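The protocol does not prescribe a specific pre-processing implementation; the following is a minimal sketch of the steps listed above, assuming 8-bit grayscale TIFF B-scans read with OpenCV. The file path, crop margin and output size are illustrative placeholders.

```python
import cv2
import numpy as np

def preprocess_bscan(path: str, crop_margin: int = 50) -> np.ndarray:
    """Load an SD-OCT B-scan TIFF and apply basic pre-processing."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)             # read as 8-bit grayscale
    img = cv2.medianBlur(img, 3)                              # simple speckle/noise reduction
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)   # intensity normalization
    h, w = img.shape
    img = img[crop_margin:h - crop_margin, crop_margin:w - crop_margin]  # drop irrelevant borders
    return cv2.resize(img, (224, 224))                        # common CNN input size

# Example (hypothetical path):
# scan = preprocess_bscan("data/vma_broad/scan_001.tif")
```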
- Data Annotation and Labelling
a. In our previous work, two graders (optometrists) reviewed each OCT image, annotated the data and assigned labels, classifying each image as broad VMA, focal VMA, or control based on specific diagnostic criteria. In case of discrepancies, the annotations were examined by an expert (retina specialist).
b. In this study we will be using the pre-annotated data for training the AI model.
- Data Split and Validation
a. Divide the dataset into training and testing subsets to facilitate model development and evaluation.
b. The split ratio is determined, considering factors such as dataset size, class distribution, and the need for robust performance evaluation.
c. Ensure that each subset contains a proportional representation of broad VMA, focal VMA, and control group images, maintaining the original class distribution (e.g., through stratified sampling, as sketched after this list), and employ imbalanced-learning techniques such as class weighting if class sizes differ markedly.
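As one possible realization of this split, the sketch below uses scikit-learn's stratified splitting so that the class proportions are preserved in both subsets; the 80/20 ratio, label names and random seed are illustrative assumptions rather than requirements of the protocol.

```python
from sklearn.model_selection import train_test_split

def split_dataset(image_paths, labels, test_size=0.2, seed=42):
    """Stratified train/test split that preserves the original class distribution.

    image_paths -- list of pre-processed image file paths
    labels      -- matching list with values in {"broad_vma", "focal_vma", "control"}
    """
    return train_test_split(
        image_paths,
        labels,
        test_size=test_size,
        stratify=labels,      # keep broad/focal/control proportions in both subsets
        random_state=seed,
    )

# train_x, test_x, train_y, test_y = split_dataset(image_paths, labels)
```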
- Deep Learning Model Architecture
a. Select suitable deep learning architectures for the classification task, considering previous research and the complexity of the problem (one candidate backbone is sketched after this list).
b. Describe the chosen architectures in detail, including the layers, activation functions, and any specific modifications made for the VMA classification task.
c. Capture relevant features from OCT images and facilitate accurate classification using the selected architectures.
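The protocol deliberately leaves the architecture open. As one candidate, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 (a common transfer-learning baseline for OCT image classification) with a new three-class head; the choice of backbone and the torchvision API shown here are our assumptions, not requirements of the study.

```python
import torch.nn as nn
from torchvision import models

def build_vma_classifier(num_classes: int = 3) -> nn.Module:
    """ResNet-50 backbone pretrained on ImageNet with a fresh head for the
    broad VMA / focal VMA / control classification task."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the ImageNet head
    return model

model = build_vma_classifier()
print(model.fc)  # Linear(in_features=2048, out_features=3, bias=True)
```

Since the pretrained backbone expects three-channel input, the grayscale B-scans would typically be replicated to three channels (e.g., with torchvision's transforms.Grayscale(num_output_channels=3)) before being fed to the network.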
- Model Training
a. Train the deep learning model using the labelled training subset of the dataset.
b. Specify the optimization algorithm and its hyperparameters.
c. Determine the batch size, number of epochs, and early stopping criteria to prevent overfitting and achieve optimal training performance.
d. Employ data augmentation techniques to increase the dataset's diversity and enhance model generalization (an illustrative training setup is sketched below).
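A minimal training loop consistent with the steps above might look as follows, assuming the PyTorch model from the previous sketch and standard torch.utils.data DataLoaders that yield (image, label) batches. The optimizer (Adam), learning rate, patience and augmentation transforms are illustrative choices, not values fixed by the protocol.

```python
import torch
from torch import nn, optim
from torchvision import transforms

# Illustrative hyperparameters -- to be specified and tuned for the final model.
MAX_EPOCHS, PATIENCE, LR = 50, 5, 1e-4

# Data augmentation to diversify the (small) training set; applied to PIL images.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=5),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

def train(model, train_loader, val_loader, device="cuda"):
    """Train with Adam and stop early when validation loss stops improving."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=LR)
    best_val_loss, epochs_without_improvement = float("inf"), 0

    for epoch in range(MAX_EPOCHS):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        # Validation pass used for the early-stopping criterion.
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                val_loss += criterion(model(images), labels).item()

        if val_loss < best_val_loss:
            best_val_loss, epochs_without_improvement = val_loss, 0
            torch.save(model.state_dict(), "best_vma_model.pt")  # keep best checkpoint
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= PATIENCE:
                break  # early stopping: no improvement for PATIENCE epochs
    return model
```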
- Model Evaluation and Validation
a. Define evaluation metrics, including accuracy, precision, recall, F1-score, and AUC to assess the model's performance.
b. Evaluate the trained model using the testing subset to measure its classification accuracy and generalization ability.
c. Calculate performance metrics for each class separately to analyze the model's effectiveness in identifying broad VMA, focal VMA, and control group OCT images (see the evaluation sketch after this list).
d. Discuss potential challenges or limitations encountered during the evaluation process and highlight areas for improvement.
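A compact way to compute the metrics listed above, assuming predicted labels and class probabilities for the test subset are available, is sketched below with scikit-learn; the label ordering and class names are illustrative.

```python
from sklearn.metrics import accuracy_score, classification_report, roc_auc_score

def evaluate(y_true, y_pred, y_prob,
             class_names=("broad VMA", "focal VMA", "control")):
    """Report overall and per-class metrics for the three-way classification.

    y_true -- true integer labels of the test set
    y_pred -- predicted integer labels
    y_prob -- predicted class probabilities, shape (n_samples, 3)
    """
    print("Accuracy:", accuracy_score(y_true, y_pred))
    # Per-class precision, recall and F1-score.
    print(classification_report(y_true, y_pred, target_names=list(class_names)))
    # Macro-averaged one-vs-rest AUC across the three classes.
    print("AUC (macro, OvR):", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```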
- Model Optimization and Fine-Tuning
a. Outline a model optimization procedure based on the initial evaluation results.
b. Employ hyperparameter tuning, architecture modifications, or ensemble methods to improve the model's performance (a simple grid-search sketch follows this list).
c. Validate the optimized model using a separate testing subset to ensure unbiased assessment and verify enhanced classification accuracy.
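A simple grid search is one way to carry out the tuning described above. In the sketch below, train_and_validate is a hypothetical user-supplied callable (for example, a wrapper around the training loop sketched earlier) that returns a validation metric such as macro F1; the grid values are illustrative.

```python
from itertools import product

def tune(train_and_validate,
         learning_rates=(1e-3, 1e-4, 1e-5),
         batch_sizes=(8, 16, 32)):
    """Exhaustive grid search over two illustrative hyperparameters.

    train_and_validate(lr, batch_size) must train the model with the given
    settings and return a validation score (higher is better).
    """
    best_cfg, best_score = None, -float("inf")
    for lr, bs in product(learning_rates, batch_sizes):
        score = train_and_validate(lr=lr, batch_size=bs)
        if score > best_score:
            best_cfg, best_score = (lr, bs), score
    return best_cfg, best_score
```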
- Result Analysis and Interpretation
a. Present the classification results obtained by the trained deep learning model for VMA Broad, VMA focal, and Control group OCT images.
b. Provide in-depth analysis and interpretation of the results, identifying patterns, correlations, and potential insights into the classification performance.
c. Compare the performance of the developed models with existing approaches or expert annotations, highlighting the advantages and limitations (a confusion-matrix sketch is shown below).
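For the per-class analysis and the comparison with grader labels, a confusion matrix is a convenient summary; the sketch below assumes true and predicted labels are available and uses scikit-learn's display utility with illustrative class names.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def plot_confusion(y_true, y_pred,
                   class_names=("broad VMA", "focal VMA", "control")):
    """Plot the confusion matrix comparing predictions with reference labels."""
    cm = confusion_matrix(y_true, y_pred)
    disp = ConfusionMatrixDisplay(confusion_matrix=cm,
                                  display_labels=list(class_names))
    disp.plot(cmap="Blues")
    plt.tight_layout()
    plt.show()
```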
Timeline of the study
The study is planned for a duration of 6 months.
Discussion
OCT has become indispensable in diagnosing various retinal conditions such as DR, AMD, ERM and MH, alongside other techniques such as fundus photography and fluorescein angiography.[7-9] Deep learning (DL), a subset of machine learning (ML), is revolutionizing how we approach the diagnosis and management of medical conditions.[10] Advanced DL methods can effectively identify pathological features, and in recent years several ML methods have emerged for recognizing OCT images in patients with significant eye disorders, including DR, AMD, ERM, glaucoma and CSCR.[11-15]
The automatic detection of abnormal signs in retinal OCT images serves as a crucial component in diagnosing retinal pathologies. This capability provides ophthalmologists with valuable insights, aiding them in the decision-making process. In conclusion, it lays the foundation for a more comprehensive and accessible approach to diagnosing and managing retinal diseases.
Study limitations:
- High-performance pre-processing is required to enhance image quality.
- Deep learning (DL) models are considered "black boxes": it is difficult to understand how they arrive at their predictions, and this lack of interpretability creates a trust issue among healthcare professionals and patients.
- The DL model will be trained on relatively small samples owing to limited data availability.