
Glaucoma Detection Using Deep Learning

Welcome to our Glaucoma Detection Using Deep Learning project, an application of advanced AI in healthcare. Its goal is to identify glaucoma at an early stage, one of the areas where AI is making a real impact in medicine.


As a beginner you may not know this, but glaucoma is a leading cause of permanent blindness, which is why detecting it at an early stage is so important. In this project, we use modern deep learning models, including Vision Transformers, a custom CNN, and VGG16, to analyze retinal images and predict whether a patient might have glaucoma.


Let's see how we are going to work on it!


Project Overview

Glaucoma Detection Using Deep Learning is an application of advanced neural networks that classifies retinal images as either "glaucoma positive" or "glaucoma negative". The primary aim is to develop a high-accuracy diagnostic tool that uses a convolutional neural network (CNN), a Vision Transformer, and a VGG16 model to help ophthalmologists detect glaucoma in its early stages.


This project takes a stepwise approach to automating glaucoma detection by training deep learning models on medical images. If AI in the medical field interests you, this project demonstrates how machine learning can make a real difference by reducing the odds of preventable blindness.


Prerequisites

Before we jump into the code, here's what you'll need:

  • An understanding of Python programming and familiarity with Google Colab

  • Basic knowledge of deep learning and medical images

  • Comfort with frameworks and libraries such as TensorFlow, Keras, NumPy, OpenCV, and Seaborn for handling data, building models, and visualizing both the data and model performance

  • A training set and a testing set of retinal images

Once you have these tools in place, you will see that almost all of them come into play in the steps that follow. And do not stress if you are not a Python expert; the tutorial walks you through every line of the code!



Approach

The approach for this work consists of developing several deep learning models (Vision Transformers, a custom CNN, and VGG16), followed by assessing and visualizing the results. The major steps involve:

  • Obtaining and preparing data (augmentation, resizing, normalizing)

  • Training and measuring the performance of several architectures (a minimal training sketch follows this list)

  • Visualizing performance with confusion matrices and accuracy plots
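
To make the training step concrete, here is a minimal sketch of a small custom CNN for 128x128 retinal images, compiled for binary classification. The architecture and hyperparameters below are illustrative assumptions, not the exact models used in this project.

from tensorflow.keras import layers, models

# Illustrative small CNN for 128x128 RGB retinal images (assumed architecture)
def build_simple_cnn(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(1, activation='sigmoid')  # binary output: glaucoma positive vs. negative
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

cnn_model = build_simple_cnn()
cnn_model.summary()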


Workflow and Methodology

This project can be divided into the following basic steps:

  • Data Collection: We collected a retinal image dataset, labeled as glaucoma positive or glaucoma negative, from Kaggle.

  • Data Preprocessing: To improve model performance and achieve higher accuracy, we applied several preprocessing techniques. First, we augmented the dataset to balance the classes. Then we resized the images and normalized pixel values to the range 0 to 1.

  • Model Selection: Three models are used in this project (Vision Transformer, custom CNN, and VGG16).

  • Training and Testing: Each model was trained on the preprocessed dataset and later tested on data that was not used during training.

  • Model Evaluation: Each model's performance is assessed using accuracy, precision, recall, a confusion matrix, and related metrics.

The methodology includes:

  • Data Preprocessing: The images are resized, normalized, and augmented to improve the performance of the models trained on them.

  • Model Training: Each model is trained for 100 epochs to improve its performance.

  • Evaluation: Standard metrics (accuracy, confusion matrix) are applied to assess how well each model performs (see the sketch after this list).
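
As an illustration of this evaluation step, here is a minimal sketch that computes a confusion matrix and classification report and plots the matrix with Seaborn. The names model (a trained Keras model) and validation_generator (a non-shuffled generator of images and binary labels) are assumptions for the example.

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, classification_report

# `model` and `validation_generator` are assumed to exist (hypothetical names)
y_prob = model.predict(validation_generator)    # predicted probabilities
y_pred = (y_prob > 0.5).astype(int).ravel()     # threshold at 0.5
y_true = validation_generator.classes           # ground-truth labels

print(classification_report(y_true, y_pred, target_names=['Negative', 'Positive']))

cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Negative', 'Positive'],
            yticklabels=['Negative', 'Positive'])
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.title('Confusion Matrix')
plt.show()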


Data Collection

We collected a dataset of 1,800 retinal images, covering both glaucoma-positive and glaucoma-negative cases, from Kaggle. After data augmentation, the dataset grew to 3,000 images, of which 80% were set aside for training and 20% for validation.
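
If you want to sanity-check the class balance and the train/validation split after organizing the images into folders, a quick count like the sketch below can help. The folder layout and names here are assumptions, not the project's actual paths.

import os

# Hypothetical layout: <base_dir>/<split>/<class>/*.png
base_dir = '/content/drive/MyDrive/Glaucoma'
for split in ['train', 'validation']:
    for label in ['glaucoma_positive', 'glaucoma_negative']:
        folder = os.path.join(base_dir, split, label)
        count = len(os.listdir(folder)) if os.path.isdir(folder) else 0
        print(f'{split}/{label}: {count} images')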

Data Preparation

Data Preparation Workflow

Resizing Images: All images were resized to 128x128 pixels to ensure uniform input to the models.

Augmentation: Rotation, flipping, and contrast adjustments, among other transformations, are applied to increase the diversity of the dataset.
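
Here is a minimal sketch of how this preparation could look with Keras's ImageDataGenerator. The directory paths and the specific augmentation parameters are assumptions for illustration, not the project's exact settings.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Assumed dataset folders on Google Drive; adjust to your own layout
train_dir = '/content/drive/MyDrive/Glaucoma/train'
val_dir = '/content/drive/MyDrive/Glaucoma/validation'

# Augment and normalize the training images (pixel values scaled to 0-1)
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,            # random rotations
    horizontal_flip=True,         # random horizontal flips
    brightness_range=(0.8, 1.2)   # mild brightness variation as a simple stand-in for contrast changes
)

# Only normalize the validation images
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(128, 128), batch_size=32, class_mode='binary')
validation_generator = val_datagen.flow_from_directory(
    val_dir, target_size=(128, 128), batch_size=32, class_mode='binary', shuffle=False)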


Code Explanation

Step 1:

Mounting Google Drive

This command mounts your Google Drive at the indicated folder path (/content/drive). When it runs, you will be asked to grant access to your Google Drive account. Once access is granted, you can read and write files directly from your Drive, which is very helpful for loading datasets and saving model results throughout the project.

from google.colab import drive
drive.mount('/content/drive')

Install the necessary packages

In this code, several packages are installed. Keras (pinned below version 3.0.0) is used for model development, and mediapipe-model-maker provides MediaPipe's model-building tools for multimedia applications. TensorFlow Addons extends the core TensorFlow framework with additional features such as extra optimizers and layers. Keras Applications provides pre-trained models and supporting layers for easy transfer learning or feature extraction. Finally, Einops makes it easy to reshape and reorder tensors in deep learning code.

!pip install keras
!pip install 'keras<3.0.0' mediapipe-model-maker
!pip install tensorflow-addons
!pip install keras-applications
!pip install einops

Import the necessary libraries

This code block imports all the libraries required for creating, training, and evaluating the models. It also imports image-processing libraries such as PIL and OpenCV for handling images, and Matplotlib and Seaborn for data visualization. Scikit-learn utilities support model evaluation with metrics such as confusion matrices and classification reports.

# Standard library and general-purpose imports
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from PIL import Image, ImageOps
from sklearn.metrics import confusion_matrix, classification_report

# TensorFlow / Keras imports
import tensorflow
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras
import keras
from tensorflow.keras import models, layers, optimizers
from tensorflow.keras.models import Model, Sequential, model_from_json
from tensorflow.keras.layers import (
    Input, Dense, Dropout, Activation, Flatten,
    Conv2D, Convolution2D, SeparableConv2D, ZeroPadding2D, UpSampling2D,
    MaxPooling2D, MaxPool2D, AvgPool2D, GlobalAveragePooling2D, GlobalAvgPool2D,
    BatchNormalization, LayerNormalization, ReLU, ELU, Concatenate
)
from tensorflow.keras.layers.experimental.preprocessing import Rescaling
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import SGD, RMSprop
from tensorflow.keras.utils import to_categorical

Step 2:

Data collection and preparation

Load Dataset:

This section of code focuses on setting up the dataset paths. It starts by pointing the program to the main folder on Google Drive that contains the glaucoma dataset, and then defines two separate paths: one for the training set and another for the validation set.

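A minimal sketch of this path setup, assuming a hypothetical folder layout on Google Drive (adjust the names to match your own dataset), might look like this:

import os

# Assumed location of the glaucoma dataset on Google Drive (hypothetical path)
base_dir = '/content/drive/MyDrive/Glaucoma'

# Separate folders for the training and validation sets
train_dir = os.path.join(base_dir, 'train')
val_dir = os.path.join(base_dir, 'validation')

print('Training data path:', train_dir)
print('Validation data path:', val_dir)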