Skin Cancer Detection Using Deep Learning

Imagine if diagnosing skin cancer could start with nothing more than uploading a picture of the skin. In this project, deep learning makes that a real possibility. We use state-of-the-art convolutional neural networks to build a system that classifies skin cancer from images, helping healthcare professionals detect it faster and more accurately. With pre-trained models such as EfficientNetB4 and DenseNet121, alongside a basic CNN, we bring to medical imaging the ability to recognize skin cancer at an early stage.

The potential impact is clear: early detection of skin cancer can save lives. This project is not simply about writing lines of code; it is about enhancing healthcare with advanced technology as the key tool.

Project Overview

In this project, we take on the challenge of building a skin cancer detection system with deep learning. Using a dataset of skin cancer images, we fine-tune two advanced pre-trained models, EfficientNetB4 and DenseNet121, as well as a basic CNN, to classify skin lesions into types such as melanoma and pigmented benign keratosis. With EfficientNetB4, we obtained an accuracy of over 80%.

The outcome is a set of predictions that can support doctors in the diagnostic process and help catch deadly diseases at an early stage. Whether you're a young professional interested in healthcare or an everyday internet user wondering how AI can improve human life, this project demonstrates how machine learning can help.

Prerequisites

Before we jump into the code, here’s what you’ll need:

  • An understanding of Python programming and how to use Google Colab
  • Basic knowledge of deep learning and medical images
  • Comfort with frameworks such as TensorFlow, Keras, NumPy, OpenCV, and Seaborn for handling data, building models, and visualizing data and model performance
  • A skin cancer dataset

Once you have these tools in place, you will see how almost all of them come into play in the steps that follow. And do not stress if you are not a Python master: the tutorial walks you through every line of code!

Approach

The approach involves building, training, and evaluating deep learning models on a skin cancer dataset. We use image-processing techniques and deep-learning architectures to classify skin lesions into different types of cancer. Pre-trained models such as DenseNet121 and EfficientNetB4 boost the accuracy of the classification system.

The major steps involve:

  • Obtaining and preparing data (augmentation, resizing, normalizing)
  • Training and measuring the performance of several architectures
  • Visualizing performance with confusion matrices and accuracy plots

Workflow and Methodology

This project can be divided into the following basic steps:

  • Data Collection: We collected a skin cancer dataset, labeled with the different cancer types, from Kaggle.
  • Data Preprocessing: To improve model performance and achieve higher accuracy, we applied several preprocessing techniques. First, we augmented the dataset to balance the classes. Then we resized the images and normalized pixel values to the range 0 to 1.
  • Model Selection: Three models are used in this project (a custom CNN, EfficientNetB4, and DenseNet121).
  • Training and Testing: Each model was trained on the preprocessed dataset and later tested on data that was not used during training.
  • Model Evaluation: Model performance is assessed with accuracy, precision, recall, the confusion matrix, etc.

The methodology includes:

  • Data Preprocessing: The images are resized, normalized, and augmented to improve model performance.

  • Model Training: Each model is trained for 100 epochs.

  • Evaluation: Standard metrics (accuracy, precision, recall, F1-score, and the confusion matrix) are used to assess each model.

Dataset Collection

The dataset we used had 2,500 images, which were expanded through augmentation to 4,500 images. The dataset was divided 80/20: 80% of the data was used for training the model and 20% for validation.
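Our dataset already comes split into training and validation folders. If you start from a single unsplit folder instead, a sketch like the following could produce the same 80/20 layout (the all_images source folder is a hypothetical example, and the seed value is arbitrary):

import os
import random
import shutil

source_dir = '/content/drive/MyDrive/Skin_Cancer_Detection/all_images'  # hypothetical unsplit folder
target_dir = '/content/drive/MyDrive/Skin_Cancer_Detection/Datasets'

random.seed(42)  # make the split reproducible
for category in os.listdir(source_dir):
    files = os.listdir(os.path.join(source_dir, category))
    random.shuffle(files)
    split_idx = int(0.8 * len(files))  # 80% training, 20% validation
    for subset, subset_files in [('training', files[:split_idx]),
                                 ('validation', files[split_idx:])]:
        dest = os.path.join(target_dir, subset, category)
        os.makedirs(dest, exist_ok=True)
        for f in subset_files:
            shutil.copy(os.path.join(source_dir, category, f), dest)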

Data Preparation

The dataset was pre-processed by resizing the images to 128 × 128 pixels and scaling pixel values to the range 0 to 1. To increase the variability of the dataset, data augmentation techniques were applied.

Data Preparation Workflow

  • Load the dataset from Google Drive.
  • Apply augmentation: rotation, flipping, and contrast changes, among others, increase the diversity of the dataset (a minimal sketch follows this list).
  • Process and resize the images to the input size the models expect, standardizing the model input.
  • Split the dataset into training and validation sets.
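The augmentation step itself is not shown in the walkthrough below, which works with an already-augmented dataset. A minimal sketch using Keras' ImageDataGenerator, assuming the training folder path used later in the project, could look like this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings: random rotations, flips, and brightness jitter.
augmenter = ImageDataGenerator(
    rotation_range=30,
    horizontal_flip=True,
    vertical_flip=True,
    brightness_range=(0.8, 1.2)
)

# Stream augmented batches straight from the training folder; passing
# save_to_dir would also write the augmented copies back to disk.
train_generator = augmenter.flow_from_directory(
    '/content/drive/MyDrive/Skin_Cancer_Detection/Datasets/training',
    target_size=(128, 128),
    batch_size=32,
    class_mode='sparse'  # integer labels, matching sparse_categorical_crossentropy
)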

Code Explanation

STEP 1:

Mounting Google Drive

This command mounts your Google Drive at the indicated folder path (/content/drive). After this step runs, you will need to grant access to your Google Drive account. Once access is granted, reading and writing files becomes straightforward, since you can work straight from your Drive, which is very helpful for loading datasets and saving model results during the project.

from google.colab import drive
drive.mount('/content/drive')

Import the necessary libraries.

This code block imports all the libraries required for creating, training, and evaluating the models. It also imports image-processing libraries such as PIL and OpenCV for handling images, and matplotlib and seaborn for data visualization. Scikit-learn utilities support model evaluation with metrics such as the confusion matrix.

import os
import cv2
import numpy as np
from tqdm import tqdm
from PIL import Image, ImageOps
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report

import tensorflow as tf
from tensorflow.keras import models, layers, optimizers
from tensorflow.keras.models import Model, Sequential, model_from_json
from tensorflow.keras.layers import (
    Input, Dense, Dropout, Activation, Flatten, BatchNormalization,
    Conv2D, Convolution2D, SeparableConv2D, ZeroPadding2D, UpSampling2D,
    MaxPooling2D, MaxPool2D, AvgPool2D, GlobalAveragePooling2D,
    GlobalAvgPool2D, Concatenate, ReLU, ELU
)
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import SGD, RMSprop
from tensorflow.keras.utils import to_categorical

STEP 2:

Data collection and preparation

Load Dataset

This section of code arranges the dataset paths. It first points to the main folder containing the skin cancer datasets on Google Drive, then defines two paths: one for the training set and another for the validation set.

dataset='/content/drive/MyDrive/Skin_Cancer_Detection/Datasets'
train_folder = os.path.join(dataset,"training")
test_folder = os.path.join(dataset,"validation")

Listing categories

The code sets the size of the images, creates a list to hold the names of different classes, checks which classes are available in the training folder, and then prints those class names. This makes it easier to keep track of and understand the different types of images that will be used for training the model.

img_size = 128
categories = []
for i in os.listdir(train_folder):
    categories.append(i)
print(categories)

This function iterates over the category folders, reading and resizing each image. It keeps a count of images in every category and stores the processed images alongside their corresponding class numbers in a list. This takes care of the image preparation needed for training a model afterward.

# Function to process data
def process_data(folder, categories, img_size):
    data = []
    class_counts = {category: 0 for category in categories}
    for c in categories:
        path = os.path.join(folder, c)
        class_num = categories.index(c)
        for img in tqdm(os.listdir(path), desc=f"Processing {c}"):
            try:
                img_array = cv2.imread(os.path.join(path, img))
                img_resized = cv2.resize(img_array, (img_size, img_size))
                data.append([img_resized, class_num])
                class_counts[c] += 1
            except Exception as e:
                pass  # skip unreadable or corrupted image files
        print(f"Class '{c}' has {class_counts[c]} images")
    return data, class_counts

This code calls the process_data function on the training folder. It processes all the images, resizes them, labels them by category, and then prints the total number of training images processed.

training_data, train_class_counts = process_data(train_folder, categories, img_size)
print(f"Total training data: {len(training_data)}")

This code creates a bar chart displaying the number of images in each class of the training data, making it easy to see whether the distribution across categories is balanced or skewed.
plt.figure(figsize=(10, 8))
plt.bar(train_class_counts.keys(), train_class_counts.values())
plt.xlabel('Categories')
plt.ylabel('Number of Images')
plt.title('Class Distribution (Training Data)')
plt.xticks(rotation=90, ha='right')
# Color each bar using the viridis colormap
colors = plt.cm.get_cmap('viridis', len(train_class_counts))
for i, bar in enumerate(plt.gca().patches):
    bar.set_color(colors(i))
plt.tight_layout()
plt.show()

This code calls the process_data function. This function processes all the images in the validation folder, resizes them, labels them by category, and then prints the total number of validation images processed.

validation_data, val_class_counts = process_data(test_folder, categories, img_size)
print(f"Total validation data: {len(validation_data)}")

This code creates a visual bar chart that displays the count of images in each class for the validation data.
plt.figure(figsize=(10, 8))
plt.bar(val_class_counts.keys(), val_class_counts.values())
plt.xlabel('Categories')
plt.ylabel('Number of Images')
plt.title('Class Distribution (Validation Data)')
plt.xticks(rotation=90, ha='right')
# Color each bar using the viridis colormap
colors = plt.cm.get_cmap('viridis', len(val_class_counts))
for i, bar in enumerate(plt.gca().patches):
    bar.set_color(colors(i))
plt.tight_layout()
plt.show()

This code gathers the images (X_train) and labels (Y_train) for training. It reshapes the images into arrays of 128 × 128 pixels with 3 color channels and produces NumPy arrays ready to be fed to a neural network.
X_train = []
Y_train = []
for img, label in training_data:
    X_train.append(img)
    Y_train.append(label)
X_train = np.array(X_train).astype('float32').reshape(-1, img_size, img_size, 3)
Y_train = np.array(Y_train)
print(f"X_train= {X_train.shape} Y_train= {Y_train.shape}")

This code gathers the images (X_test) and labels (Y_test) for validation. It reshapes the images into arrays of 128 × 128 pixels with 3 color channels and produces NumPy arrays ready to be fed to a neural network. The final line normalizes both sets to the 0–1 range.
X_test = []
Y_test = []
for features,label in validation_data:
    X_test.append(features)
    Y_test.append(label)
X_test = np.array(X_test).astype('float32').reshape(-1, img_size, img_size, 3)
Y_test = np.array(Y_test)
print(f"X_test= {X_test.shape} Y_test= {Y_test.shape}")
X_train, X_test = X_train / 255.0, X_test / 255.0

STEP 3:

Visualization

This code randomly selects one image from each category in the training dataset and arranges the samples in a 3 × 3 grid, with each image titled by its category name. It gives an easy visual overview of a sample from each class.

images = []
for img_folder in sorted(os.listdir(train_folder)):
    img_items = os.listdir(train_folder + '/' + img_folder)
    img_selected = np.random.choice(img_items)
    images.append(os.path.join(train_folder,img_folder,img_selected))
fig=plt.figure(1, figsize=(15, 10))
for subplot, image_ in enumerate(images):
    category = image_.split('/')[-2]
    imgs = plt.imread(image_)
    ax = fig.add_subplot(3, 3, subplot + 1)
    ax.set_title(category, pad=10, size=14)
    ax.imshow(imgs)
    ax.axis('off')
plt.tight_layout()

STEP 4:

Model Building

Building a Basic CNN Model

This code builds a simple architecture of a Convolutional Neural Network (CNN) to classify images into 9 different classes.

The first two layers are convolutional layers with 32 and 64 filters, respectively. Both use ReLU activation and are followed by MaxPooling to reduce the spatial dimensions. After flattening, the output passes through a fully connected layer of 128 neurons with ReLU activation. The final layer uses softmax activation with 9 output neurons for multi-class classification.

The model is compiled with the Adam optimizer and sparse categorical cross-entropy loss while monitoring accuracy.

input_shape = (img_size, img_size, 3)
num_classes = 9
def build_basic_cnn(input_shape, num_classes):
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model
model = build_basic_cnn(input_shape, num_classes)
model.summary()

This code creates a checkpoint that saves the best version of the model during training based on validation accuracy. The model is trained for 100 epochs with a batch size of 32 on the training data. The checkpoint writes to basic_cnn.h5 only when validation accuracy improves, so the best model is preserved for later use.
checkpoint = ModelCheckpoint('basic_cnn.h5', monitor='val_accuracy', save_best_only=True, mode='max')
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=100, batch_size=32, callbacks=[checkpoint])

This code produces two plots side by side. One plot displays the CNN model's accuracy improvement over time. And the other shows the changes in loss. This effectively visualizes the model's performance throughout the training and validation phases.
def plot_history(history, title):
    plt.figure(figsize=(12, 4))
    plt.subplot(1, 2, 1)
    plt.plot(history.history['accuracy'], label='train_accuracy')
    plt.plot(history.history['val_accuracy'], label='val_accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.title(f'{title} Accuracy Curves')
    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'], label='train_loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.title(f'{title} Loss Curves')
    plt.show()
plot_history(history, 'Basic CNN')

Evaluating Model Performance

This code evaluates the CNN model's performance on the training and validation sets by computing accuracy and loss for each. It offers a clear insight into the model's performance.

valid_loss, valid_acc = model.evaluate(X_test, Y_test)
train_loss, train_acc= model.evaluate(X_train, Y_train)
print('\nValidation Accuracy:', valid_acc)
print('\nValidation Loss:', valid_loss)
print('\nTrain Accuracy:', train_acc)
print('\nTrain Loss:', train_loss)

This code loads the best saved checkpoint basic_cnn.h5 using TensorFlow's load_model function and assigns it to the variable model. It then evaluates the model on the test data (X_test and Y_test), returning the loss and accuracy metrics. The loss2 variable stores the computed loss, while accuracy holds the accuracy value. Finally, it prints the accuracy as a percentage, formatted to two decimal places, giving a clear indication of the model's effectiveness on unseen test data.
model = tf.keras.models.load_model('basic_cnn.h5')
loss2, accuracy = model.evaluate(X_test, Y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")

Plotting Confusion Matrix

The code predicts on the test data, creates a confusion matrix, and prints a classification report. The confusion matrix visually shows the numbers of correct and incorrect predictions per class, while the classification report presents class-wise metrics, which is useful in assessing the model's ability.

def plot_confusion_matrix(model, X_test, Y_test, categories, title):
    Y_pred = model.predict(X_test)
    Y_pred_classes = np.argmax(Y_pred, axis=1)
    cm = confusion_matrix(Y_test, Y_pred_classes)
    plt.figure(figsize=(10, 8))
    sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=categories, yticklabels=categories)
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    plt.title(title)
    plt.show()
    print("\n Classification Report:\n")
    print(classification_report(Y_test, Y_pred_classes, target_names=categories))
plot_confusion_matrix(model, X_test, Y_test, categories, 'Basic CNN Confusion Matrix')

Building a Densenet121

Here is the implementation of the DenseNet121 architecture. ImageNet weights are loaded for all layers except the top, and all base layers are frozen. A new Sequential model is then built on top of DenseNet121: a flattening layer, a dense layer with 512 units and ReLU activation, and a final softmax layer for multi-class classification.

The model uses the Adam optimizer with a learning rate of 0.0001 and sparse categorical cross-entropy loss, which suits this integer-labeled classification task.

from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
DenseNet121_model = DenseNet121(weights='imagenet', include_top=False, input_shape=input_shape)
for layer in DenseNet121_model.layers:
    layer.trainable = False
# Create a new Sequential model
DenseNet121_custom_model = Sequential()
DenseNet121_custom_model.add(DenseNet121_model)
DenseNet121_custom_model.add(Flatten())
DenseNet121_custom_model.add(Dense(512, activation='relu'))
DenseNet121_custom_model.add(Dense(num_classes, activation='softmax'))
learning_rate = 0.0001
optimizer = Adam(learning_rate=learning_rate)
DenseNet121_custom_model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

The model is trained for 100 epochs with a batch size of 64 on the training data.
DenseNet121_pretrained = DenseNet121_custom_model.fit(x=X_train, y=Y_train, epochs=100, validation_data=(X_test, Y_test), batch_size=64)

This code reuses the plot_history function defined earlier to produce two plots side by side: one showing the DenseNet121 model's accuracy over time and the other the changes in loss. This effectively visualizes the model's performance throughout the training and validation phases.
plot_history(DenseNet121_pretrained, 'DenseNet121')

This code evaluates the DenseNet121 model's performance on the training and validation sets by computing accuracy and loss for each. It offers a clear insight into the model performance.
valid_loss, valid_acc = DenseNet121_custom_model.evaluate(X_test, Y_test)
train_loss, train_acc=DenseNet121_custom_model.evaluate(X_train, Y_train)
print('\nValidation Accuracy:', valid_acc)
print('\nValidation Loss:', valid_loss)
print('\nTrain Accuracy:', train_acc)
print('\nTrain Loss:', train_loss)

This code evaluates the DenseNet121_custom_model on the test data (X_test and Y_test), calculating the loss and accuracy. The loss variable holds the loss value, and accuracy stores the accuracy metric. It then prints the accuracy as a percentage, formatted to two decimal places, providing a quick summary of the model's performance on the test dataset.
loss, accuracy = DenseNet121_custom_model.evaluate(X_test, Y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")

This code saves the DenseNet121 model.
# Save the model
DenseNet121_custom_model.save('densenet121_model.h5')

The code loads the saved model and reuses the plot_confusion_matrix function defined earlier: it predicts on the test data, plots a confusion matrix, and prints a classification report. The confusion matrix visually shows correct and incorrect predictions per class, while the classification report presents class-wise metrics, which is useful in assessing the model's ability.
loaded_model = tf.keras.models.load_model('densenet121_model.h5')
# Plot confusion matrix and classification report
plot_confusion_matrix(loaded_model, X_test, Y_test, categories, 'DenseNet121 Confusion Matrix')

Building an EfficientNetB4

Installing EfficientNet Library

The command !pip install -q efficientnet installs the efficientnet library quietly (without verbose output). This library provides pre-trained EfficientNet models, which are commonly used for tasks in image classification due to their efficiency and high performance.

!pip install -q efficientnet

Building a Custom Classifier with Pre-trained EfficientNetB4

In this code block, we load a pre-trained EfficientNetB4 model with weights trained on the ImageNet dataset. Setting include_top=False excludes the fully connected layers, and the base network is frozen (enet.trainable = False) to preserve the pre-trained weights.

A custom classifier head is then added on top of EfficientNetB4. It begins with a global max pooling layer, followed by a dense layer with 256 units. Batch normalization and 50% dropout are applied to prevent overfitting, and the final layer is a dense softmax layer for classification. The model is compiled with the Adam optimizer and sparse categorical cross-entropy loss.

import efficientnet.tfkeras as efn
enet = efn.EfficientNetB4(
    input_shape=input_shape,
    weights='imagenet',
    include_top=False
)
# Freeze the pre-trained base so only the new classifier head is trained,
# matching the description above.
enet.trainable = False
x = enet.output
x = tf.keras.layers.GlobalMaxPooling2D()(x)
x = tf.keras.layers.Dense(256, activation='relu')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.5)(x)
y = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
e_model_b4 = tf.keras.Model(inputs=enet.input, outputs=y)
e_model_b4.compile(
    optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

This code trains the EfficientNetB4 model on the dataset for 100 epochs using a batch size of 64.
efficientnet_b4 = e_model_b4.fit(x=X_train, y=Y_train, epochs=100, validation_data=(X_test, Y_test), batch_size=64)

This code reuses the plot_history function to produce two plots side by side: one showing the EfficientNetB4 model's accuracy over time and the other the changes in loss. This effectively visualizes the model's performance throughout the training and validation phases.
plot_history(efficientnet_b4, 'EfficientNetB4')

This code evaluates the EfficientNetB4 model's performance on the training and validation sets by computing accuracy and loss for each. It offers a clear insight into the model performance.
valid_loss, valid_acc = e_model_b4.evaluate(X_test, Y_test)
train_loss, train_acc = e_model_b4.evaluate(X_train, Y_train)
print('\nValidation Accuracy:', valid_acc)
print('\nValidation Loss:', valid_loss)
print('\nTrain Accuracy:', train_acc)
print('\nTrain Loss:', train_loss)

This code evaluates the e_model_b4 model on the test data (X_test and Y_test), calculating the loss and accuracy. It then prints the accuracy as a percentage, formatted to two decimal places, providing a summary of the model's performance on the test dataset.
loss, accuracy = e_model_b4.evaluate(X_test, Y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")

This code saves the EfficientNetB4 model.
# Save the model
e_model_b4.save('efficientnet_b4_model.h5')

The code reuses the plot_confusion_matrix function to predict on the test data, plot the confusion matrix, and print a classification report for the EfficientNetB4 model.
# Plot confusion matrix and classification report for EfficientNetB4
plot_confusion_matrix(e_model_b4, X_test, Y_test, categories, 'EfficientNetB4 Confusion Matrix')

STEP 5:

Image Classification and Display with Pre-trained EfficientNetB4 Model

This code loads the saved model (efficientnet_b4_model.h5) and an image from a specified path, resizing the image to fit the model's input requirements. The image's pixel values are normalized to the range 0 to 1, and the array's dimensions are expanded to include a batch size, making it ready for the model's input format. The model then predicts the class of the image, retrieves the predicted class label, and prints it. Finally, it displays the image with a title showing the predicted class, providing a visual alongside the classification result.

loaded_model = tf.keras.models.load_model('efficientnet_b4_model.h5')
img_array = cv2.imread('/content/drive/MyDrive/new_projects/skin_cancer/img_001.jpg')
img_resized = cv2.resize(img_array, (img_size, img_size))
img_array = img_resized / 255.0
img_array = np.expand_dims(img_array, axis=0)
prediction = loaded_model.predict(img_array)
predicted_class_index = np.argmax(prediction)
predicted_class = categories[predicted_class_index]
print("Predicted Class:", predicted_class)
# Display the image
plt.imshow(cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB))
plt.title(f"Predicted: {predicted_class}")
plt.axis('off')
plt.show()
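As a small extension (not part of the original walkthrough), a diagnostic aid is often more useful when it reports the few most likely classes rather than a single label. This sketch reuses the prediction array and categories list from the block above:

# Show the three most probable classes with their confidence scores.
probs = prediction[0]                # softmax probabilities for the single image
top3 = np.argsort(probs)[::-1][:3]   # indices of the three highest-probability classes
for rank, idx in enumerate(top3, start=1):
    print(f"{rank}. {categories[idx]}: {probs[idx] * 100:.2f}%")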

Project Conclusion

This project set out to develop a robust solution for skin cancer detection based on deep learning. Using advanced CNN architectures such as EfficientNetB4 and DenseNet121, we built a classifier that identifies different skin cancers with more than 80% accuracy. This is not just academic project work: it demonstrates how deep learning in healthcare can change early detection and diagnosis and give doctors a tool that makes a difference.

Throughout the journey, we tackled real-world issues: from working with a limited and skewed data sample to keeping the model from overfitting to that data, which would make it ineffective in practice. By applying smart practices such as data augmentation, regularization (dropout), and transfer learning from pre-trained models, we aimed for strong results with minimal computational overhead.

Whether you work in biomedical research and development, healthcare-oriented start-ups, or AI technology, the approaches and tools presented here offer solid guidelines for applying AI to diagnostics. With additional refinement, this system could be extended beyond skin imaging to other varieties of medical imaging, and be of real use to the healthcare community.

Challenges and Troubleshooting

Implementing such a large project, of course, has its own challenges. Let’s break them down and look at how you can overcome them:

  • Data Imbalance: Some categories of skin cancer may not have enough data, which can lead to biased predictions and poor performance on minority classes.

    • Solution: Apply data augmentation strategies such as rotation, flipping, or zooming to enlarge the underrepresented classes for better generalization.
  • Small Dataset: Limited data may result in inadequate model performance.

    • Solution: Use transfer learning with pre-trained models such as EfficientNet or DenseNet, which need less data and adapt well to new tasks.
  • Overfitting: When the data sample is limited or imbalanced, the model easily overfits and generalizes badly.

    • Solution: Incorporate Dropout layers to control overfitting and apply early stopping when validation performance stops improving (see the sketch after this list).
  • Computational Resources: Training deep networks requires a large amount of resources.

    • Solution: Train on the free GPU offered by Google Colab or use a lighter pre-trained model.
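Early stopping is mentioned above but not wired into the training calls shown earlier. A minimal sketch of how it could be combined with the existing ModelCheckpoint callback (the patience value of 10 is an assumption, not taken from the project):

from tensorflow.keras.callbacks import EarlyStopping

# Stop training once validation accuracy has not improved for 10 epochs,
# rolling back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)

history = model.fit(
    X_train, Y_train,
    validation_data=(X_test, Y_test),
    epochs=100,
    batch_size=32,
    callbacks=[checkpoint, early_stop]  # checkpoint is the ModelCheckpoint defined earlier
)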

FAQ

Question 1: Which models were applied for skin cancer detection?
Answer: We employed three models for classification: a basic CNN and two pre-trained architectures, DenseNet121 and EfficientNetB4.

Question 2: How accurate is the model?
Answer: EfficientNetB4 gave the highest accuracy above 80% on the test set.

Question 3: What type of dataset was employed?
Answer: The dataset we used had 2,500 images, covering categories such as melanoma, benign keratosis, and basal cell carcinoma, and was expanded through augmentation to 4,500 images.

Question 4: Why use pre-trained models like EfficientNetB4 and DenseNet121 in medical image classification?
Answer: These models have been trained on large-scale datasets, so they have already learned a rich set of image patterns. This makes them far less prone to wrong predictions, especially when working with small datasets.

Question 5: How is overfitting handled in medical image classification?
Answer: To avoid overfitting we employed strategies such as Dropout layers and early stopping once the model's validation performance stops improving.
