
Creating a Classification Model using Keras ImageDataGenerator from TensorFlow

In this tutorial we'll see how we can use the Keras ImageDataGenerator class from TensorFlow to create a model for classifying images. We'll be using the ImageDataGenerator both to preprocess our images and to feed them into the model using the flow_from_dataframe function.

The data we'll be using comes from a Kaggle competition for predicting melanoma. We have a number of images of skin that are either malignant or benign, and a CSV file that contains a reference to each image along with its respective classification.

In this tutorial we'll import our libraries and CSV files, define train and validation image data generators, and build and train a model using transfer learning before scoring it.

Let's start by importing the libraries and data we'll need to build our model.

IMPORT LIBRARIES & DATA

import pandas as pd
import tensorflow as tf
tf.random.set_seed(101)
from keras_preprocessing.image import ImageDataGenerator
from efficientnet.tfkeras import EfficientNetB0
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from sklearn.metrics import roc_auc_score, accuracy_score

We've imported TensorFlow and set the seed to 101, which helps make our model training reproducible and consistent. We'll import the ImageDataGenerator from the keras_preprocessing library for image augmentation and for feeding the images to the model. For the model itself, we'll be using a Sequential model composed of an EfficientNetB0 base model with additional pooling and dense layers. Finally, we'll be scoring our model on AUC (Area Under the Curve) and accuracy, so we import both from sklearn's metrics library.

Now we'll import our train and validation CSV files. These have been pre-processed to include just the image name and whether the image is malignant (1) or benign (0).

train = pd.read_csv('data/train.csv')
val = pd.read_csv('data/validation.csv')

print(train.head())
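
The exact rows will depend on your copy of the data, but the structure the rest of the tutorial relies on is a two-column dataframe with the image filename and its label. The output sketched below is purely illustrative (the filenames are made up); it's also worth a quick look at the class balance, as melanoma datasets tend to be heavily skewed towards benign images.

# Illustrative structure only - filenames and labels below are made up:
#            image  target
# 0  ISIC_0001.jpg       0
# 1  ISIC_0002.jpg       1

# Optional: check the class balance before training
print(train['target'].value_counts())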

 

INITIALISE IMAGE DATA GENERATOR

Now we are going to initialise image data generators for both our training and validation datasets. The parameters passed to the ImageDataGenerator tell TensorFlow what processing to perform on our images. All of this happens in real time: in other words, the augmentations are applied in memory at the point the images are used to train our model, and none of them permanently alter the images we have stored.

For this project, both train and validation images will be rescaled so that each data point lies between 0 and 1, which is a standard step for image classification tasks. We'll also apply further augmentations to the train data generator only: random zoom using the zoom_range parameter, and random horizontal and vertical flips. For a full list of augmentations that can be applied using an ImageDataGenerator, have a look through the official documentation here.

train_datagen = ImageDataGenerator(
        rescale=1./255,
        zoom_range=(0.8, 1),
        horizontal_flip=True,
        vertical_flip=True
        )
val_datagen = ImageDataGenerator(
        rescale=1./255)
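
Beyond the options used above, the ImageDataGenerator accepts many other augmentation parameters. As a quick illustration (none of these are used in this project), a generator with a few extra options could look like this:

# Not used in this project - just an illustration of a few more of the
# augmentation parameters ImageDataGenerator accepts.
extra_datagen = ImageDataGenerator(
        rescale=1./255,
        rotation_range=20,            # random rotations of up to 20 degrees
        width_shift_range=0.1,        # random horizontal shifts of up to 10%
        brightness_range=(0.8, 1.2))  # random brightness adjustments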

 

FLOW FROM DATAFRAME & FLOW FROM DIRECTORY

Now that we've told TensorFlow how we want to pre-process our image data, we also have to tell it how and where to get the raw images. The Keras ImageDataGenerator provides several methods for this, including flow (where arrays of image and target data are passed) and flow_from_directory (where an image directory is passed and the images are stored in subdirectories of that directory according to their classification).
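
For reference, a minimal flow_from_directory call might look like the sketch below. It assumes a hypothetical layout where the images have been sorted into one subfolder per class, which is not how this project's data is organised:

# Hypothetical alternative - assumes images have been sorted into
# data/images_by_class/benign/ and data/images_by_class/malignant/
# (this project keeps all images in one folder and uses a dataframe instead).
alt_generator = train_datagen.flow_from_directory(
        'data/images_by_class',
        target_size=(224, 224),
        batch_size=64,
        class_mode='binary')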

In this project we are going to use the flow_from_dataframe function. This allows us to use a dataframe to tell TensorFlow which images to use and what the classification of each image is.

We're going to pass into the flow_from_dataframe function the following parameters:

  • dataframe: The source dataframe (in our case, the train and val dataframes created earlier)
  • directory: The location of our images relative to the current working directory
  • x_col: The column name in our dataframe where the image filenames are stored
  • y_col: The column name in our dataframe where the target is stored
  • target_size: What dimensions we'd like our images resized to
  • batch_size: How many images to include in each batch
  • class_mode: Our task is binary classification and so we can use the binary mode

IMG_DIM = 224
BATCH_SIZE = 64

train_generator = train_datagen.flow_from_dataframe(train,
                                                    directory='data/images',
                                                    x_col='image',
                                                    y_col='target',
                                                    target_size=(IMG_DIM, IMG_DIM),
                                                    batch_size=BATCH_SIZE,
                                                    class_mode='binary')
# shuffle=False keeps the validation predictions in the same order as the val dataframe
val_generator = val_datagen.flow_from_dataframe(val,
                                                directory='data/images',
                                                x_col='image',
                                                y_col='target',
                                                shuffle=False,
                                                target_size=(IMG_DIM, IMG_DIM),
                                                batch_size=BATCH_SIZE,
                                                class_mode='binary')
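
Before moving on to the model, we can optionally pull a single batch from the training generator to sanity-check the pipeline. Given the settings above, we'd expect batches of 64 images at 224x224 with three colour channels and pixel values rescaled into [0, 1]:

# Optional sanity check: pull one batch and confirm shapes and scaling.
images, labels = next(train_generator)
print(images.shape)                 # expected: (64, 224, 224, 3)
print(labels.shape)                 # expected: (64,)
print(images.min(), images.max())   # pixel values should lie in [0, 1]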

 

INITIALISE THE MODEL

Now that we have the image data generators set up, let's initialise our model. We'll be training our model using EfficientNetB0 with transferred ImageNet weights as a base model, followed by a global average pooling layer and two dense layers for adapting the model to the specifics of our dataset.

We remove the top layer from our base model and set it to not be trainable. This enables us to pass the image data through the pretrained model to get an output, and use that output as the input for our additional dense layers, which will be trainable.

base_model = EfficientNetB0(input_shape=(IMG_DIM, IMG_DIM, 3),
                            include_top=False,
                            weights='imagenet')

base_model.trainable = False

model = Sequential(
        [base_model,
        GlobalAveragePooling2D(),
        Dense(128, activation='relu'), 
        Dense(1, activation='sigmoid')]
        )
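
At this point we can optionally print a summary of the model to confirm its structure and check that the EfficientNetB0 weights are frozen (they appear under the non-trainable parameter count):

# Optional: inspect the architecture; the frozen EfficientNetB0 weights
# show up in the non-trainable parameter count.
model.summary()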

 

COMPILE & FIT MODEL

We'll be using Adam to optimise the weights in our neural network. Adam is a standard optimizer for many computer vision tasks, and more detail about how it works can be found in the paper here.

As our task is binary classification, we'll use binary cross-entropy as our loss function. This compares our model's probabilistic prediction to the actual value and penalises predictions that are further from the actual. In other words, it comes down hardest on predictions that are both wrong and confident.
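
To make that concrete, here's a small illustrative calculation with made-up predictions for a true label of 0 (benign): the confidently wrong prediction is penalised far more heavily than the uncertain one.

import numpy as np

# Illustrative only - made-up predictions, not values from our data.
bce = tf.keras.losses.BinaryCrossentropy()
print(bce(np.array([0.0]), np.array([0.55])).numpy())  # unsure and wrong    -> ~0.80
print(bce(np.array([0.0]), np.array([0.95])).numpy())  # confident and wrong -> ~3.00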

Finally we'll call the model.fit function, passing our train_generator which will feed our training images through our network after they've been processed in the way we declared in our train data generator above.

model.compile(optimizer = tf.keras.optimizers.Adam(0.001),
              loss='binary_crossentropy',
              metrics=['accuracy', tf.keras.metrics.AUC()])

# number of complete batches per epoch
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size

history = model.fit(train_generator,
                    steps_per_epoch=STEP_SIZE_TRAIN,
                    epochs=5)
 

 

VALIDATION & SCORING

Once our model has been fit, we can use our validation generator to feed the validation images into the model for classification and then score the predictions to see how well the model performs.

# reset the generator so predictions start from the first image and stay aligned with the val dataframe
val_generator.reset()

pred = model.predict(val_generator,
                     steps=None,
                     verbose=1)

 

score = roc_auc_score(val['target'], pred)
print('AUC Score:',score)

# threshold the predicted probabilities at 0.5 to convert them into class labels
accuracy = accuracy_score(val['target'].astype(int), (pred > 0.5).astype(int))
print('Accuracy:',accuracy)

As you can see, we have an AUC (area under the curve) score of 81% and an accuracy of 98%. As a future step, we have the option to re-train our model using different image augmentations in our data generator to see if this gives us a better score on our validation data.