CNN - Data Augmentation

Data augmentation is a strategy that enables practitioners to significantly increase the diversity of data available for training models, without actually collecting new data. Data augmentation techniques such as cropping, padding, and horizontal flipping are commonly used to train large neural networks.

What is Data Augmentation?

In deep learning, more training data generally results in a better model.

  • With less data, the model is more likely to overfit.

We can improve model accuracy with data augmentation.

In short, data augmentation is a preprocessing technique that generates additional image variations, helping both to avoid overfitting and to increase the model's accuracy.

Variations include:

  • Cropping
  • Padding
  • Horizontal and vertical flipping
  • Shearing
  • Rotation
  • Zooming
  • Horizontal and vertical shifting
    • Note: the black/empty space exposed by a shift is filled in automatically (e.g. with fill_mode='nearest'), so no black space remains
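A few of these variations can be sketched directly in NumPy. This is a minimal illustration of the idea, not the Keras implementation; the border-filling in the shift roughly mimics fill_mode='nearest'.

```python
import numpy as np

def horizontal_flip(img):
    # Reverse the column (width) axis of an H x W x C image.
    return img[:, ::-1, :]

def vertical_flip(img):
    # Reverse the row (height) axis.
    return img[::-1, :, :]

def shift(img, dy, dx):
    # Shift the image by (dy, dx) pixels, then overwrite the
    # wrapped-around border with the nearest edge pixels, similar
    # in spirit to Keras' fill_mode='nearest'.
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    if dy > 0:
        shifted[:dy] = shifted[dy]
    elif dy < 0:
        shifted[dy:] = shifted[dy - 1]
    if dx > 0:
        shifted[:, :dx] = shifted[:, dx:dx + 1]
    elif dx < 0:
        shifted[:, dx:] = shifted[:, dx - 1:dx]
    return shifted
```

Each function returns a new array of the same shape, so one input image yields several distinct training samples.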

Original image:

Generated images:

Benefits of Data Augmentation

  • Requires much less effort to create a dataset (turns a small dataset into a huge one)
  • Reduces overfitting due to the increased variety

Disadvantages of Data Augmentation

  • ImageDataGenerator-style augmentation runs on the CPU only, with no GPU support
  • Generating the images can take a long time

Some Code Examples

Data Augmentation with Keras

This is a minimal example.

Import Relevant Library

import keras
from keras.preprocessing.image import ImageDataGenerator

# Creating Image Generator
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Only rescaling for the validation/test data -- no augmentation
test_datagen = ImageDataGenerator(rescale=1. / 255)


# Configure Batch Sizes
# (train_data_dir, validation_data_dir, img_width, img_height and
# batch_size are assumed to be defined elsewhere)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
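flow_from_directory infers class labels from subdirectory names, so with class_mode='binary' the two directories above are assumed to follow a two-class layout along these lines (all folder and file names here are hypothetical):

```
train_data_dir/
    cats/
        cat001.jpg
        cat002.jpg
        ...
    dogs/
        dog001.jpg
        ...
validation_data_dir/
    cats/
        ...
    dogs/
        ...
```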

Fitting Model

# Fitting Our Generator
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
  • .fit is used when the entire training dataset fits into memory and no data augmentation is applied.
  • .fit_generator is used when the dataset is too large to fit into memory, or when data augmentation needs to be applied. (In recent versions of Keras/TensorFlow, model.fit accepts generators directly and .fit_generator is deprecated.)
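To make the memory point concrete, here is a minimal, framework-free sketch of the kind of Python generator that .fit_generator consumes: it yields one batch at a time instead of holding the whole dataset in memory. All names are hypothetical, and the image "loading" is a zero-array stand-in for real file decoding.

```python
import numpy as np

def batch_generator(image_paths, labels, batch_size):
    # Loop over the dataset forever, yielding one (images, labels)
    # batch at a time -- only a single batch is ever held in memory.
    n = len(image_paths)
    while True:
        for start in range(0, n, batch_size):
            batch_paths = image_paths[start:start + batch_size]
            batch_labels = labels[start:start + batch_size]
            # Stand-in for real image loading/decoding of each path:
            batch_images = np.stack(
                [np.zeros((32, 32, 3)) for _ in batch_paths])
            yield batch_images, np.asarray(batch_labels)
```

A real generator would read and decode each file (and optionally augment it) at this point, which is exactly where the extra CPU work of augmentation happens.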

Data Augmentation with TensorFlow

This example is more detailed.

  • tf.keras.preprocessing.image.ImageDataGenerator

Import Library

import tensorflow as tf

Datagen and Fitting Model

data_augmentation = True
history = None  # For recording the history of the training process.
if data_augmentation:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation:
    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        zca_epsilon=1e-06,  # epsilon for ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        # randomly shift images horizontally (fraction of total width)
        width_shift_range=0.1,
        # randomly shift images vertically (fraction of total height)
        height_shift_range=0.1,
        shear_range=0.,  # set range for random shear
        zoom_range=0.,  # set range for random zoom
        channel_shift_range=0.,  # set range for random channel shifts
        # set mode for filling points outside the input boundaries
        fill_mode='nearest',
        cval=0.,  # value used for fill_mode = "constant"
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False,  # randomly flip images
        # set rescaling factor (applied before any other transformation)
        rescale=None,
        # set function that will be applied on each input
        preprocessing_function=None,
        # image data format, either "channels_first" or "channels_last"
        data_format=None,
        # fraction of images reserved for validation (strictly between 0 and 1)
        validation_split=0.0)

    # Compute quantities required for feature-wise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)

    # Fit the model on the batches generated by datagen.flow().
    history = model.fit_generator(
        datagen.flow(x_train, y_train, batch_size=batch_size),
        epochs=epochs,
        validation_data=(x_test, y_test),
        workers=4)

Why Does Training Become Much Slower with Data Augmentation?

It is expected behavior for training to become slower when you use data augmentation. Augmentation flips, rotates, and otherwise transforms images to enlarge the data set, and this work is done on the CPU, which is slower than the GPU. (Running the generator in multiple worker processes, as with the workers=4 argument above, can partially hide this cost.)

  • We use augmentation not for speed but for increased accuracy.

Reference

  • TensorFlow - Data augmentation
  • tf.keras.preprocessing.image.ImageDataGenerator
  • Data augmentation with Keras
  • Kaggle - 25 Million Images! [0.99757] MNIST