Weird behaviour of my CNN validation accuracy and loss during the training phase

Issue

Here is the architecture of my network:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# input_shape is defined earlier, e.g. (height, width, channels)
cnn3 = Sequential()
cnn3.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
cnn3.add(MaxPooling2D((2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
cnn3.add(MaxPooling2D(pool_size=(2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
cnn3.add(Dropout(0.2))
cnn3.add(Flatten())
cnn3.add(Dense(128, activation='relu'))
cnn3.add(Dropout(0.4)) # 0.3
cnn3.add(Dense(4, activation='softmax'))
cnn3.compile(loss=keras.losses.categorical_crossentropy,
             optimizer=keras.optimizers.Adam(),
             metrics=['accuracy'])

When I plotted the training and validation accuracy and loss, I got the following two figures:

I cannot understand why the validation accuracy and loss do not follow the training accuracy and loss.

[Figure: loss — training vs. validation]

[Figure: accuracy — training vs. validation]
Solution

Your validation loss and accuracy are following the training loss and accuracy. The validation curves simply show more jitter because the validation set is smaller, so each per-epoch estimate is noisier. The offset between the training and validation curves may be due to some degree of overfitting.
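The jitter point can be illustrated numerically: an accuracy estimate computed on n samples has standard deviation sqrt(p(1-p)/n), so the curve from a 10x smaller validation set fluctuates roughly 3x more from epoch to epoch. A small simulation (the accuracy value and set sizes are illustrative assumptions, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.8                       # assumed "true" model accuracy
n_train, n_val = 5000, 500    # assumed set sizes: validation 10x smaller
repeats = 2000                # number of simulated per-epoch estimates

# Each accuracy estimate is the mean of n Bernoulli(p)
# correct/incorrect outcomes, simulated via a binomial draw.
train_estimates = rng.binomial(n_train, p, repeats) / n_train
val_estimates = rng.binomial(n_val, p, repeats) / n_val

# Theoretical stds: sqrt(0.8*0.2/5000) ~ 0.006 vs sqrt(0.8*0.2/500) ~ 0.018
print(f"train-side std: {train_estimates.std():.4f}")
print(f"val-side std:   {val_estimates.std():.4f}")
```

Both curves wobble around the same mean; the smaller set just wobbles more, which is exactly the visual difference between the two lines in the plots.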

Answered By – drops

Answer Checked By – Timothy Miller (AngularFixing Admin)
