Train accuracy increases, train loss is stable, validation loss increases, validation accuracy is low but increasing


My neural network training in PyTorch is behaving very strangely.

I am training on a known dataset that came pre-split into training and validation sets.
I shuffle the data during training and apply data augmentation on the fly.
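For context, "on the fly" augmentation means each sample is randomly transformed every time it is fetched, rather than once up front. A minimal plain-Python stand-in for a PyTorch map-style `Dataset` (the reversal transform is just a placeholder for a random flip/crop):

```python
import random

class AugmentedDataset:
    """Stand-in for a PyTorch map-style Dataset that augments on the fly:
    every __getitem__ call applies a fresh random transform, so each
    epoch sees a different view of the same underlying example."""

    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        x = list(self.samples[i])
        if random.random() < 0.5:   # placeholder for a random flip/crop
            x = list(reversed(x))
        return x, self.labels[i]

# Wrapping this in a DataLoader with shuffle=True would additionally
# reshuffle the sample order every epoch.
ds = AugmentedDataset([[1, 2, 3], [4, 5, 6]], [0, 1])
x, y = ds[0]
```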

I get the following results:

Train accuracy starts at 80% and increases.

Train loss decreases and then stays stable.

Validation accuracy starts at 30% but increases slowly.

Validation loss increases.


  1. How can you explain that the validation loss increases while the validation accuracy also increases?

  2. How can there be such a big accuracy gap between the training and validation sets, 90% vs. 40%?


I balanced the data set.
It is a binary classification task: there are now 1700 examples of class 1 and 1200 examples of class 2, with 2300 examples for training and 600 for validation.
I still see similar behavior.


Can it be because I froze the weights in part of the network?
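For reference, "freezing" part of a network usually means setting `requires_grad = False` on those parameters so the optimizer never updates them. A minimal sketch with hypothetical layer sizes:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# Freeze the first layer (a stand-in for a frozen backbone); its weights
# keep their initial/pretrained values and receive no gradient updates.
for p in model[0].parameters():
    p.requires_grad = False

# Only the still-trainable parameters should be handed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
```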

Can it be because of the hyperparameters, e.g. the learning rate?


I found the solution:
I was using different data augmentation for the training set and the validation set. Matching them also increased the validation accuracy!
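The kind of mismatch described above can be sketched like this (plain-Python stand-ins for the transform pipelines; the normalization constants are hypothetical). Stochastic augmentation belongs to training only, but the deterministic preprocessing must be identical for both sets, otherwise the validation inputs come from a different distribution than the model was trained on:

```python
import random

MEAN, STD = 0.5, 0.25   # hypothetical dataset statistics

def preprocess(x):
    # Deterministic preprocessing (e.g. normalize): must be IDENTICAL
    # for training and validation.
    return (x - MEAN) / STD

def train_transform(x):
    # Stochastic augmentation is fine here, for training only.
    if random.random() < 0.5:
        x = -x                      # placeholder for a random flip
    return preprocess(x)

def val_transform(x):
    # Validation: the same deterministic preprocessing, no randomness.
    return preprocess(x)
```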

Answered By – BestR

Answer Checked By – David Goodson (AngularFixing Volunteer)
