Can an increase in training loss lead to better accuracy?


I’m working on a competition on Kaggle. First, I trained a Longformer base model on the competition dataset and achieved quite a good result on the leaderboard. Due to CUDA memory and time limits, I could only train 2 epochs with a batch size of 1. The loss started at about 2.5 and gradually decreased to 0.6 by the end of training.

I then continued training for 2 more epochs from those saved weights. This time I used a slightly larger learning rate (the one from the Longformer paper) and added the validation data to the training data (meaning I no longer split the dataset 90/10). I did this to try to achieve a better result.
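The resume-from-checkpoint step described above can be sketched in PyTorch like this (a tiny stand-in model and an illustrative learning rate, not the actual competition code):

```python
import io
import torch
import torch.nn as nn

# Tiny stand-in model (hypothetical); the save/load pattern is
# identical for a real Longformer checkpoint.
model = nn.Linear(8, 2)

# End of the first training session: save the weights.
buf = io.BytesIO()
torch.save(model.state_dict(), buf)

# Second session: rebuild the model and load the saved weights.
buf.seek(0)
model2 = nn.Linear(8, 2)
model2.load_state_dict(torch.load(buf))

# Fresh optimizer for the second session; 3e-5 is the Longformer
# paper's base fine-tuning LR, used here purely as an illustration.
optimizer = torch.optim.AdamW(model2.parameters(), lr=3e-5)
```

Note that only the model weights carry over here; the optimizer state (e.g. AdamW's moment estimates) starts from scratch, which by itself can cause an initial loss bump when resuming.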

However, this time the loss started at about 0.4 and steadily increased to 1.6 by about halfway through the first epoch. I stopped because I didn’t want to waste computational resources.

Should I have waited longer? Could it eventually have led to a better test result? I think the model might have been slightly overfitting at first.


Your model was fitted to the original training data the first time you trained it. When you added the validation data to the training set the second time around, the distribution of your training data must have changed significantly. Thus, the loss increased in your second training session because your model was unfamiliar with this new distribution.

Should you have waited longer? Yes, the loss would eventually have decreased (although not necessarily to a value lower than the original training loss).

Could it have led to a better test result? Probably. It depends on whether your validation data contains patterns that are:

  1. Not present in your training data already
  2. Similar to those that your model will encounter in deployment
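The split-then-merge step that triggered the distribution change can be sketched like this (hypothetical toy tensors standing in for the competition data):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, random_split

# Hypothetical toy dataset standing in for the competition data.
full = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))

# First session: the 90/10 train/validation split.
train, val = random_split(full, [90, 10],
                          generator=torch.Generator().manual_seed(0))

# Second session: fold the validation split back into training,
# which changes the distribution the model is trained on.
merged = ConcatDataset([train, val])
```

The trade-off is exactly the one described above: the extra 10% of data may help generalization, but you lose the held-out signal that would tell you whether it did.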

Answered By – Ali Haider

Answer Checked By – Marie Seifert (AngularFixing Admin)
