My ALBERT checkpoint files do not change size during training


I am training an ALBERT model for a question-answering task. I have 200,000 question-answer pairs and I start from a saved checkpoint file of about 2 GB. I train on a GeForce RTX 2070 GPU, saving a checkpoint every 1,000 steps, but during training the checkpoint files stay at 135 MB and never grow. Is this a problem?

I don't see why a much smaller dataset of 1,500 question-answer pairs also produces a 135 MB checkpoint file. Training hasn't finished yet, but is it possible for the model to improve under these conditions?


While training your model, you can store its weights in a collection of checkpoint files that contain only the trained weights, in a binary format.

In particular, the checkpoints contain:

  • one or more data blocks holding the model's weights
  • an index file indicating which weights are stored in which block
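To make the block-plus-index layout concrete, here is a toy sketch in pure Python. It is not TensorFlow's actual checkpoint protocol; the file names, the JSON index, and the `save_toy_checkpoint` helper are all illustrative assumptions that only mimic the `data-00000-of-00001` / `.index` pair described above.

```python
import json
import struct

def save_toy_checkpoint(weights: dict, prefix: str) -> None:
    """Toy analogue of a checkpoint: one binary data block with all
    weights concatenated, plus an index file recording each tensor's
    offset and length inside that block."""
    index = {}
    offset = 0
    # Single shard, mirroring the "data-00000-of-00001" naming scheme.
    with open(prefix + ".data-00000-of-00001", "wb") as block:
        for name, values in weights.items():
            payload = struct.pack(f"{len(values)}f", *values)  # float32
            block.write(payload)
            index[name] = {"offset": offset, "length": len(payload)}
            offset += len(payload)
    # The index maps each weight name to its location in the block.
    with open(prefix + ".index", "w") as idx:
        json.dump(index, idx)
```

Restoring a single tensor then only requires seeking to its recorded offset in the data block, which is why the index file exists at all.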

So the fact that the checkpoint file size is always the same simply reflects the fact that the model is always the same: the number of parameters does not change, so the size of the saved weights does not change either. The suffix data-00000-of-00001 indicates that the weights are saved as a single shard, as happens when you train on a single machine.
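The point above can be sketched in a few lines: a weights-only checkpoint scales with the parameter count, not the dataset. The 12-million-parameter figure below is an illustrative assumption roughly in line with ALBERT-base.

```python
def checkpoint_bytes(num_parameters: int, bytes_per_weight: int = 4) -> int:
    """Approximate size of a weights-only checkpoint: each float32
    parameter contributes 4 bytes, regardless of how much data was
    used to train it."""
    return num_parameters * bytes_per_weight

# Whether we trained on 1,500 or 200,000 question-answer pairs,
# the same architecture produces the same checkpoint size.
size_small_dataset = checkpoint_bytes(12_000_000)
size_large_dataset = checkpoint_bytes(12_000_000)
assert size_small_dataset == size_large_dataset  # 48,000,000 bytes either way
```

(Real checkpoints also carry optimizer state and metadata, so actual sizes differ, but the dataset size still plays no role.)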

The size of the dataset, in my opinion, has nothing to do with it.

Answered By – Elidor00

