Need help understanding the MLPClassifier

Issue

I’ve been working with the MLPClassifier for a while, and I think I had a wrong interpretation of what it is doing the whole time. I believe I understand it correctly now, but I am not sure. So I will summarize my understanding, and it would be great if you could add your thoughts on the right interpretation.

So with the MLPClassifier we are building a neural network based on a training dataset. Setting early_stopping = True makes it possible to use a validation dataset within the training process in order to check whether the network works on a new set as well. If early_stopping = False, no validation is done within the process. After training has finished, we can use the fitted model to predict on a third dataset if we wish.
What I was thinking before is that during the whole training process a validation dataset is set aside anyway, with validation after every epoch.

I’m not sure if my question is understandable, but it would be great if you could help me clear up my thoughts.

Solution

The sklearn.neural_network.MLPClassifier uses adam, a variant of Stochastic Gradient Descent (SGD), by default. Your question could be framed more generally as how SGD is used to optimize the parameter values in a supervised learning context. There is nothing specific to Multi-layer Perceptrons (MLP) here.
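As a minimal sketch (using a synthetic dataset from make_classification purely for illustration), you can confirm the default solver on a fitted model:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic binary classification data, just for demonstration
X, y = make_classification(n_samples=200, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.solver)  # "adam" is the default
```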

So with the MLPClassifier we are building a neural network based on a training dataset. Setting early_stopping = True makes it possible to use a validation dataset within the training process

Correct, although it should be noted that this validation set is carved out of the original training set; the size of the split is controlled by the validation_fraction parameter (default 0.1, i.e. 10% of the training data).
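A short sketch of this split, again on synthetic data: with early_stopping=True and validation_fraction=0.2, the model trains on 80% of the data you pass to fit and scores itself on the held-out 20%:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# validation_fraction is taken from the training data itself
clf = MLPClassifier(early_stopping=True, validation_fraction=0.2,
                    max_iter=500, random_state=0)
clf.fit(X, y)  # trains on 80% of X, validates on the remaining 20%

# Best accuracy observed on the internal validation split
print(clf.best_validation_score_)
```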

in order to check whether the network works on a new set as well.

Not quite. The point of early stopping is to track the validation score during training and stop training as soon as the validation score stops improving significantly. "Significantly" is controlled by the tol and n_iter_no_change parameters: training stops when the validation score fails to improve by at least tol for n_iter_no_change consecutive epochs.
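This stopping criterion can be made visible via the validation_scores_ and n_iter_ attributes (which are populated when early_stopping=True). A sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Stop once the validation score fails to improve by tol
# for n_iter_no_change consecutive epochs
clf = MLPClassifier(early_stopping=True, n_iter_no_change=5, tol=1e-4,
                    max_iter=1000, random_state=0)
clf.fit(X, y)

print(clf.n_iter_)             # epochs actually run (usually << max_iter)
print(clf.validation_scores_)  # one validation score per epoch
```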

If early_stopping = False, no validation is done within the process. After training has finished, we can use the fitted model to predict on a third dataset if we wish.

Correct.
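The typical workflow looks like this sketch (train_test_split is used here to simulate the separate "third" dataset):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No internal validation split at all
clf = MLPClassifier(early_stopping=False, max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Evaluate on data the model never saw during training
print(clf.score(X_test, y_test))
```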

What I was thinking before is that during the whole training process a validation dataset is set aside anyway, with validation after every epoch.

As you probably know by now, this is not so. The division of the learning process into epochs is somewhat arbitrary and has nothing to do with validation: unless early_stopping=True, no data is held out at all, and only the training loss is tracked from epoch to epoch.
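You can see this directly in the fitted model's attributes: with early_stopping=False, only loss_curve_ (one training-loss value per epoch) is recorded, and no validation scores exist. A sketch:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, random_state=0)

clf = MLPClassifier(early_stopping=False, max_iter=500, random_state=0)
clf.fit(X, y)

# One training-loss entry per epoch; nothing was validated
print(len(clf.loss_curve_), clf.n_iter_)
```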

Answered By – Arne

Answer Checked By – Senaida (AngularFixing Volunteer)
