When is Dlib's svm_c_linear_trainer better than svm_c_linear_dcd_trainer?


I have a machine learning problem with high-dimensional labelled inputs and a relatively small sample size. Dlib's very helpful visual guide pointed me to svm_c_linear_trainer. But from the documentation I understand that the similar svm_c_linear_dcd_trainer offers a 'warm start' option, which sounds preferable to a cold start, e.g. inside a cross-validation loop. However, the guide selects svm_c_linear_dcd_trainer for a different type of problem, the main difference being that it is applied to unlabelled data.

Would there be a problem in using svm_c_linear_dcd_trainer on labelled data, or is there another good reason why svm_c_linear_trainer is better?


This article about dual coordinate descent for linear SVMs, as well as the documentation on Dlib's website, shows that svm_c_linear_dcd_trainer can be used as a drop-in replacement for the standard linear SVM trainer.

It is supposed to have superior performance (quote from the mentioned article):

Experiments show that our method is faster than state of the art implementations.

… because the algorithm reportedly makes better use of the optimization problem's structure while giving the same results the standard SVM solvers would give.

Pages 6 and 7 of the article, where the method is compared to other algorithms on several different datasets, may be especially interesting to you.

Answered By – nada

