Closed quadrupole closed 4 years ago
GPyTorch and scikit-learn use different approximations for classification. Scikit-learn (according to its documentation) uses the Laplace approximation, whereas GPyTorch uses variational inference. Therefore, the results are going to be different.
That being said - the overfitting you are seeing in this example comes from the model not being fully optimized.
1) Set `learn_inducing_locations=True`. This lets the model choose the locations of the inducing points, which gives it more freedom to find a better fit, and optimization will be faster.
2) Use `VariationalStrategy` rather than `UnwhitenedVariationalStrategy`. This will also accelerate optimization.
3) If you don't want to make the other changes, just increase the number of training iterations.
Hope this helps!
Hello,
I am new to GPyTorch. In particular I am trying to perform classification, and to this end I am comparing it against the scikit-learn implementation of GPs. I have attached some code that illustrates the differences between GPyTorch and scikit-learn on a synthetic 2D feature space, where I find that the GPyTorch model is overfitting the data. Hopefully someone can suggest what can be done to bring the GPyTorch model closer to the scikit-learn results.
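For reference, the scikit-learn side of such a comparison is quite short. This is a hedged sketch on stand-in data (the original synthetic dataset is not reproduced here), using `GaussianProcessClassifier`, which fits via the Laplace approximation and tunes kernel hyperparameters by maximizing the approximate marginal likelihood:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Illustrative 2D data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])

# Laplace-approximation GP classifier with an RBF kernel
clf = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, y)
acc = clf.score(X, y)
```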