cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch
MIT License

[Docs] Errata in "Exact GP Regression with Multiple GPUs and Kernel Partitioning" example? #1114

Closed - takafusui closed this issue 4 years ago

takafusui commented 4 years ago

In the documentation for Exact GP Regression with Multiple GPUs and Kernel Partitioning, you wrote:

for i in range(n_training_iter):
    options = {'closure': closure, 'current_loss': loss, 'max_ls': 10}
    loss, _, _, _, _, _, _, fail = optimizer.step(options)

    print('Iter %d/%d - Loss: %.3f   lengthscale: %.3f   noise: %.3f' % (
        i + 1, n_training_iter, loss.item(),
        model.covar_module.module.base_kernel.lengthscale.item(),
        model.likelihood.noise.item()
    ))

    if fail:
        print('Convergence reached!')
        break

However, according to the source code of PyTorch-LBFGS (line 486), the fail flag is documented as:

fail (bool): failure flag
    True: line search reached maximum number of iterations, failed
    False: line search succeeded

You might be treating the fail flag in a different way, or maybe I am wrong? Thank you.

The example is found in: https://gpytorch.readthedocs.io/en/latest/examples/02_Scalable_Exact_GPs/Simple_MultiGPU_GP_Regression.html

jacobrgardner commented 4 years ago

I think this is a reasonable use of the fail flag. If the line search reaches the maximum number of iterations, it means that we were unable to find a point along the search direction that improved the function value.

For a deterministic objective, this generally implies that we should stop optimizing with the settings L-BFGS was run with, because running another step would just hit the same line search failure.
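
If the printed message is the only concern, here is a minimal sketch of the same loop with the break message reworded to name the line search failure rather than claiming convergence. This is only an illustration, assuming closure, loss, optimizer (a PyTorch-LBFGS FullBatchLBFGS instance), n_training_iter, and model are already set up as in the tutorial:

for i in range(n_training_iter):
    options = {'closure': closure, 'current_loss': loss, 'max_ls': 10}
    # FullBatchLBFGS.step returns several diagnostics; the last return value
    # is the line search failure flag discussed above.
    loss, _, _, _, _, _, _, fail = optimizer.step(options)

    print('Iter %d/%d - Loss: %.3f' % (i + 1, n_training_iter, loss.item()))

    if fail:
        # The line search could not improve the loss within max_ls evaluations.
        # With a deterministic objective, another step would fail the same way,
        # so we treat this as a stopping criterion.
        print('Line search failed to improve the loss; stopping early.')
        break

The control flow is unchanged from the tutorial; only the message differs, so the fail flag is still used as the stopping criterion described above.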