cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch
MIT License

[Docs] Possible error in gpytorch.likelihoods.gaussian_likelihood comments #742

Open mc-robinson opened 5 years ago

mc-robinson commented 5 years ago

For the `FixedNoiseGaussianLikelihood` class in `gpytorch.likelihoods.gaussian_likelihood`, the following comments appear:

Example:
        >>> train_x = torch.randn(55, 2)
        >>> noises = torch.ones(55) * 0.01
        >>> likelihood = FixedNoiseGaussianLikelihood(noise=noises, learn_additional_noise=True)
        >>> pred_y = likelihood(gp_model(train_x))
        >>>
        >>> test_x = torch.randn(21, 2)
        >>> test_noises = torch.ones(21) * 0.02
        >>> pred_y = likelihood(gp_model(test_x), noise=test_noises)

However, I do not believe the last line works in practice (at least after calling model.eval() and likelihood.eval()). This is because the forward function does not accept a noise parameter.

In practice, I found the only way to get it working is to set the property directly:

likelihood.noise = test_noises
pred_y = likelihood(model(X_test))

I am pretty sure this is right, but I just started using the library so I am not confident enough to submit a PR -- and thought I would confirm it is indeed an issue first.

Thanks again for the library. Also, on a slightly related note, if anyone has a suggestion for a relatively easy-to-implement heteroskedastic GP based on GPyTorch, it would be much appreciated. I would be happy to contribute an example/documentation if I could get it working!

jacobrgardner commented 5 years ago

Sorry for the delay responding! The docs are correct. If I do:

observed_pred = likelihood(model(test_x), noise=0.21 * torch.ones(51))

In the simple GP example notebook where I've replaced the GaussianLikelihood with

likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(noise=0.04 * torch.ones_like(train_y))

Then observed_pred has an added 0.21 on the diagonal:

observed_pred.lazy_covariance_matrix._diag_tensor.diag()
tensor([0.2100, 0.2100, 0.2100, ..., 0.2100, 0.2100])

The code path for this is that **kwargs gets passed along to the gpytorch.likelihoods.noise_models.FixedGaussianNoise forward call, which consumes a noise kwarg.
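The dispatch described above can be illustrated with plain-Python stand-ins (hypothetical classes for illustration only, not the actual GPyTorch implementation): the likelihood's `__call__` forwards `**kwargs` to the noise model's `forward`, which consumes a `noise` kwarg that overrides the stored noise.

```python
# Hypothetical stand-ins sketching how a `noise=` kwarg passed to the
# likelihood reaches the noise model; NOT the real GPyTorch code.

class FixedGaussianNoise:
    """Stand-in for gpytorch.likelihoods.noise_models.FixedGaussianNoise."""
    def __init__(self, noise):
        self.noise = noise

    def forward(self, *params, noise=None):
        # An explicitly passed `noise` kwarg overrides the stored noise.
        return noise if noise is not None else self.noise


class FixedNoiseGaussianLikelihood:
    """Stand-in likelihood: __call__ forwards **kwargs to the noise model."""
    def __init__(self, noise):
        self.noise_covar = FixedGaussianNoise(noise)

    def __call__(self, function_dist, **kwargs):
        # kwargs (e.g. noise=...) are passed through to the noise model.
        variance = self.noise_covar.forward(function_dist, **kwargs)
        return {"mean": function_dist, "variance": variance}


likelihood = FixedNoiseGaussianLikelihood(noise=0.04)
pred = likelihood("f(test_x)", noise=0.21)
print(pred["variance"])  # the explicitly passed noise wins: 0.21
```

With no `noise=` kwarg, the stored training noise (0.04 here) is used instead, which mirrors the diagonal-override behavior shown above.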

In terms of heteroskedastic GPs in GPyTorch, @Balandat I believe has some examples of this. There might even be some example notebooks of it over at the botorch repo?

Balandat commented 5 years ago

We have a GP model in botorch that uses another GP for modeling the noise (right now the posterior mean estimate of that GP is used as the out-of-sample noise prediction): https://github.com/pytorch/botorch/blob/master/botorch/models/gp_regression.py#L217

I don't think we currently have an example notebook for this though.

mc-robinson commented 5 years ago

Hi, thank you for your response and the pointer to the botorch code (will definitely take a look).

Sorry for being a bit cavalier in describing the issue and not providing a code example to reproduce the error. However, I am still getting the error:

TypeError: forward() got an unexpected keyword argument 'noise'

When I use the following code:

likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(noise=0.04 * torch.ones(50),
                                                               learn_additional_noise=True)
observed_pred = likelihood(model(X_test), noise=0.21 * torch.ones(50))

(I can provide the whole long code to reproduce the issue if necessary)

However, if I simply remove the learn_additional_noise=True argument from the likelihood, the code works. So I think that must be the offending parameter. From a brief look at the code I can't see why, but hopefully that helps narrow it down.
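The failure mode reported above can be reproduced with a plain-Python stand-in: if the `noise` kwarg ends up being forwarded to a `forward()` that accepts neither a `noise` parameter nor `**kwargs` (which may be what happens along the `learn_additional_noise=True` code path), Python raises exactly this `TypeError`. Hypothetical classes, for illustration only:

```python
# Hypothetical stand-ins showing why forwarding a `noise=` kwarg to a
# forward() that cannot accept it raises the TypeError reported above.

class NoiseModelWithoutKwarg:
    """A noise module whose forward() takes no `noise` kwarg and no **kwargs."""
    def forward(self, *params):
        return 0.04


class Likelihood:
    """Stand-in likelihood that blindly forwards **kwargs downstream."""
    def __init__(self, noise_covar):
        self.noise_covar = noise_covar

    def __call__(self, function_dist, **kwargs):
        # Fails with TypeError if forward() cannot consume the kwargs.
        return self.noise_covar.forward(function_dist, **kwargs)


likelihood = Likelihood(NoiseModelWithoutKwarg())
try:
    likelihood("f(test_x)", noise=0.21)
except TypeError as e:
    print(e)  # forward() got an unexpected keyword argument 'noise'
```

If the extra learned-noise module sits on this code path, that would explain why the call works without learn_additional_noise=True but fails with it.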