While approximate GPs should work with the standard model/posterior API, we haven't actually used/tested them extensively. I'm currently traveling, will take a look at the specific issue here once I get back.
OK, so upon further digging, it seems that this particular failure you're running into has to do with caching Cholesky factors inside gpytorch. Specifically, if I manually compute the Cholesky factor for `induc_induc_covar`, I get a factor of the right size, while in the current code it seems to pick up a cached result. Maybe @gpleiss or @jacobrgardner have immediate thoughts (I'm not very familiar with the variational strategy implementation).
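For reference, a minimal sketch of the manual computation described above (the stand-in covariance and sizes are illustrative; `psd_safe_cholesky` lives in `gpytorch.utils.cholesky`):

```python
# Hedged sketch: manually factoring an inducing-point covariance,
# bypassing the strategy's cache. The toy PSD matrix stands in for the
# induc_induc_covar LazyTensor from the failing call.
import torch
from gpytorch.lazy import NonLazyTensor
from gpytorch.utils.cholesky import psd_safe_cholesky

A = torch.randn(5, 5)
induc_induc_covar = NonLazyTensor(A @ A.t() + 1e-3 * torch.eye(5))

L = psd_safe_cholesky(induc_induc_covar.evaluate())  # densify, then factor
print(L.shape)  # torch.Size([5, 5]) -- the expected size
```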
This may be related to https://github.com/cornellius-gp/gpytorch/pull/10
Let me take a look and see if this is on our end -- in general we believe the variational code is as stable as the exact code at this point, but there may be some issue related to multiple batches or some other particularly complicated use case.
Yeah, I guess something funky must be going on with the caching here. If you look at the screenshot, `self._cholesky_factor(induc_induc_covar)` should return `psd_safe_cholesky(induc_induc_covar.evaluate())` if no cached value is used (unless I'm missing something).
Yeah, it is cached: https://github.com/cornellius-gp/gpytorch/blob/02fc8dd366760ec92ed74f889626e55f21a395b3/gpytorch/variational/variational_strategy.py#L70-L73

But the thing is, we would not expect the size of `induc_induc_covar` to ever change, and the cache is deleted every time the strategy is called: https://github.com/cornellius-gp/gpytorch/blob/02fc8dd366760ec92ed74f889626e55f21a395b3/gpytorch/variational/variational_strategy.py#L158
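For context, here is a paraphrase (an assumption, not the verbatim source) of what the linked `_cholesky_factor` helper does with gpytorch's `@cached` memoization decorator. The key point is that the cache is keyed by name only, so on a cache hit the argument is ignored:

```python
# Paraphrase of the linked helper (illustrative, not verbatim source code).
# @cached stores the result on the object under the name "cholesky_factor";
# a second call returns the stored factor regardless of the new argument.
from gpytorch.utils.cholesky import psd_safe_cholesky
from gpytorch.utils.memoize import cached


class StrategySketch:
    @cached(name="cholesky_factor")
    def _cholesky_factor(self, induc_induc_covar):
        return psd_safe_cholesky(induc_induc_covar.evaluate())
```

So if the cache somehow survives a call with a differently-sized covariance, a stale factor of the wrong size comes back, which would match the failure above.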
Thanks for the help everyone. If manually computing the Cholesky factor gives the correct size, is there a straightforward way to do this with the GPyTorch or BoTorch API?
Just to reiterate, my goal is to map multidimensional heteroskedastic inputs to multidimensional homoskedastic outputs. Is there an easier way to do this? It seems variational methods are the go-to for this.
This is also needed to support the `BernoulliLikelihood`, which only works with `ApproximateGP`. I wrote a `BayesianOptimizer` supporting `ApproximateGP` here as a demo: https://github.com/thomasahle/noisy-bayesian-optimization. It only supports lower confidence bounds so far.
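For concreteness, a minimal sketch of that classification setup (toy data and names, patterned on the gpytorch classification tutorial): an `ApproximateGP` trained against `BernoulliLikelihood` with the variational ELBO:

```python
# Hedged sketch: an ApproximateGP classifier with BernoulliLikelihood.
# Data, sizes, and hyperparameters are illustrative toys.
import torch
from gpytorch.distributions import MultivariateNormal
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.likelihoods import BernoulliLikelihood
from gpytorch.means import ConstantMean
from gpytorch.mlls import VariationalELBO
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution, UnwhitenedVariationalStrategy


class GPClassifier(ApproximateGP):
    def __init__(self, train_x):
        variational_distribution = CholeskyVariationalDistribution(train_x.size(0))
        variational_strategy = UnwhitenedVariationalStrategy(
            self, train_x, variational_distribution, learn_inducing_locations=False
        )
        super().__init__(variational_strategy)
        self.mean_module = ConstantMean()
        self.covar_module = ScaleKernel(RBFKernel())

    def forward(self, x):
        return MultivariateNormal(self.mean_module(x), self.covar_module(x))


train_x = torch.linspace(0, 1, 10).unsqueeze(-1)
train_y = (train_x.squeeze(-1) > 0.5).float()  # binary labels in {0, 1}
model, likelihood = GPClassifier(train_x), BernoulliLikelihood()
mll = VariationalELBO(likelihood, model, num_data=train_y.numel())
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
model.train()
likelihood.train()
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```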
@thomasahle Here is a simple demo for using the BoTorch acquisition function & optimization machinery with an Approximate GP with Bernoulli Likelihood (model taken from the gpytorch tutorial): botorch_bernoulli_approx_gp.ipynb.txt
For the basic use case, this is as simple as adding `GPyTorchModel` as another superclass to the GP model and defining a `_num_outputs` attribute (used internally by BoTorch); see the sketch below. The `BatchedMultiOutputGPyTorchModel` does some trickery that likely doesn't play well with variational inference; we'll have to take a closer look at that.
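For reference, a minimal sketch of that basic use case (class name, kernel, and sizes are illustrative; the model body follows the gpytorch approximate-GP tutorial): add `GPyTorchModel` as a superclass and set `_num_outputs`:

```python
# Hedged sketch: mixing botorch's GPyTorchModel into an ApproximateGP so
# that model.posterior(X) works for a single-output model.
import torch
from botorch.models.gpytorch import GPyTorchModel
from gpytorch.distributions import MultivariateNormal
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.means import ConstantMean
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution, VariationalStrategy


class ApproximateGPyTorchModel(ApproximateGP, GPyTorchModel):
    _num_outputs = 1  # tells botorch how many outputs the model has

    def __init__(self, inducing_points):
        variational_distribution = CholeskyVariationalDistribution(inducing_points.size(-2))
        variational_strategy = VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = ConstantMean()
        self.covar_module = ScaleKernel(RBFKernel())

    def forward(self, x):
        return MultivariateNormal(self.mean_module(x), self.covar_module(x))


model = ApproximateGPyTorchModel(inducing_points=torch.rand(20, 2))
posterior = model.posterior(torch.rand(5, 2))  # standard botorch posterior API
```

With that in place, `model.posterior(X)` and the standard single-output acquisition machinery should work; the batched multi-output path is the part that needs the closer look mentioned above.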
@cisprague as Jake said, #1047 will probably fix at least part of this issue.
I'm going to close this for now, as this should work fine when not using the `BatchedMultiOutputGPyTorchModel`. Feel free to re-open if needed.
🐛 Bug: `gpytorch.models.ApproximateGP` compatibility

To reproduce
Code snippet to reproduce
Stack trace/error message
Expected Behavior
BoTorch should be compatible with any GPyTorch model via inheriting from `botorch.models.gpytorch.GPyTorchModel`, but this does not seem to work in the case of a `gpytorch.models.ApproximateGP` with a variational strategy. It should work like other models do. I am trying to model a mapping from `n x d` heteroskedastic inputs to `n x m` homoskedastic outputs.

System information
Additional context
Using a mobile robot with growing position uncertainty to build an environmental map with sensors.