cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch

Bug: Exploit Structure in get_fantasy_strategy #2494

Closed: naefjo closed this 1 week ago

naefjo commented 3 months ago

Hello :)

This PR is related to #2468 and cornellius-gp/linear_operator#93.

The DefaultPredictionStrategy's get_fantasy_model updates the Gram matrix with the new datapoints and seeds the new lik_train_train_covar's root_decomposition and root_inv_decomposition caches by passing them to the constructor. However, because to_dense is called in lines 214-215, the caches set in __init__ on lines 69 and 72 receive root and inv_root of type torch.Tensor. RootLinearOperator.__init__ then assigns a DenseLinearOperator to self.root, since to_linear_operator defaults to DenseLinearOperator when given a torch.Tensor.
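For illustration, here is a minimal sketch of the type erasure (assuming the linear_operator package; the tensor L below is just a stand-in for the cached root):

```python
import torch
from linear_operator import to_linear_operator
from linear_operator.operators import RootLinearOperator, TriangularLinearOperator

# Stand-in for the cached root: a lower-triangular Cholesky factor.
A = torch.randn(3, 3)
L = torch.linalg.cholesky(A @ A.T + 3 * torch.eye(3))

# A plain torch.Tensor gets wrapped as a DenseLinearOperator, erasing the
# triangular structure; this is exactly what happens after the to_dense calls.
print(type(to_linear_operator(L)).__name__)        # DenseLinearOperator
print(type(RootLinearOperator(L).root).__name__)   # DenseLinearOperator

# Wrapping the factor explicitly preserves the structure that downstream
# triangular solves can exploit.
print(type(TriangularLinearOperator(L)).__name__)  # TriangularLinearOperator
```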

As a result, the object E in LinearOperator.cat_rows will be of type DenseLinearOperator, which fails the check for triangular matrices here. This once again forces the inverse to be computed via stable_pinverse (a QR decomposition) instead of exploiting a fast triangular solve.
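To make the consequence concrete, here is an illustrative sketch (not the actual cat_rows source) of the dispatch at issue; solve_against_root is a hypothetical helper, and torch.linalg.pinv stands in for stable_pinverse:

```python
import torch
from linear_operator.operators import LinearOperator, TriangularLinearOperator

def solve_against_root(E: LinearOperator, rhs: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper mirroring the structure-dependent dispatch.
    if isinstance(E, TriangularLinearOperator):
        # Fast path the fix restores: an O(n^2) triangular solve
        # (assuming a lower-triangular root, the operator's default).
        return torch.linalg.solve_triangular(E.to_dense(), rhs, upper=False)
    # Slow path hit when the root was silently densified: a QR-based
    # pseudoinverse (stable_pinverse in the real code; pinv stands in here).
    return torch.linalg.pinv(E.to_dense()) @ rhs
```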

fteufel commented 3 months ago

Thanks for finding this! I'd been confused for a few weeks about why get_fantasy_model wasn't speeding things up compared to just recomputing the caches, but couldn't figure out the cause.

Can confirm this makes things faster.
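For anyone who wants to check the speedup themselves, a rough timing sketch (the model, data sizes, and kernel choice here are made up; absolute numbers will vary with problem size and hardware):

```python
import time
import torch
import gpytorch

class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

train_x, train_y = torch.randn(2000, 3), torch.randn(2000)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPModel(train_x, train_y, likelihood).eval()

# Prime the prediction caches: get_fantasy_model requires that the model
# has already made a prediction.
with torch.no_grad():
    model(torch.randn(10, 3))

new_x, new_y = torch.randn(5, 3), torch.randn(5)
t0 = time.perf_counter()
with torch.no_grad():
    fantasy_model = model.get_fantasy_model(new_x, new_y)
print(f"get_fantasy_model took {time.perf_counter() - t0:.4f}s")
```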