Closed naefjo closed 1 week ago
Thanks for finding this! I've been confused myself for a few weeks about why `get_fantasy_model` wasn't speeding things up compared to just recomputing the caches, but couldn't figure it out. Can confirm this makes things faster.
Hello :)
This PR is related to #2468 and cornellius-gp/linear_operator#93.
The `DefaultPredictionStrategy`'s `get_fantasy_model` updates the gram matrix with new datapoints and updates the `lik_train_train_covar`'s `root_decomposition` and `root_inv_decomposition` caches by passing them to the constructor. However, because `to_dense` is called in lines 214-215, the caches in the `__init__` on lines 69 and 72 respectively are constructed with `root` and `inv_root` of type `torch.Tensor`, so `RootLinearOperator.__init__` assigns a `DenseLinearOperator` to `self.root`, since `to_linear_operator` defaults to `DenseLinearOperator` if provided with a `torch.Tensor`.
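A minimal self-contained sketch (dummy classes, not `linear_operator`'s actual code) of why densifying the roots loses the structure: a `to_linear_operator`-style helper passes existing operators through but wraps anything else densely, so calling `to_dense()` first erases the triangular type.

```python
# Hypothetical stand-ins mimicking the dispatch described above; the real
# classes live in linear_operator.operators.

class LinearOperator:
    pass

class DenseLinearOperator(LinearOperator):
    def __init__(self, tensor):
        self.tensor = tensor

class TriangularLinearOperator(LinearOperator):
    def __init__(self, tensor):
        self.tensor = tensor

    def to_dense(self):
        # Densifying returns the raw tensor: the triangular type is discarded.
        return self.tensor

def to_linear_operator(obj):
    # Existing LinearOperators pass through; anything else is wrapped densely,
    # mirroring the default described above.
    if isinstance(obj, LinearOperator):
        return obj
    return DenseLinearOperator(obj)

root = TriangularLinearOperator([[1.0, 0.0], [2.0, 3.0]])

kept = to_linear_operator(root)             # structure preserved
lost = to_linear_operator(root.to_dense())  # structure lost, as with to_dense above

print(type(kept).__name__)  # TriangularLinearOperator
print(type(lost).__name__)  # DenseLinearOperator
```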
As a result, in `LinearOperator.cat_rows` the object `E` will be of type `DenseLinearOperator`, which in turn fails the check for triangular matrices here. This once again leads to a `stable_pinverse` with a QR decomposition instead of exploiting a fast triangular solve to compute the inverse.
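A toy NumPy illustration (not `linear_operator` code) of the two paths: when the triangular structure is known, each column of the inverse comes from a cheap forward substitution, whereas the dense fallback goes through a generic QR-based pseudo-inverse. Both agree numerically; only the cost differs.

```python
import numpy as np

def tri_inverse(L):
    """Invert a lower-triangular matrix column by column via forward substitution."""
    n = L.shape[0]
    inv = np.zeros_like(L)
    for j in range(n):
        # Solve L @ x = e_j, exploiting that L[i, k] == 0 for k > i.
        x = np.zeros(n)
        for i in range(n):
            rhs = (1.0 if i == j else 0.0) - L[i, :i] @ x[:i]
            x[i] = rhs / L[i, i]
        inv[:, j] = x
    return inv

rng = np.random.default_rng(0)
n = 5
# Well-conditioned lower-triangular test matrix.
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)

inv_fast = tri_inverse(L)        # exploits triangular structure, O(n^2) per column
inv_generic = np.linalg.pinv(L)  # generic path, as taken after the dense fallback

print(np.allclose(inv_fast, inv_generic))  # True
```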