Hi all,
Lately I've been working to extend the GPyTorch RBFKernelGrad kernel to work with KeOps. The kernel itself isn't horribly complicated: it computes the plain RBF (there's already a documented KeOps implementation, which is great) as well as two first-order gradient blocks and a Hessian block. While the individual operations within these segments seem relatively straightforward for what LazyTensors can do, I'm running into some confusion about what exactly the limits are on the operations I can perform with LazyTensors.
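For concreteness, here's a dense NumPy sketch of the blocks I mean, in 1-D with a hypothetical `lengthscale` parameter. This is just the math the KeOps version would need to reproduce, not GPyTorch's actual implementation (which, as I understand it, also interleaves rows per data point rather than grouping blocks like this):

```python
# Dense reference for the four blocks of an RBF-with-gradients kernel
# in 1-D. A sketch of the target math only, not gpytorch's code.
import numpy as np

def rbf_grad_blocks(x, y, lengthscale=1.0):
    """Return (K, dK_dy, dK_dx, d2K) for 1-D inputs x (n,) and y (m,)."""
    ell2 = lengthscale ** 2
    diff = x[:, None] - y[None, :]                   # (n, m) pairwise x_i - y_j
    K = np.exp(-diff ** 2 / (2 * ell2))              # plain RBF block
    dK_dy = (diff / ell2) * K                        # first-order block, d/dy
    dK_dx = (-diff / ell2) * K                       # first-order block, d/dx
    d2K = (1.0 / ell2 - diff ** 2 / ell2 ** 2) * K   # mixed second-order block
    return K, dK_dy, dK_dx, d2K

def full_covariance(x, y, lengthscale=1.0):
    """Assemble the 2n x 2m covariance in a grouped block layout."""
    K, dK_dy, dK_dx, d2K = rbf_grad_blocks(x, y, lengthscale)
    return np.block([[K, dK_dy], [dK_dx, d2K]])
```

With `x == y` the assembled matrix comes out symmetric positive semi-definite, which is a handy sanity check for any KeOps port.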
Primarily: how flexible are these tensors? That is, can I perform additional tensor math operations on them, or is it better to initialize several modified LazyTensors at the beginning and go from there?
Additionally, I read in the documentation that I can concatenate these, which will be necessary to compute my final covariance matrix. Is this the preferred method, or is there a better way to approach constructing a matrix like this?
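One alternative I've been considering, in case concatenating the lazy objects turns out to be awkward: since most of what the surrounding machinery needs from the covariance is matrix-vector products, the block matrix could act on a vector by splitting it and combining per-block matvecs. A dense NumPy sketch (the `blocks` callables here are stand-ins for whatever KeOps reductions each block ends up being, and the equal block sizes are my assumption):

```python
# Sketch: matvec against a 2x2 block covariance without ever
# materializing or concatenating the blocks themselves.
import numpy as np

def block_matvec(blocks, v):
    """blocks: 2x2 nested list of callables, each mapping a vector to K_ij @ v.

    Assumes (hypothetically) that all four blocks share the same size,
    so v splits evenly in half.
    """
    n = v.size // 2
    v0, v1 = v[:n], v[n:]
    top = blocks[0][0](v0) + blocks[0][1](v1)   # row block 0: K00 v0 + K01 v1
    bot = blocks[1][0](v0) + blocks[1][1](v1)   # row block 1: K10 v0 + K11 v1
    return np.concatenate([top, bot])
```

The result matches multiplying by the assembled dense matrix, so each block could stay a separate lazy reduction.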
Thanks in advance! I'm admittedly a bit of a noob with tensor operations as well as this library, so sorry for any questions that may seem glaringly obvious.
(gpytorch source code on kernel for reference: https://docs.gpytorch.ai/en/latest/_modules/gpytorch/kernels/rbf_kernel_grad.html)