getkeops / keops

KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows
https://www.kernel-operations.io
MIT License

implementing keops for rbf kernel with gradient information #260

Closed JGittles closed 2 years ago

JGittles commented 2 years ago

Hi all,

Lately I've been working to extend the GPyTorch RBFKernelGrad kernel to work with KeOps. The kernel itself isn't horribly complicated: it computes the plain RBF kernel (for which a KeOps implementation is already documented, which is great), plus two first-order gradient blocks and a Hessian block. While the individual operations within these segments seem relatively straightforward for what LazyTensors can do, I'm running into some confusion about what exactly the limits are on the operations I can perform with LazyTensors.

Primarily: how flexible are these tensors? That is, can I keep applying additional tensor math operations to them, or is it better to initialize several modified LazyTensors at the beginning and go from there?
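For context, the expressions involved are just ordinary broadcasted arithmetic chained before a reduction. Below is a minimal dense-PyTorch sketch of the RBF kernel written in the same style as the KeOps tutorials; with KeOps one would wrap the `x[:, None, :]` / `y[None, :, :]` views in LazyTensors first, and the same chain of `-`, `**`, `.sum()`, `.exp()` would then be evaluated lazily. (This is a plain-PyTorch reference, not KeOps code.)

```python
import torch

def rbf(x, y, lengthscale=1.0):
    # x: (N, D), y: (M, D)
    # Broadcasted pairwise squared distances, shape (N, M).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    # k(x, y) = exp(-||x - y||^2 / (2 * l^2))
    return (-d2 / (2 * lengthscale ** 2)).exp()

x = torch.randn(5, 3)
K = rbf(x, x)  # (5, 5), symmetric, ones on the diagonal
```

The point of the sketch: each step is an elementwise op or a reduction over the feature dimension, which is exactly the kind of chained expression the symbolic-tensor style supports.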

Additionally, I read in the documentation that I can concatenate these, which will be necessary to assemble my final covariance matrix. Is this the preferred method, or is there a better way to construct a matrix like this?

Thanks in advance! I'm admittedly a bit of a noob with tensor operations and with this library, so apologies for any questions that may seem glaringly obvious.

(gpytorch source code on kernel for reference: https://docs.gpytorch.ai/en/latest/_modules/gpytorch/kernels/rbf_kernel_grad.html)
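To make the target concrete, here is a dense-PyTorch sketch of the block structure being described: the value block k, the two first-order gradient blocks, and the Hessian block, assembled into an n(d+1) x m(d+1) covariance matrix. It assumes the convention k(x, y) = exp(-||x - y||^2 / (2*l^2)); the exact block ordering is illustrative (GPyTorch interleaves value and gradient entries per point), and this is a reference for the math, not a KeOps implementation.

```python
import torch

def rbf_grad_cov(x, y, ell=1.0):
    # x: (n, d), y: (m, d)
    n, d = x.shape
    m = y.shape[0]
    diff = x[:, None, :] - y[None, :, :]                      # (n, m, d)
    k = torch.exp(-(diff ** 2).sum(-1) / (2 * ell ** 2))      # (n, m)
    # First-order blocks:
    #   dk/dy_j =  k * (x_j - y_j) / l^2
    #   dk/dx_i = -k * (x_i - y_i) / l^2
    dky = k[..., None] * diff / ell ** 2                      # (n, m, d)
    dkx = -dky                                                # (n, m, d)
    # Hessian block:
    #   d2k/dx_i dy_j = (k / l^2) * (delta_ij - diff_i * diff_j / l^2)
    eye = torch.eye(d)
    outer = diff[..., :, None] * diff[..., None, :]           # (n, m, d, d)
    hess = (k[..., None, None] / ell ** 2) * (eye - outer / ell ** 2)
    # Assemble per-pair (d+1) x (d+1) blocks, then flatten to
    # an (n*(d+1)) x (m*(d+1)) matrix.
    top = torch.cat([k[..., None, None], dky[..., None, :]], dim=-1)
    bot = torch.cat([dkx[..., :, None], hess], dim=-1)
    blocks = torch.cat([top, bot], dim=-2)                    # (n, m, d+1, d+1)
    return blocks.permute(0, 2, 1, 3).reshape(n * (d + 1), m * (d + 1))
```

For x == y the result is symmetric with ones in the value-value diagonal entries, which is a quick sanity check on the sign conventions of the gradient blocks.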

JGittles commented 2 years ago

Closing this ticket. My approach is now significantly more developed, so I will be formulating a better ticket shortly.