getkeops / keops

KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows
https://www.kernel-operations.io
MIT License

Best way to implement a gradient kernel #205

Open fwilliams opened 2 years ago

fwilliams commented 2 years ago

Hi, I'm working with Gaussian Processes with derivative observations and am trying to invert a kernel matrix of the following form:

[image: the block-wise kernel formula]
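(Since the image is not preserved here: for a GP with derivative observations, the block presumably takes the standard value/gradient covariance form, something like

$$
K(x, x') =
\begin{pmatrix}
k(x, x') & \nabla_{x'} k(x, x')^\top \\
\nabla_{x} k(x, x') & \nabla_{x} \nabla_{x'}^\top k(x, x')
\end{pmatrix},
$$

although the exact kernel in the original issue may differ.)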

It would be nice to use KernelSolve in this case to solve the linear system. The expressions for the gradient parts of the kernel are fairly easy to write out as symbolic formulae, but I don't see an obvious way to write a reduction expression that I can plug directly into KernelSolve. Any ideas?

Thanks in advance!

jeanfeydy commented 2 years ago

Hi @fwilliams ,

All my apologies for the long delay; Covid over Christmas has been pretty disruptive here in Paris :-) To be clear: are you trying to solve a “block-wise” linear system where each block “at location (i, j)” is given by the expression above, evaluated on the points x_i and x’_j?

If this is the case, I believe that the simplest solution could be to:

  1. Understand your “big”, block-wise linear operator as a sum of four linear operators. If you already have closed-form expressions for the four parts of your block, this should be easy, assuming that you re-order the “input” and “output” vectors accordingly. From a matrix perspective, the idea is to re-order a collection of many “small blocks with 4 tiles” into a matrix with “only 4 big blocks”.
  2. Write a linear function that implements your linear operator using 4 distinct KeOps reductions, as in the sketch after this list.
  3. Use the SciPy solvers to solve your linear system. As shown in this tutorial and those benchmarks, scipy.sparse.linalg really is a great library. Moving data between the GPU (for the KeOps reductions) and the main CPU RAM (for the SciPy computations in e.g. a conjugate gradient solver) is a bit wasteful, but I assume that you won’t lose more than a factor of 2 or 3.
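Here is a minimal sketch of what I have in mind, assuming 1D points and a Gaussian kernel (so that each small block has 2×2 scalar tiles). The kernel choice, the sizes N, s and the ridge alpha are placeholders, not your actual setup:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from pykeops.numpy import LazyTensor

N, s, alpha = 10000, 0.5, 1e-6  # placeholder size, bandwidth and ridge
x = np.random.rand(N, 1)        # 1D sample points

x_i = LazyTensor(x[:, None, :])  # (N, 1, 1): "i" variable
x_j = LazyTensor(x[None, :, :])  # (1, N, 1): "j" variable
d_ij = x_i - x_j                 # pairwise differences x_i - x_j
k_ij = (-d_ij ** 2 / (2 * s ** 2)).exp()  # Gaussian kernel k(x_i, x_j)

# The four tiles of the block kernel, as symbolic (N, N) LazyTensors:
K00 = k_ij                                      # value / value
K01 = (d_ij / s ** 2) * k_ij                    # value / derivative
K10 = -(d_ij / s ** 2) * k_ij                   # derivative / value (= K01.T)
K11 = (1 / s ** 2 - d_ij ** 2 / s ** 4) * k_ij  # derivative / derivative

def matvec(v):
    # Apply the big (2N, 2N) operator to a re-ordered vector
    # [N values; N derivatives], using 4 KeOps reductions (step 2).
    a, b = v[:N].reshape(N, 1), v[N:].reshape(N, 1)
    top = K00 @ a + K01 @ b + alpha * a  # GPU reductions, plus a small ridge
    bot = K10 @ a + K11 @ b + alpha * b
    return np.concatenate([top.ravel(), bot.ravel()])

A = LinearOperator((2 * N, 2 * N), matvec=matvec, dtype=np.float64)
rhs = np.random.rand(2 * N)
sol, info = cg(A, rhs)  # SciPy conjugate gradient, on the CPU (step 3)
```

Since K10 is the transpose of K01, the big operator is symmetric, and the small ridge makes it positive definite, so plain cg should converge; for a non-symmetric variant you could swap in gmres from the same module, with the exact same LinearOperator.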

What do you think?

Best regards, Jean