getkeops / keops

KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows
https://www.kernel-operations.io
MIT License

Computing the Trace of a kernel. #217

Closed · LaetitiaPapaxanthos closed this issue 2 years ago

LaetitiaPapaxanthos commented 2 years ago

Hi everyone and thank you for the great package!

I would like to know if there is a good way to compute the trace of the kernel_ij variable defined below.

A is an m×d matrix and B an n×d matrix; m and n are very large (5e4 to 1e6) and d is small (5 to 20).

    import torch
    from pykeops.torch import LazyTensor

    # Symbolic (m, n) kernel matrix: never materialized in memory
    A_i = LazyTensor(A.view(m, 1, d))  # (m, 1, d) "i" variable
    B_j = LazyTensor(B.view(1, n, d))  # (1, n, d) "j" variable

    distance_ij = ((A_i - B_j)**2).sum(dim=2)  # squared Euclidean distances
    kernel_ij = (- distance_ij).exp()          # Gaussian kernel
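For context, I still need kernel_ij elsewhere in my code, where it is consumed through KeOps reductions along these lines (the vector v below is purely illustrative):

    # Row sums: KeOps reduces over j and returns a dense (m, 1) tensor
    row_sums = kernel_ij.sum(dim=1)

    # Kernel-vector product with an illustrative right-hand side v
    v = torch.randn(n, 1)
    Kv = kernel_ij @ v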

Additionally, is there a way to access the diagonal (without computing the Trace this time)?

Thank you very much for your help!

jeanfeydy commented 2 years ago

Hi @LaetitiaPapaxanthos ,

Thanks for your interest in the library! If I understand your question correctly, you don't need KeOps here: the trace and the diagonal of your kernel matrix can be computed efficiently with:

    # The diagonal only involves the matched pairs (A[i], B[i]),
    # so this costs O(min(m, n) * d) time and memory
    mn = min(m, n)
    distance_ii = ((A[:mn] - B[:mn])**2).sum(dim=-1)
    diagonal = (- distance_ii).exp()
    trace = diagonal.sum()
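To convince yourself that this matches the LazyTensor formula, you can check it against a brute-force dense kernel on small random data (a quick sketch; the sizes here are hypothetical and kept small so that the full matrix fits in memory):

    import torch

    m, n, d = 500, 400, 8  # small, illustrative sizes
    A, B = torch.randn(m, d), torch.randn(n, d)

    mn = min(m, n)
    diag = (-((A[:mn] - B[:mn])**2).sum(-1)).exp()

    # Brute-force reference: materialize the full (m, n) kernel matrix
    K = (-((A[:, None, :] - B[None, :, :])**2).sum(-1)).exp()
    assert torch.allclose(diag, K.diagonal())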

(KeOps is not relevant here, since there is no m*n computation to speed up.) Of course, if you are also using the kernel matrix elsewhere as a KeOps LazyTensor, your code will then contain two copies of your kernel formula: one written with LazyTensors, and one written with dense PyTorch tensors for the diagonal.
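Concretely, the duplication looks something like this sketch:

    # Copy 1: symbolic, for the large (m, n) reductions
    kernel_ij = (-((A_i - B_j)**2).sum(dim=2)).exp()

    # Copy 2: dense, just for the diagonal and the trace
    diagonal = (-((A[:mn] - B[:mn])**2).sum(dim=-1)).exp()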

We can all agree that this is not elegant.
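In the meantime, one workaround (a sketch, not an official KeOps pattern) is to exploit the fact that LazyTensors mirror the PyTorch tensor API for these elementwise operations, so you can write the formula once as a plain function and apply it to dense tensors and LazyTensors alike:

    def gauss_kernel(x, y):
        # Works both on dense torch tensors and on KeOps LazyTensors,
        # since both support (x - y), **2, .sum(-1) and .exp()
        return (-((x - y)**2).sum(-1)).exp()

    kernel_ij = gauss_kernel(A_i, B_j)       # symbolic (m, n) LazyTensor
    diagonal = gauss_kernel(A[:mn], B[:mn])  # dense (mn,) diagonal
    trace = diagonal.sum()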

We intend to fix this issue in 2022 with a 100% NumPy-compatible SymbolicArray API that will directly support these operations (e.g. np.diagonal)... but this will take us a few months: we are currently focused on rolling out our (much improved) compilation engine with KeOps v2.0.

Does this make sense to you?

Best regards, Jean

LaetitiaPapaxanthos commented 2 years ago

Thank you for your quick reply. I am in the case where I still need the LazyTensor, but I agree that it is not needed to compute the diagonal. I'll close the issue, thanks a lot!