adam-hartshorne opened this issue 11 months ago
Hi @adam-hartshorne,
Thanks for your interest in our library, and apologies for the long delay (I have had a very busy August and September).
The workaround described in the paper is actually fairly efficient: once the optimal indices `s_i` have been computed using a KeOps `LazyTensor` reduction, indexing the `y`'s with `y[s_i, :]` is a cheap, O(N) operation. Getting rid of the workaround and simply making the `.min()` reduction differentiable is certainly on our to-do list - but at the moment, making sure that KeOps works flawlessly on all configurations is our main priority.
Now, to get the best performance in the backward pass when many values of `s_i` are identical, you may actually want to replace `y[s_i, :]` with the equivalent `torch.index_select(y, 0, s_i)`, whose backward operator is faster, as discussed here and there.
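As a quick illustration (the sizes and names here are made up), the two calls are interchangeable in the forward pass:

```python
import torch

# y holds M candidate points; s_i contains many repeated indices,
# as happens when several query points share the same nearest neighbour.
y = torch.randn(1000, 3, requires_grad=True)
s_i = torch.randint(0, 1000, (5000,))  # duplicates are very likely here

a = y[s_i, :]                      # advanced indexing
b = torch.index_select(y, 0, s_i)  # equivalent result, faster backward
                                   # when indices repeat
assert torch.equal(a, b)           # identical forward output
```

Only the backward implementation differs, so swapping one for the other is a drop-in change.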
Does this work for you?
Best regards, Jean
I was looking through the following paper, which outlines how to code the Chamfer Distance using KeOps.
Fast geometric learning with symbolic matrices https://www.jeanfeydy.com/Papers/KeOps_NeurIPS_2020.pdf
However, it uses the `.min()` operator, which isn't supported by back-propagation. The code suggests using the following alternative to the `min` operator instead:
However, this requires the use of non-lazy tensors, which negates one of the major benefits of KeOps.
Thus, I was wondering: is it possible to efficiently calculate the Chamfer Distance using the `LazyTensor` representation?