Hi,
Thanks for the great package!
I ran into a shape-inconsistency issue that may be caused by LazyTensor autograd.
Could you take a look? Thanks!
Here is an example code.
If I use the same kernel written in plain torch, there is no error.
If I use batch_size=1, there is no error.
The error only appears when using KeOps with batch_size > 1.
```
Traceback (most recent call last):
  File "/playpen-raid1/zyshen/proj/shapmagn/shapmagn/grad_debug.py", line 68, in <module>
    loss.backward()
  File "/playpen-raid1/zyshen/anaconda3/envs/pr/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/playpen-raid1/zyshen/anaconda3/envs/pr/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
  File "/playpen-raid1/zyshen/anaconda3/envs/pr/lib/python3.7/site-packages/torch/autograd/function.py", line 89, in apply
    return self._forward_cls.backward(self, *args)  # type: ignore
  File "/playpen-raid1/zyshen/anaconda3/envs/pr/lib/python3.7/site-packages/pykeops/torch/generic/generic_red.py", line 161, in backward
    grad = grad.reshape(arg_ind.shape)  # The gradient should have the same shape as the input!
RuntimeError: shape '[2, 2000, 3]' is invalid for input of size 6000
```
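Note that the reshape target `[2, 2000, 3]` has 12000 elements while the incoming gradient only has 6000 = 2000 × 3, as if the batch dimension were dropped in the backward pass. For comparison, here is a minimal plain-torch sketch of a batched kernel (the Gaussian kernel, sigma, and sizes are illustrative assumptions, not my exact `grad_debug.py` code) where the gradient comes back with the full batched shape:

```python
import torch

def gaussian_kernel_apply(x, y, b, sigma=0.1):
    """Batched Gaussian kernel matrix-vector product in plain torch.

    x: (B, N, D), y: (B, M, D), b: (B, M, C)  ->  returns (B, N, C)
    """
    # Pairwise squared distances, batched: (B, N, M)
    d2 = ((x[:, :, None, :] - y[:, None, :, :]) ** 2).sum(-1)
    K = torch.exp(-d2 / (2 * sigma ** 2))  # (B, N, M)
    return K @ b                            # (B, N, C)

B, N, M, D, C = 2, 2000, 2000, 3, 3  # batch_size > 1, as in the failing case
x = torch.randn(B, N, D, requires_grad=True)
y = torch.randn(B, M, D)
b = torch.randn(B, M, C)

loss = gaussian_kernel_apply(x, y, b).sum()
loss.backward()

# With plain torch the gradient has the same batched shape as the input.
assert x.grad.shape == x.shape  # (2, 2000, 3)
```

With the equivalent LazyTensor reduction, the same `loss.backward()` call raises the `RuntimeError` above.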