Closed · Ignasijus closed this issue 2 years ago
Hi, I'm trying to multiply two sparse tensors that require gradients, but the gradients disappear after using spspmm:
```python
import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float, requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4], dtype=torch.float, requires_grad=True)

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

print(valueC)
# tensor([8., 6., 8.])
print(valueC.grad_fn)
# None
```
Meanwhile, with spmm the gradients are preserved. Is this a bug, or is autograd simply not supported for spspmm?
Yes, spspmm is the only operator that currently lacks autograd support. It is tracked here: https://github.com/rusty1s/pytorch_sparse/issues/45
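In the meantime, for small matrices like the ones in this issue, one possible workaround is to fall back to a dense matmul, which does track gradients. Below is a minimal sketch of that idea (my own code, not part of torch_sparse): `index_put` is differentiable with respect to the inserted values, so the autograd graph from `valueA`/`valueB` survives the densification.

```python
import torch

# Same sparse matrices as in the issue, rebuilt as dense tensors so that
# plain torch.matmul (which supports autograd) can replace spspmm.
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float, requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4], dtype=torch.float, requires_grad=True)

# index_put scatters the values into dense matrices while preserving
# the autograd graph back to valueA and valueB.
A = torch.zeros(3, 3).index_put((indexA[0], indexA[1]), valueA)
B = torch.zeros(3, 2).index_put((indexB[0], indexB[1]), valueB)

C = A @ B
print(C.grad_fn is not None)  # True: the product is part of the graph

C.sum().backward()
print(valueA.grad is not None)  # True: gradients reach the sparse values
```

This obviously defeats the purpose of sparse storage for large graphs, so it only helps as a stopgap until the linked issue is resolved.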