rusty1s / pytorch_sparse

PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations
MIT License

Gradients disappear with Sparse Sparse Matrix Multiplication #239

Closed Ignasijus closed 2 years ago

Ignasijus commented 2 years ago

Hi, I am trying to multiply two sparse tensors whose values require gradients, but the gradients disappear after calling spspmm:

```python
import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float, requires_grad=True)

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4], dtype=torch.float, requires_grad=True)

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
print(valueC)
# tensor([8.0, 6.0, 8.0])
print(valueC.grad_fn)
# None
```

Meanwhile, with spmm the gradients are preserved. Is this a bug, or is autograd simply not supported for spspmm?
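For reference, one possible workaround (my own sketch, not from this thread) is to densify both operands with core PyTorch before multiplying: `torch.sparse_coo_tensor` and `to_dense` are differentiable with respect to the value tensors, so the graph is kept. This obviously gives up the sparse-times-sparse performance benefit and only makes sense for small matrices:

```python
import torch

# Same matrices as in the snippet above.
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float, requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4], dtype=torch.float, requires_grad=True)

# Densify, multiply, and let autograd track everything.
A = torch.sparse_coo_tensor(indexA, valueA, (3, 3)).to_dense()
B = torch.sparse_coo_tensor(indexB, valueB, (3, 2)).to_dense()
C = A @ B

print(C.grad_fn is not None)  # True: the result stays in the graph
C.sum().backward()
print(valueA.grad)            # gradients now flow back to the sparse values
```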

rusty1s commented 2 years ago

Yes, spspmm is the only operator that currently lacks autograd support. It is tracked here: https://github.com/rusty1s/pytorch_sparse/issues/45
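To illustrate the contrast, a minimal check (my sketch, using core PyTorch's `torch.sparse.mm` for the sparse-dense case rather than `torch_sparse.spmm`) shows that sparse-dense multiplication does keep the graph:

```python
import torch

# Same sparse matrix A as in the issue.
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float, requires_grad=True)
A = torch.sparse_coo_tensor(indexA, valueA, (3, 3))

# Sparse-dense mm in core PyTorch supports backward w.r.t. the sparse values,
# matching the observation that spmm preserves gradients while spspmm does not.
dense = torch.ones(3, 2)
out = torch.sparse.mm(A, dense)

print(out.grad_fn is not None)  # True: autograd is tracked here
```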