I think SpecialSpmmFunctionFinal's forward pass is intended to compute the row sum of a sparse matrix, and its backward pass returns the gradient with respect to the sparse matrix's values. However, I find that torch.sparse can already handle the backward of the row-sum operation, for example:
```python
import torch

i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])  # row, col indices
v = torch.FloatTensor([3, 4, 5])              # nonzero values
v.requires_grad = True
m = torch.sparse_coo_tensor(i, v, torch.Size([2, 3]))
m.retain_grad()
m1 = torch.sparse.sum(m, dim=1)  # row sum
m1.retain_grad()
m2 = torch.sparse.sum(m1)
m2.backward()
print(v.grad)  # v's gradient is tensor([1., 1., 1.])
```
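For comparison, here is a minimal sketch of the kind of custom autograd function I mean: the forward pass scatter-adds each nonzero value into its row, and the backward pass routes the incoming gradient back to the values. The class and argument names are my own illustration, not the repository's actual code:

```python
import torch

class SparseRowSum(torch.autograd.Function):
    # Illustrative sketch: forward computes the row sums of a sparse
    # matrix given as (indices, values, shape); backward returns the
    # gradient w.r.t. the nonzero values.

    @staticmethod
    def forward(ctx, indices, values, shape):
        ctx.save_for_backward(indices)
        out = torch.zeros(shape[0], dtype=values.dtype)
        out.index_add_(0, indices[0], values)  # scatter-add values into their rows
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (indices,) = ctx.saved_tensors
        # each nonzero value inherits the gradient of its row's sum
        grad_values = grad_output[indices[0]]
        return None, grad_values, None

i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
v.requires_grad = True
row_sums = SparseRowSum.apply(i, v, (2, 3))
row_sums.sum().backward()
print(v.grad)  # tensor([1., 1., 1.]), matching the torch.sparse result above
```

Both approaches give the same gradient, which is why I am wondering whether the custom function is still needed.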
So why did you write the custom autograd function? Or is my understanding wrong?
Waiting for your reply, thanks.