rusty1s / pytorch_sparse

PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations
MIT License

Does spspmm operation support autograd? #45

Open changym3 opened 4 years ago

changym3 commented 4 years ago

Hi, you say autograd is supported for value tensors, but it doesn't seem to work in spspmm.

Like this:

import torch
import torch_sparse

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2.0, 3, 4, 5], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4.0], requires_grad=True)
indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

print(valueC.requires_grad)
print(valueC.grad_fn)

And the output is:

False
None

In my case, I want to parameterize both the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder whether this is a bug or just the way it is.

Regards.

rusty1s commented 4 years ago

That's the only function that does not have proper autograd support. Gradients for sparse-sparse matrix multiplication are quite difficult to obtain (since they are usually dense). I had a working but slow implementation up to the 0.4.4 release, but removed it since it wasn't a very good implementation. If you desperately need it, feel free to try it out.
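For reference, here is a minimal, untested sketch (not the old torch_sparse implementation; the class name is made up) of how such a backward could look: the gradient w.r.t. A's values is (dL/dC) Bᵀ evaluated at A's nonzero positions, and analogously for B. It materializes dense intermediates, which is exactly why this approach is slow and memory-hungry for large matrices:

import torch


class DenseGradSpSpMM(torch.autograd.Function):
    """Sparse @ sparse matmul that keeps gradients for both value tensors
    by going through dense matrices in the backward pass (sketch only)."""

    @staticmethod
    def forward(ctx, indexA, valueA, indexB, valueB, m, k, n):
        A = torch.sparse_coo_tensor(indexA, valueA, (m, k)).coalesce()
        B = torch.sparse_coo_tensor(indexB, valueB, (k, n)).coalesce()
        C = torch.sparse.mm(A, B).coalesce()
        indexC = C.indices()
        ctx.save_for_backward(indexA, valueA, indexB, valueB, indexC)
        ctx.sizes = (m, k, n)
        ctx.mark_non_differentiable(indexC)
        return indexC, C.values()

    @staticmethod
    def backward(ctx, grad_indexC, grad_valueC):
        indexA, valueA, indexB, valueB, indexC = ctx.saved_tensors
        m, k, n = ctx.sizes
        # Scatter the incoming gradient onto a dense (m, n) matrix.
        grad_C = torch.zeros(m, n, dtype=grad_valueC.dtype, device=grad_valueC.device)
        grad_C[indexC[0], indexC[1]] = grad_valueC
        # Dense A and B for the chain rule: dL/dA = dL/dC @ B^T, dL/dB = A^T @ dL/dC.
        A = torch.sparse_coo_tensor(indexA, valueA, (m, k)).to_dense()
        B = torch.sparse_coo_tensor(indexB, valueB, (k, n)).to_dense()
        grad_A = grad_C @ B.t()
        grad_B = A.t() @ grad_C
        # Keep only the entries that lie on the input sparsity patterns.
        grad_valueA = grad_A[indexA[0], indexA[1]]
        grad_valueB = grad_B[indexB[0], indexB[1]]
        return None, grad_valueA, None, grad_valueB, None, None, None

Usage would then mirror the spspmm call from the issue, e.g. indexC, valueC = DenseGradSpSpMM.apply(indexA, valueA, indexB, valueB, 3, 3, 2).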

changym3 commented 4 years ago

> That's the only function that does not have proper autograd support. Gradients for sparse-sparse matrix multiplication are quite difficult to obtain (since they are usually dense). I had a working but slow implementation up to the 0.4.4 release, but removed it since it wasn't a very good implementation. If you desperately need it, feel free to try it out.

Hey! Thanks for your great work! I have installed the 0.4.4 release of torch_sparse and it totally works in my experiments! Maybe you could add this information to the documentation. It took me so long to figure out this no-autograd problem.

Thanks a lot again!

LuciusMos commented 4 years ago

> Hey! Thanks for your great work! I have installed the 0.4.4 release of torch_sparse and it totally works in my experiments! Maybe you could add this information to the documentation. It took me so long to figure out this no-autograd problem.
>
> Thanks a lot again!

Thank you so much for raising this question! It has troubled me for almost a week!

rusty1s commented 4 years ago

Sorry for the inconvenience. I have plans to add backward support for spspmm back ASAP, see https://github.com/rusty1s/pytorch_geometric/issues/1465.

jlevy44 commented 3 years ago

Do you have any updates on autograd support?

jlevy44 commented 3 years ago

I'm parameterizing the weights of a sparse matrix to treat it as a locally connected network for a sparsely connected MLP implementation. Could I still run a backward pass to update these weights after calling matmul between this sparse matrix and a dense input?
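Concretely, something like this sketch is what I have in mind (assuming torch_sparse.spmm keeps gradients for the value tensor, as sparse-dense matmul in this library is documented to do; the shapes and values below are just for illustration):

import torch
import torch_sparse

# Sparse "locally connected" weight matrix: only these positions are learnable.
index = torch.tensor([[0, 0, 1], [0, 2, 1]])                # (row, col) of the nonzeros
value = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)  # learnable weights

x = torch.randn(3, 4)                                       # dense input

# (2 x 3 sparse) @ (3 x 4 dense) -> (2 x 4 dense)
out = torch_sparse.spmm(index, value, 2, 3, x)
out.sum().backward()

print(value.grad)  # gradients w.r.t. the sparse weights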

jlevy44 commented 3 years ago

Never mind, I'm already seeing some nice implementations out there! https://pypi.org/project/sparselinear/ https://stackoverflow.com/questions/63893602/neural-network-layer-without-all-connections

JRD971000 commented 3 years ago

Does spspmm still lack autograd support?

github-actions[bot] commented 2 years ago

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?

jaynanavati-az commented 2 years ago

Does spspmm still lack autograd support, @rusty1s? It seems to use SparseTensor, which is supposed to be fully supported by autograd?

rusty1s commented 2 years ago

Sadly yes :(

jaynanavati-az commented 2 years ago

Is there an alternative? It is difficult to get earlier versions of torch_sparse that have this to work on newer CUDA versions... :(

rusty1s commented 2 years ago

There isn't a workaround except for installing an earlier version. If you are interested, we can try to bring it back with your help. WDYT?

jaynanavati-az commented 2 years ago

@rusty1s sounds good, why don't we start with putting back your existing implementation? Is it not better than having nothing?

rusty1s commented 2 years ago

Here's the roadmap in order to achieve this:

rusty1s commented 2 years ago

With PyTorch 1.12, I assume you can also try to use the sparse-matrix multiplication from PyTorch directly. PyTorch recently integrated better sparse matrix support into its library :)
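Roughly, that could look like this untested sketch (it assumes that in your PyTorch version both torch.sparse_coo_tensor and torch.sparse.mm keep the autograd graph for the value tensors; check the resulting gradients before relying on it):

import torch

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2.0, 3, 4, 5], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4.0], requires_grad=True)

A = torch.sparse_coo_tensor(indexA, valueA, (3, 3))
B = torch.sparse_coo_tensor(indexB, valueB, (3, 2))

C = torch.sparse.mm(A, B)        # sparse @ sparse -> sparse
C.to_dense().sum().backward()    # backprop into valueA and valueB
print(valueA.grad, valueB.grad)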