Hey @Zakobian, sorry for the delay. I recently noticed that I had gotten the torch API slightly backwards the first time around. In fact, the torch documentation says that an `autograd.Function` can only be used once (for graph building), so that may be the issue you are encountering.
There's an open PR #1516 with a better API (and updated examples) where the torch API is used correctly. It will likely be merged soon, but if you want to give it a try, just check out my `torch_better_api` branch.
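For illustration, here is a minimal sketch of the new-style pattern (this is not the actual code from PR #1516): `forward` and `backward` are static methods and the class is invoked through `.apply`, so torch builds a fresh graph node on every call instead of reusing one `Function` instance. The calls on the ODL side (`derivative`, `adjoint`) follow the standard ODL operator interface; everything else here is an assumption for the sketch.

```python
import numpy as np
import torch


class OperatorFunction(torch.autograd.Function):
    # New-style API: forward/backward are @staticmethods, and the class
    # is used via OperatorFunction.apply(...), so a fresh graph node is
    # created per call rather than one Function object being reused.

    @staticmethod
    def forward(ctx, x, operator):
        ctx.operator = operator
        ctx.save_for_backward(x)
        # Evaluate the ODL operator outside the autograd graph (CPU only
        # in this sketch; device handling is omitted for brevity).
        result = operator(x.detach().cpu().numpy())
        return torch.as_tensor(np.asarray(result), dtype=x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Chain rule through the operator: adjoint of the derivative.
        deriv = ctx.operator.derivative(x.detach().cpu().numpy())
        grad = deriv.adjoint(grad_output.detach().cpu().numpy())
        # Return one gradient per forward() input; `operator` gets None.
        return torch.as_tensor(np.asarray(grad), dtype=x.dtype), None
```

Usage would then be something like `y = OperatorFunction.apply(x, ray_trafo)` inside a module's `forward`, so every forward pass gets its own graph node.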
Hello, I have been trying to do gradient descent on a simple MNIST image with a `RayTransform` forward operator, but `OperatorAsModule` no longer seems able to propagate gradients backwards after having done so once. The following is an example:
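(The code block from the original issue was not preserved; the following is a plausible reconstruction of such a loop, not the reporter's actual script. The variable names, geometry, and sizes are assumptions, and a Shepp-Logan phantom stands in for the MNIST image.)

```python
import numpy as np
import odl
import torch
from odl.contrib.torch import OperatorAsModule

# 28x28 (MNIST-sized) image space and a small parallel-beam geometry.
space = odl.uniform_discr([-14, -14], [14, 14], [28, 28], dtype='float32')
geometry = odl.tomo.parallel_beam_geometry(space, num_angles=30)
ray_trafo = odl.tomo.RayTransform(space, geometry, impl='astra_cpu')

fwd = OperatorAsModule(ray_trafo)

# Synthetic data in place of the MNIST image from the report.
phantom = odl.phantom.shepp_logan(space, modified=True)
y = torch.from_numpy(np.asarray(ray_trafo(phantom)))[None, None, ...]

x = torch.zeros(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.SGD([x], lr=1e-3)

for it in range(10):
    opt.zero_grad()
    loss = ((fwd(x) - y) ** 2).sum()
    loss.backward()  # produces a gradient on the first pass only
    opt.step()
```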
After one iteration of the loop it is no longer able to produce a gradient. However, adding a line inside the loop (see the sketch below) fixes the problem: the gradient exists, and gradient descent succeeds. I have tried different optimizers, and also skipped the optimizers entirely and used torch.autograd directly, but the issue arises regardless.
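(The exact snippet from the report was lost. Given the single-use `autograd.Function` explanation above, it was plausibly something like re-creating the module at the top of the loop body, so each iteration gets a fresh `Function`:)

```python
fwd = OperatorAsModule(ray_trafo)  # assumed workaround: rebuilt every iteration
```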
Unless there is an issue with how I handle things, it seems that after one forward pass through the module it can no longer produce gradients; i.e., if I do a single pass (sketched below) before the loop, the gradient stays at None.
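(Again, the original snippet was not preserved; consistent with the description, it was presumably a single throwaway forward evaluation, something like:)

```python
fwd(x)  # assumed: one forward pass before entering the optimization loop
```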
I am using odl 0.7.0, torch 1.1.0, and astra 1.8.3. Please let me know if any extra information is required.