Closed Mcc979843753 closed 2 months ago
A tensor with requires_grad set to True comes out of self.projector.fbp() with requires_grad set to False. This breaks the computation graph, so the loss cannot be backpropagated. Can this issue be resolved? The problem does not occur with self.projector(). I noticed that in previous versions the author attempted to use two losses (a projector loss and a CT loss), but in the latest version only the projector loss is used. I hope the author can address this issue, as it is crucial for using LEAP in network training.
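For context, this is the general symptom being described: any operation that leaves PyTorch's autograd (for example, one implemented via NumPy) returns a detached tensor, so gradients stop there. A minimal sketch, using a hypothetical `fbp_like` stand-in rather than the real LEAP code:

```python
import torch

# Hypothetical stand-in for a reconstruction step implemented outside
# autograd (e.g. via NumPy); the round-trip silently detaches the result
# from the computation graph.
def fbp_like(proj):
    return torch.from_numpy(proj.detach().numpy().copy())

proj = torch.rand(4, 4, requires_grad=True)

img = fbp_like(proj)
print(img.requires_grad)   # False: the graph was broken, loss.backward() stops here

img2 = proj * 2.0          # an ordinary differentiable op keeps the graph
print(img2.requires_grad)  # True
```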
@hkimdavis would you please respond to this issue?
fbp() is a separate function in the Projector class that provides an FBP reconstruction. It is not intended to be used as part of training/gradient updates. A possible way to run FBP would be:

```python
for data in data_loader:
    proj, a, b, c... = data
    img = self.projector.fbp(proj)
    y = self.nn_model(img)
```
For the application setup you described, we are currently considering adding a separate FBP class based on torch.nn.Module. Thank you.
Hyojin and I added a new feature to leaptorch.py which makes FBP differentiable. We also added test_recon_projections_NN.py which demonstrates how this can be used.
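For readers curious how an operator like FBP can be made differentiable in principle, the standard PyTorch technique is to wrap it in a custom `torch.autograd.Function` whose backward pass applies the adjoint operator. The sketch below is purely illustrative and is not the actual leaptorch implementation; a fixed matrix `A` stands in for the projection geometry:

```python
import torch

# Hypothetical system matrix standing in for the projection geometry.
A = torch.randn(6, 4)

class DifferentiableFBP(torch.autograd.Function):
    @staticmethod
    def forward(ctx, proj):
        # "Reconstruction": here simply the adjoint applied to the data.
        return A.t() @ proj

    @staticmethod
    def backward(ctx, grad_img):
        # For img = A^T @ proj, the gradient w.r.t. proj is A @ grad_img.
        return A @ grad_img

proj = torch.randn(6, requires_grad=True)
img = DifferentiableFBP.apply(proj)
print(img.requires_grad)      # True: gradients now flow through the op

loss = img.sum()
loss.backward()
print(proj.grad is not None)  # True: the input received a gradient
```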
This fix should have resolved your issue, so I am going to close this, but feel free to open another issue if you run into anything else.