Closed: josafatburmeister closed this issue 2 years ago.
Hi @josafatburmeister,
Thanks for your feedback; let me verify and investigate this issue ASAP.
I can reproduce the issue locally now; it seems we don't test requires_grad=True in CI:
https://github.com/Project-MONAI/MONAI/blob/dev/tests/test_generalized_dice_loss.py
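A gradient test of the kind that seems to be missing there might look like the following (a hypothetical sketch, not the actual CI test; the function name and tensor shapes are assumptions):

import torch
from monai.losses import GeneralizedDiceLoss

def test_generalized_dice_loss_grad():
    # hypothetical check: backward() should succeed when the input requires grad
    prediction = torch.rand(2, 3, 4, 4, requires_grad=True)
    target = torch.randint(0, 2, (2, 3, 4, 4)).float()
    loss = GeneralizedDiceLoss(sigmoid=True)(prediction, target)
    loss.backward()
    assert prediction.grad is not None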
I will try to fix it and also test other losses ASAP.
Thanks.
Hi Nic, the differentiability is verified here: https://github.com/Project-MONAI/MONAI/blob/dev/tests/test_seg_loss_integration.py
@Nic-Ma Thank you very much for your quick reply. Replacing the problematic weight-clamping code with the following lines should fix the problem:
w = self.w_func(ground_o.float())
# mask infinite weights on the whole tensor instead of iterating over views
infs = torch.isinf(w)
w[infs] = 0.0
# substitute the per-sample maximum for the masked entries, out of place
max_values = torch.max(w, dim=1)[0].unsqueeze(dim=1)
w = w + infs * max_values
Hi @josafatburmeister,
Cool, thanks for your suggestion! I verified your code locally and it works fine. Would you like to contribute a PR for it directly?
Thanks in advance.
Yes, I'm happy to submit a PR for that later today.
Describe the bug
The code used for weight clamping in the forward method of the GeneralizedDiceLoss module is not differentiable:
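The snippet itself is not preserved in this copy of the thread; reconstructed from the error message below and the fix suggested above, the loop in question looked roughly like this:

w = self.w_func(ground_o.float())
for b in w:  # iterating over w unbinds it into views
    infs = torch.isinf(b)
    b[infs] = 0.0           # in-place write into an unbind view
    b[infs] = torch.max(b)  # second in-place write; autograd rejects this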
When computing the loss gradients, PyTorch throws the following error:
RuntimeError: Output 0 of UnbindBackward0 is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
To Reproduce
The bug can be reproduced using the following Python script.
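The script itself is not preserved in this copy of the thread; a minimal reconstruction that triggers the error might look like this (tensor shapes are assumptions):

import torch
from monai.losses import GeneralizedDiceLoss

# both tensors require grad; with the unpatched loss, backward() raises the
# RuntimeError quoted above
prediction = torch.rand(2, 3, 3, 3, requires_grad=True)
target = torch.rand(2, 3, 3, 3, requires_grad=True)

loss = GeneralizedDiceLoss()(prediction, target)
loss.backward()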
Note that the error does not occur if requires_grad is set to False for prediction and target.
Expected behavior
The forward method of the GeneralizedDiceLoss module should be differentiable.