YOU-TO-BE opened this issue 3 years ago
Came here to post this because I just got the error on Colab:

```
RuntimeError                              Traceback (most recent call last)
----> 3 backprop.visualize(img, target_class, guided=True, use_gpu=True)

1 frames
/usr/local/lib/python3.7/dist-packages/flashtorch/saliency/backprop.py in visualize(self, input_, target_class, guided, use_gpu, figsize, cmap, alpha, return_output)
    180         # (title, [(image1, cmap, alpha), (image2, cmap, alpha)])
    181         ('Input image',
--> 182          [(format_for_plotting(denormalize(input_)), None, None)]),
    183         ('Gradients across RGB channels',
    184         [(format_for_plotting(standardize_and_clip(gradients)),

/usr/local/lib/python3.7/dist-packages/flashtorch/utils/__init__.py in denormalize(tensor)
    117
    118     for channel, mean, std in zip(denormalized[0], means, stds):
--> 119         channel.mul_(std).add_(mean)
    120
    121     return denormalized

RuntimeError: Output 0 of UnbindBackward is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
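For anyone trying to pin down what's failing: the pattern in `denormalize` can be reproduced outside flashtorch in a few lines (a sketch, assuming PyTorch >= 1.7, where this restriction was introduced):

```python
import torch

# Iterating over a tensor dimension calls unbind(), which returns views.
# Since PyTorch 1.7, modifying one of those views in place raises a
# RuntimeError when the tensor carries autograd history -- which is exactly
# what denormalize() does via channel.mul_(std).add_(mean).
inp = torch.randn(1, 3, 4, 4, requires_grad=True)
tracked = inp * 1.0  # any op, so the tensor has grad history

try:
    for channel in tracked[0]:  # tracked[0] unbinds into per-channel views
        channel.mul_(0.5)       # in-place op on an output of UnbindBackward
except RuntimeError as err:
    print(type(err).__name__)   # same RuntimeError as in the traceback above
```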
@MisaOgura: Any idea what's going wrong here?

I see a related closed issue in torchvision that suggests wrapping the call in torch.no_grad(): https://github.com/pytorch/vision/issues/3025#issuecomment-729972517

...but adding no_grad() would break the backpropagation, wouldn't it? So I'm not sure how to fix this.

EDIT: It seems the breaking change landed in PyTorch 1.7 and affected many other packages, but I haven't found another affected package that was trying to visualize gradients.
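Until there's a fix upstream, one workaround that seems to get past the error is to swap in a version of `denormalize` that detaches before the in-place loop. This is a sketch, not an official patch; the ImageNet mean/std defaults are an assumption on my part:

```python
import torch

def denormalize(tensor,
                means=(0.485, 0.456, 0.406),
                stds=(0.229, 0.224, 0.225)):
    # detach().clone() drops the autograd view relationship, so the
    # in-place per-channel ops below no longer touch outputs of
    # UnbindBackward and the RuntimeError goes away.
    denormalized = tensor.detach().clone()
    for channel, mean, std in zip(denormalized[0], means, stds):
        channel.mul_(std).add_(mean)
    return denormalized
```

Detaching should be safe here, because `denormalize` is only used to prepare the input image for plotting, after the gradients have already been computed -- nothing needs to backpropagate through it. You'd still have to make `backprop.visualize` pick up this version (e.g. by patching whichever copy `backprop.py` imports), which I haven't verified against every flashtorch release.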
@MisaOgura Is there an updated version that fixes this, or any suggested workaround?

I'm also facing this issue with PyTorch 1.8.1 (GPU, CUDA 11.1); I tried without the GPU as well, but still no luck.

Thanks!