drscotthawley opened this issue 3 years ago
Perhaps something important has changed on Colab since these examples were created. The other Colab demo, on activation maximization, also fails, although in that case it raises a RuntimeError: `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!`
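For context, that RuntimeError usually means the input tensor was left on the CPU while the model was moved to the GPU. A minimal sketch of the fix, using a toy model rather than FlashTorch's actual code:

```python
import torch
import torch.nn as nn

# Minimal sketch of the usual cause of that RuntimeError: the model sits
# on one device (e.g. cuda:0) while the input tensor was created on the
# CPU. Moving the input to the model's device before the forward pass
# avoids the mismatch. (Toy model; not FlashTorch's implementation.)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Conv2d(3, 8, kernel_size=3).to(device)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # lives on the CPU
out = model(x.to(device))        # input moved to the model's device
out.sum().backward()             # gradients flow back to x without error

print(x.grad.shape)              # torch.Size([1, 3, 224, 224])
```

On a CPU-only runtime the `.to(device)` is a no-op, which would be consistent with the demos working there.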
These problems only seem to affect the Colab notebooks: when I cloned the repo and ran the non-Colab notebooks locally, the saliency maps rendered fine.
I tried debugging a bit, but only managed to determine that the gradient returned from AlexNet during backpropagation does not have the expected dimensions, so `self.gradients` is never updated because it fails the if statement in `_record_gradients`:
`grad_in[0].shape: torch.Size([1, 64, 55, 55])`, `self.gradients.shape: torch.Size([1, 3, 224, 224])`
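To illustrate the failure mode, here is a sketch of a shape-guarded gradient hook like the one described above (the class and attribute names mirror the report but are illustrative, not FlashTorch's actual implementation): the hook only stores a gradient whose shape matches the preallocated input shape, so a layer gradient of shape `[1, 64, 55, 55]` is silently dropped and `self.gradients` stays all zeros, which would render as a uniform-color image.

```python
import torch

class GradientRecorder:
    """Illustrative stand-in for the guarded hook described above."""

    def __init__(self, input_shape=(1, 3, 224, 224)):
        # Preallocated buffer matching the network input's shape.
        self.gradients = torch.zeros(input_shape)

    def _record_gradients(self, module, grad_in, grad_out):
        # The guard that fails on a Colab GPU: the incoming gradient's
        # shape does not match, so nothing is ever recorded.
        if self.gradients.shape == grad_in[0].shape:
            self.gradients = grad_in[0]

recorder = GradientRecorder()

# Simulate what the log shows on a Colab GPU: the hook receives the
# first conv layer's gradient instead of the input gradient.
recorder._record_gradients(None, (torch.randn(1, 64, 55, 55),), None)
print(recorder.gradients.abs().sum().item())  # 0.0 -- never updated
```

The same mechanism would explain why CPU/TPU runs work: there the hook presumably receives the `[1, 3, 224, 224]` input gradient, so the guard passes.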
When running on CPU or TPU in Colab, it works fine!
Thanks so much for sharing your work and making it so easy (in theory!) to use, although I haven't been able to get your Colab example(s) to work.
Describe the bug
The two middle images for each example, the ones involving gradients ("Gradients across RGB channels" and "Max Gradients"), appear as a uniform color: the RGB image is all grey and the Max image is all purple.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The gradient images should show content, such as colored pixels around the owl's eyes as in your Medium post. That is not what running the Colab demo gives me; see my sample screenshot below.
Screenshots
![Screenshot from 2020-09-29 00-20-16](https://user-images.githubusercontent.com/13925685/94515957-5da4f980-01ea-11eb-813a-221c40309cad.png)
Environment (please complete the following information):
Colab (whatever OS that runs). Brand-new installation of FlashTorch via
`!pip install flashtorch`
in the notebook. Looks like FlashTorch 0.1.3 and Torch 1.6.

Additional context
From pip (after I re-ran it a second time):