-
Isn't there a bug in the way negative gradients are filtered out for ReLUs in Guided Backprop?
What you do is:
`torch.clamp(grad_in[0], min=0.0)`,
while the paper states:
> rather than mas…
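If I understand PyTorch's hook semantics correctly, `grad_in[0]` for a ReLU already carries the forward mask (the gradient is zero wherever the input was negative), so clamping it adds the negative-gradient mask on top. A small NumPy sketch with illustrative values (not the repository's actual hook code) showing that the clamp reproduces the paper's combined rule:

```python
import numpy as np

# Toy pre-activation input and incoming gradient for a single ReLU
x = np.array([-1.0, 2.0, 3.0, 4.0])
grad_out = np.array([0.5, -0.5, 1.0, -1.0])

# Standard ReLU backward already applies the forward mask (x > 0),
# which is what grad_in[0] contains inside the hook
grad_in = grad_out * (x > 0)

# The equivalent of torch.clamp(grad_in[0], min=0.0)
guided = np.clip(grad_in, 0.0, None)

# The paper's rule: mask by the forward ReLU AND by positive gradients
paper_rule = grad_out * (x > 0) * (grad_out > 0)

print(np.allclose(guided, paper_rule))  # True
```

So the single clamp on `grad_in` is, numerically, the same as applying both masks explicitly.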
-
NStepLSTM fails in the backward pass when chainer.config.train=False and cuDNN is enabled.
* Conditions
- OS/Platform: Colaboratory
- Chainer version: 5.0.0
- CuPy version: 5.0.0
- CUDA version…
-
Hello,
first off: thanks a lot for open-sourcing all these methods in a single place of reference, incredibly useful!
I am trying to use `src/guided_backprop.py` to visualize predictions of a `t…
-
I am trying to develop a minimalist, simple-to-use and effective visualization program for visualizing what a deep network learns, along the lines of the work presented in the paper https://arxiv.org/pd…
-
Hello,
I read your excellent article about network interpretability methods. I have recently been trying to visualize ResNet via Guided Backprop.
But I ran into some trouble when handling the Batch N…
-
Why do you replace ReLU with GuidedBackpropReLU?
-
inception_v3 does not have a ReLU layer; it calls the F.relu function in its forward pass instead, so you cannot swap out the ReLU this way for inception_v3.
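A pure-Python sketch (no torch required; the class and function names are hypothetical stand-ins for `nn.Module` and `F.relu`) of why replacing ReLU *modules* on a model misses *functional* relu calls like inception_v3's:

```python
def relu(x):
    # Stand-in for the free function F.relu
    return max(x, 0.0)

class ModuleNet:
    # Stores the activation as a replaceable attribute, like nn.ReLU
    def __init__(self):
        self.act = relu
    def forward(self, x):
        return self.act(x)

class FunctionalNet:
    # Calls the free function directly, like inception_v3's forward;
    # nothing on the instance can be swapped to change this call
    def forward(self, x):
        return relu(x)

guided = lambda x: x * 2.0  # pretend replacement activation
m, f = ModuleNet(), FunctionalNet()
m.act = guided              # module-style replacement takes effect

print(m.forward(-3.0), f.forward(-3.0))  # prints: -6.0 0.0
```

The module-based net picks up the replacement, while the functional net silently keeps the original ReLU, which is why attribute-walking replacement schemes don't cover models that use `F.relu` inline.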
-
I'm trying to implement Saliency Maps and Guided Backpropagation in Keras, based on the following Lasagne code:
https://github.com/Lasagne/Recipes/blob/master/examples/Saliency%20Maps%20and%20Guided%2…
-
Does anyone know how to display/visualize the trained filters for the generators and discriminators after training is finished?
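One common approach is to pull the conv weight tensors out of the trained model (e.g. via `state_dict()`), normalize each filter, and tile them into a single image for `imshow`. A minimal NumPy sketch under that assumption (the `(N, C, H, W)` layout and the helper name are mine, not from any particular repo):

```python
import numpy as np

def filters_to_grid(weights, cols=8):
    """Tile an (N, C, H, W) conv weight array into one 2D image."""
    n, c, h, w = weights.shape
    rows = int(np.ceil(n / cols))
    grid = np.zeros((rows * h, cols * w))
    for i in range(n):
        f = weights[i].mean(axis=0)                    # collapse channels to 2D
        f = (f - f.min()) / (f.max() - f.min() + 1e-8) # per-filter min-max scale
        r, col = divmod(i, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = f
    return grid

# Stand-in for real generator/discriminator conv weights
w = np.random.randn(16, 3, 5, 5)
grid = filters_to_grid(w)
print(grid.shape)  # (10, 40)
```

The returned grid can then be displayed with `matplotlib.pyplot.imshow(grid, cmap="gray")`; first-layer filters are usually the only ones that are directly interpretable this way.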
-
Does it theoretically make sense? I am not sure...
ResNet is different from VGG:
* It learns residuals.
* It has a block structure and shortcut connections.
Just try it with code: ResNet50, slim model. See wha…