-
Hi,
Thank you for this very nice work.
I've been trying to encapsulate GradCam into a single wrapper that can be used like any other model for prediction from dataloaders.
Here is what I di…
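One way to approach that goal is a thin `nn.Module` wrapper that registers hooks on a target layer and returns the class-activation map alongside the logits. A minimal sketch (the class name, toy model, and hook layout are all illustrative, not the issue author's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradCamWrapper(nn.Module):
    """Hypothetical wrapper: forward() returns (logits, cam), so the object
    can be called in a dataloader loop like any other model."""
    def __init__(self, model, target_layer):
        super().__init__()
        self.model = model
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inp, out):
        self.activations = out.detach()

    def _save_gradient(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def forward(self, x):
        logits = self.model(x)
        score = logits.max(dim=1).values.sum()  # top-class score per sample
        self.model.zero_grad()
        score.backward(retain_graph=True)
        # Grad-CAM: weight each channel by its spatially averaged gradient
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * self.activations).sum(dim=1))
        return logits, cam

# usage with a toy CNN (hypothetical architecture)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 10))
wrapper = GradCamWrapper(model, model[0])
logits, cam = wrapper(torch.randn(2, 3, 16, 16))
```

Note the caveat that the wrapper runs a backward pass inside `forward`, so it cannot be called under `torch.no_grad()` the way a plain inference model can.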
-
Possible culprits:
- Saving models in Ignite? Serialization issues?
- Possible bug in test.py?
Update: the model is definitely trained correctly, and the issue does not seem to be related to t…
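A quick way to rule serialization in or out is to round-trip the `state_dict` and compare outputs before and after loading. A generic PyTorch sketch (toy model, not the repo's actual test.py; Ignite's checkpoint handlers wrap `torch.save` in much the same way):

```python
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# serialize the state_dict to an in-memory buffer
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)

# rebuild the same architecture and load the saved weights
reloaded = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
reloaded.load_state_dict(torch.load(buf))
reloaded.eval()

x = torch.randn(5, 4)
with torch.no_grad():
    same = torch.allclose(model(x), reloaded(x))
print(same)  # True when save/load is lossless
```

If the outputs match here but diverge in test.py, the problem is more likely in how the test script builds the model or preprocesses inputs than in serialization itself.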
-
Hi, first of all thanks for the amazing work you have done with Captum!
I have a question on the Deconvolution implementation. Most likely I'm missing an important implementation detail or using th…
-
### Here is the debug info for Model Optimizer:
Op: Deconvolution
[ 2019-07-16 21:51:00,931 ] [ DEBUG ] [ infer:140 ] Inputs:
[ 2019-07-16 21:51:00,931 ] [ DEBUG ] [ infer:34 ] input[0]: shape = …
-
I found the "target_layer_names" for these two models, but when I run the modified code, I get the following error:
**RuntimeError: size mismatch, m1: [1 x 277248], m2: [768 x 1000] at /opt/conda/con…
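The shapes in the error are telling: 277248 = 768 × 19 × 19, so the flattened feature map is much larger than the 768 features the classifier head expects, which typically means the input resolution (or a truncated forward pass) changed the spatial size upstream. A toy illustration of the failure and one common fix (layer sizes are illustrative, not the actual model's):

```python
import torch
import torch.nn as nn

# a classifier head that expects exactly 768 flattened features
head = nn.Linear(768, 1000)

ok = torch.randn(1, 768)
print(head(ok).shape)  # torch.Size([1, 1000])

# a wrong spatial size upstream inflates the flattened vector
too_big = torch.randn(1, 277248)  # 768 channels * 19 * 19
try:
    head(too_big)
except RuntimeError as e:
    print("size mismatch:", e)

# common fix: adaptive pooling before the flatten pins the spatial size to 1x1
pool = nn.AdaptiveAvgPool2d(1)
feat = torch.randn(1, 768, 19, 19)  # e.g. an unexpected input resolution
print(head(pool(feat).flatten(1)).shape)  # torch.Size([1, 1000])
```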
-
https://nbviewer.jupyter.org/github/UntangleAI/example/blob/master/stylized_imagenet_vis_check_alexnet.ipynb
If you still want to check it out.
Would appreciate insights as to why stylised image…
-
Hi, Thank you very much for this awesome repo!
In the Guided Backprop implementation (in guided_backprop.py) I don't see why it is necessary to block the gradients where the neuron didn't activate ( th…
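For context, the guided-backprop rule for ReLU zeroes the gradient both where the forward input was negative (the ordinary ReLU gradient) and where the incoming gradient itself is negative. A small sketch of that rule in isolation (a hypothetical standalone function, not Captum's actual implementation):

```python
import torch

def guided_relu_grad(forward_input, grad_output):
    """Guided backprop ReLU backward: keep the gradient only where the
    neuron activated (input > 0) AND the incoming gradient is positive."""
    return grad_output * (forward_input > 0).float() * (grad_output > 0).float()

x = torch.tensor([-1.0, 2.0, 3.0, 0.5])  # pre-ReLU activations
g = torch.tensor([ 0.7, -0.2, 0.4, 0.9])  # gradient arriving from above
print(guided_relu_grad(x, g))  # only the last two positions survive both masks
```

Dropping the `forward_input > 0` mask would turn this into the DeConvNet rule, which is exactly the distinction the question is probing.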
-
In application programming, we have debugging and error checking statements like print, assert, try-catch, etc. But when it comes to deep neural networks, debugging becomes a bit tricky. Visualizing C…
-
Hey Alber,
I was looking at the analysis results of gradient vs Guided Backprop/DeConvNet methods for simple models. After reading the papers, my impression is that these methods are similar excep…
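The difference between the three methods reduces to how the ReLU backward pass is defined: plain gradient masks by the forward input, DeConvNet masks by the backward signal, and guided backprop masks by both, so on a model with no ReLUs (or where all units activate) they coincide. A small numeric sketch of the three rules (illustrative values only):

```python
import torch

x = torch.tensor([-1.0, 2.0, -3.0, 4.0])  # pre-ReLU activations
g = torch.tensor([ 0.5, -0.5, 0.5, 0.5])  # gradient arriving from above

pos_in  = (x > 0).float()  # did the neuron activate in the forward pass?
pos_out = (g > 0).float()  # is the incoming gradient positive?

gradient  = g * pos_in            # plain backprop: mask by forward input
deconvnet = g * pos_out           # DeConvNet: mask by backward signal
guided    = g * pos_in * pos_out  # guided backprop: intersection of both

print(gradient, deconvnet, guided)
```

Here the three results differ at every position where exactly one of the two masks fires, which is why the methods agree on some simple models yet diverge on deeper ReLU networks.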
-
# creating placeholders to pass featuremaps and
# creating gradient ops
featuremap = [tf.placeholder(tf.float32) for i in range(config["N"])]  # float dtype, so tf.gradients can flow
reconstruct = [tf.gradients(tf.transpose(tf.tra…