Hi @jacobgil
Thanks for sharing the code. I've been working with it and here's a version I ended up with.
Please feel free to edit it and correct me if my reasoning is wrong.
Here's a list of major changes:
- At least in the current version, replacing keras.relu with tf.relu is unnecessary.
- There was no need to create target_category_loss. The loss you were computing (that's y_c in the paper) can be accessed by simple indexing into the model's output.
- In guided backprop there were also some strange computations, taking a max and then a sum. The output here should just be the gradient of the conv output w.r.t. the input image.
- There are some minor optimizations, such as computing the Grad-CAM map with a dot product instead of a loop, and accessing a model's layer directly by its name.
- One difference from the paper that remains is the l2-normalization in grad_cam. I believe the authors didn't use it, but I kept it because it helped in my case.
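For the dot-product point, here is a minimal sketch of the equivalence (shapes and variable names are illustrative, not taken from the gist): the per-channel weighted sum over the conv feature map collapses to a single matrix-vector product along the channel axis.

```python
import numpy as np

# Illustrative shapes: a conv feature map (H, W, C) and per-channel
# weights (C,), i.e. the global-average-pooled gradients in Grad-CAM.
H, W, C = 14, 14, 512
rng = np.random.RandomState(0)
conv_output = rng.rand(H, W, C).astype(np.float32)
weights = rng.rand(C).astype(np.float32)

# Loop version (as in the original code): accumulate channel by channel.
cam_loop = np.zeros((H, W), dtype=np.float32)
for k, w in enumerate(weights):
    cam_loop += w * conv_output[:, :, k]

# Dot-product version: one contraction over the channel axis.
cam_dot = conv_output.dot(weights)

assert np.allclose(cam_loop, cam_dot, atol=1e-4)
```

The two maps agree up to float rounding; the dot product just lets BLAS do the channel sum in one call.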
Module versions: keras (2.0.8), tensorflow (1.3.0)