Closed ajay-bhargava closed 4 years ago
@ajay-bhargava Hey, so are you able to visualize any layers? (This just calculates the output of the gradient, which we then visualize.) The layer I mentioned in the code is the output of the last Conv layer, so it will show the activated area of the image. (I have never tried to visualize any other layer, to be honest, so I need to check up on that.)
Did you check the layers in the model, select the attention block layer, and try to put that layer in the visualization? Check this one out: https://www.kaggle.com/sironghuang/understanding-pytorch-hooks.
Let me know if you need some help. I will try to look into this too.
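For reference, the usual way to grab an intermediate layer's output in PyTorch (as the Kaggle notebook above describes) is a forward hook. Here is a minimal, hedged sketch — the toy model and the layer chosen stand in for the Attention U-Net's attention-gate module, and are not the actual layer names from this repository:

```python
import torch
import torch.nn as nn

# Toy model: the last Conv2d stands in for the layer whose output
# we want to visualize (e.g. an attention gate in Attention U-Net).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),  # hypothetical stand-in layer
)

activations = {}

def save_activation(name):
    # Forward hooks receive (module, inputs, output); we stash the output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register the hook on the layer you want to visualize.
model[2].register_forward_hook(save_activation("attention"))

x = torch.randn(1, 1, 32, 32)
_ = model(x)

# activations["attention"] now holds the captured feature map,
# ready to be plotted as a heatmap over the input image.
print(activations["attention"].shape)
```

The same pattern applies to any submodule of the real model: register the hook on the attention block instead of the final conv, run a forward pass, and visualize the captured tensor.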
Hi,
Great job on implementing the models from Ozan Oktay. Unfortunately, I am having some difficulty interpreting how you're visualizing the attention gates (intermediate kernels/layers) in the Attention U-Net model. Could you please provide some documentation on your implementation?
In particular, you lose me here:
which is referenced here:
How is this code grabbing the Attention-Gates in the interior of the model?
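One way to see which interior layers are even available to hook into is to enumerate the model's submodules by name. A minimal sketch — the module names below (`attention_gate`, etc.) are hypothetical placeholders, not the repository's actual layer names:

```python
import torch.nn as nn

# Hypothetical toy model; in practice you would iterate over the
# actual Attention U-Net instance to find its attention-gate modules.
model = nn.Sequential()
model.add_module("conv1", nn.Conv2d(1, 8, 3, padding=1))
model.add_module("attention_gate", nn.Conv2d(8, 8, 1))  # placeholder name
model.add_module("conv2", nn.Conv2d(8, 1, 3, padding=1))

# named_modules() walks the full module tree, so nested attention
# blocks inside encoder/decoder stages show up with dotted names.
for name, module in model.named_modules():
    if name:  # skip the root module, whose name is empty
        print(name, "->", module.__class__.__name__)
```

Once the attention-gate module's name is known, a forward hook registered on that submodule captures its output during a normal forward pass.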