MichiganCOG / Gaze-Attention

Integrating Human Gaze into Attention for Egocentric Activity Recognition (WACV 2021)
MIT License

grad-cam generation #7

Open jpainam opened 1 year ago

jpainam commented 1 year ago

Hi, could you share the util functions you used to generate the Grad-CAM?

Thanks.

kylemin commented 1 year ago

Thank you for your interest.

We used the PyTorch version of Grad-CAM++: link. Please refer to this repository. We also noticed that there is a well-maintained library for various types of CAMs that you can refer to: link2.

Thank you, Kyle
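
Editor's note: for readers landing here, the core of any hook-based Grad-CAM implementation is capturing the target layer's activations on the forward pass and its gradients on the backward pass, then weighting the activations by the spatially pooled gradients. Below is a minimal sketch of that pattern in plain PyTorch; the toy network, layer choice, and input shapes are placeholders for illustration, not this repo's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in network; replace with your own model and its last conv layer.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5),
)
target_layer = model[0]  # the conv layer whose feature maps we visualize

activations, gradients = {}, {}

def forward_hook(module, inp, out):
    activations['value'] = out          # feature maps from the target layer

def backward_hook(module, grad_in, grad_out):
    gradients['value'] = grad_out[0]    # gradients w.r.t. those feature maps

target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)  # non-deprecated hook

inputs = torch.randn(1, 3, 32, 32)
logits = model(inputs)                          # fills activations['value']
logits[0, logits.argmax().item()].backward()    # fills gradients['value']

# Grad-CAM: weight each channel by its spatially averaged gradient,
# sum over channels, apply ReLU, then normalize to [0, 1].
weights = gradients['value'].mean(dim=(-2, -1), keepdim=True)
cam = F.relu((weights * activations['value']).sum(dim=1, keepdim=True))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (1, 1, 32, 32); upsample to the input size if needed
```

Grad-CAM++ refines the channel weights with higher-order gradient terms, but the hook mechanics (and the gradients['value'] pattern discussed below) are the same.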

jpainam commented 1 year ago

Thank you. I was able to generate Grad-CAM for model_base using [this link](https://github.com/1Konny/gradcam_plus_plus-pytorch). For model_gaze, register_backward_hook doesn't fire, but register_forward_hook does, so I don't have gradients['value'].

Is there anything else I'm missing?

You have three submodels (model_base, model_gaze, model_attn). When you said you visualize "the last convolutional layer of our model," which of the three models did you mean?

Thank you

kylemin commented 1 year ago

You can visualize the output after the last conv layer (Mixed_5c) of model_base. You do not need to use register_backward_hook for model_gaze because you are expected to register the hooks only for model_base.
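
Editor's note: in code, this advice amounts to attaching both hooks only to model_base's Mixed_5c and leaving model_gaze and model_attn untouched. A hedged sketch, extending the one above; the attribute path model.model_base.Mixed_5c and the clip shape are assumptions, not necessarily this repo's actual module names:

```python
# Reuses forward_hook/backward_hook and the activations/gradients dicts
# from the earlier sketch. Verify the real attribute path with
# [name for name, _ in model.named_modules()].
target_layer = model.model_base.Mixed_5c

fwd = target_layer.register_forward_hook(forward_hook)
bwd = target_layer.register_full_backward_hook(backward_hook)

# `clip` is a video tensor, e.g. shape (N, 3, T, H, W) for I3D.
logits = model(clip)                          # model_gaze / model_attn need no hooks
logits[0, logits.argmax().item()].backward()  # gradients flow back through Mixed_5c

# I3D feature maps are 5D (N, C, T, H, W): pool gradients over time and space.
weights = gradients['value'].mean(dim=(-3, -2, -1), keepdim=True)
cam = torch.relu((weights * activations['value']).sum(dim=1))  # (N, T, H, W)

fwd.remove()
bwd.remove()
```

This also explains the symptom above: a backward hook registered on model_gaze never fires if gradients are only taken with respect to the visualized branch, so the hooks belong on model_base.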

jpainam commented 1 year ago

Ok, thanks. You train three different networks (I3D, I3D w/ gaze, and I3D w/ gaze and attention), and you visualize the output of Mixed_5c for each network.