I have a trained residual attention model, and I want to visualize the masks shown in Figure 1. Any idea how the authors do that? @tengshaofeng If you have already done it, can you share the code to actually visualize the attention masks?

I have not done that, but I think you can visualize the attention map, such as `self.conv1_1_blocks` in `AttentionModule_stage1_cifar`. In the equation `out = (1 + out_conv1_1_blocks) * out_trunk`, the feature before the mask is `out_trunk`, the attention mask is `out_conv1_1_blocks`, and the feature after the mask is `out`.
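
A minimal sketch of one way to do this, assuming a trained PyTorch model whose mask branch is reachable as a module attribute. The names `model`, `image`, and `attention_module1.conv1_1_blocks` below are placeholders based on the comment above, not the repo's confirmed API; check the actual attribute names in `AttentionModule_stage1_cifar`. The idea is to register a forward hook on the mask branch, run one normal forward pass, and plot the captured tensor as a heat map:

```python
import torch
import matplotlib.pyplot as plt

masks = {}

def save_mask(name):
    # Forward hook that stores the module's output (the attention mask).
    def hook(module, inputs, output):
        masks[name] = output.detach().cpu()
    return hook

# `model` is your trained residual attention network; the attribute path
# below is hypothetical -- point it at the last layer of the mask branch.
handle = model.attention_module1.conv1_1_blocks.register_forward_hook(
    save_mask("stage1"))

model.eval()
with torch.no_grad():
    _ = model(image.unsqueeze(0))  # `image` is a 3xHxW input tensor
handle.remove()

# Average the mask over channels and normalize to [0, 1] for display.
# The mask is spatially smaller than the input, so upsample it first
# (e.g. with torch.nn.functional.interpolate) if you want an overlay.
mask = masks["stage1"][0].mean(dim=0)  # (h, w)
mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
plt.imshow(mask.numpy(), cmap="jet")
plt.colorbar()
plt.title("Attention mask (stage 1)")
plt.show()
```

If you hook the right branch, a mask value near 1 keeps the trunk feature roughly doubled (`(1 + 1) * out_trunk`) and a value near 0 leaves it unchanged, which is what the soft-mask figures in the paper are showing.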