Hi, can you please confirm that `gh2` is the attention maps? https://github.com/rakutentech/FAU_CVPR2021/blob/0bfb778526908f36b6136e836d8b382877bacfa4/inference.py#L53

It has shape `(batch_size, 12, 12, number_action_units)`, with `number_action_units = 12` here. The attention maps are the output of the arrow in fig. 3 of https://openaccess.thecvf.com/content/CVPR2021/papers/Jacob_Facial_Action_Unit_Detection_With_Transformers_CVPR_2021_paper.pdf

When plotting one of the attention maps, I am supposed to see something similar to fig. 1, right?

![image](https://user-images.githubusercontent.com/23446793/230786560-1de61f82-bb78-4a64-b0ae-82167bbadd10.png)
I ran `python inference.py` and extracted `gh2`. I plotted all `gh2[0, :, :, i]` for `i in range(12)` next to the input image, roughly as in the sketch below.
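For reference, this is approximately the plotting code I am using. It is a minimal sketch: `gh2` comes from `inference.py` in this repo, and I assume it has already been evaluated to a NumPy array of shape `(batch_size, 12, 12, 12)`; the helper `plot_attention_maps` is just my own wrapper, not part of the repo.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_attention_maps(gh2, sample_idx=0):
    # gh2: attention maps extracted from inference.py, assumed to be a
    # NumPy array of shape (batch_size, 12, 12, number_action_units)
    # with number_action_units = 12.
    fig, axes = plt.subplots(3, 4, figsize=(12, 9))
    for i, ax in enumerate(axes.flat):
        # One 12x12 attention map per action unit.
        att = gh2[sample_idx, :, :, i]
        im = ax.imshow(att, cmap="viridis")
        ax.set_title(f"AU map {i}")
        ax.axis("off")
        fig.colorbar(im, ax=ax, fraction=0.046)
        # This is where the "unique values per map" below come from.
        print(f"map {i}: unique values = {np.unique(att)}")
    plt.tight_layout()
    plt.show()
```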
But I am seeing something strange. Below are the plots from 0 to 11, followed by the unique values per map, which also look strange. The sigmoid could be doing this, but with or without the sigmoid I should be getting attention maps that point to the ROIs.

Can you help? I may be missing something. Can you show how you plotted the attentions in fig. 1?

Very much appreciated, thanks.
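PS: for fig. 1, my guess is that each 12x12 map is upsampled to the image resolution and blended over the face as a heatmap. The rough sketch below is only my assumption, not code from the repo; `overlay_attention`, the `skimage` resize, and the colormap choice are mine.

```python
import matplotlib.pyplot as plt
from skimage.transform import resize

def overlay_attention(image, att_map, alpha=0.5):
    # image: HxWx3 float array in [0, 1]; att_map: one 12x12 attention map.
    att_up = resize(att_map, image.shape[:2], order=1)  # bilinear upsample to image size
    att_up = (att_up - att_up.min()) / (att_up.max() - att_up.min() + 1e-8)  # normalize to [0, 1]
    plt.imshow(image)
    plt.imshow(att_up, cmap="jet", alpha=alpha)  # heatmap overlay on the face
    plt.axis("off")
    plt.show()
```

Is that roughly what you did, or is there an extra step (e.g. selecting a specific head or layer)?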