megvii-research / MOTR

[ECCV2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer

Attention Map #53

Open amindehnavi opened 2 years ago

amindehnavi commented 2 years ago

Hi, is there any way to get the output attention maps from the `model.transformer.decoder.layers[i].cross_attn` layer? Following the chain of referenced functions, I end up at the `MSDA.ms_deform_attn_forward` call inside the `forward` method of the `MSDeformAttnFunction` class in `./models/ops/functions/ms_deform_attn_func.py`, and I couldn't find any argument that can be set to `True` to make the attention map part of the output.

`./models/deformable_transformer_plus/DeformableTransformerDecoderLayer` (screenshot)

`./models/ops/modules/ms_deform_attn.py` (screenshot)

`./models/ops/functions/ms_deform_attn_func.py` (screenshot)
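One possible workaround, assuming the module follows the Deformable-DETR pattern: in `MSDeformAttn.forward` the attention weights are produced by a `Linear` layer (`self.attention_weights`) followed by a softmax *before* they are handed to the CUDA kernel, so a forward hook on that `Linear` submodule can recover them without touching `MSDA.ms_deform_attn_forward`. The sketch below uses a toy stand-in module (the real one needs the compiled ops), and the head/level/point counts (8/4/4) are assumed Deformable-DETR defaults, not values confirmed from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for MSDeformAttn (models/ops/modules/ms_deform_attn.py):
# the real module computes attention_weights with a Linear + softmax and
# then passes them into the CUDA kernel. Hooking the Linear lets us
# reconstruct the weights outside the module.
N_HEADS, N_LEVELS, N_POINTS = 8, 4, 4  # assumed Deformable-DETR defaults

class ToyDeformAttn(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.attention_weights = nn.Linear(d_model, N_HEADS * N_LEVELS * N_POINTS)
        self.value_proj = nn.Linear(d_model, d_model)
        self.output_proj = nn.Linear(d_model, d_model)

    def forward(self, query):
        N, Lq, _ = query.shape
        w = self.attention_weights(query).view(N, Lq, N_HEADS, N_LEVELS * N_POINTS)
        w = F.softmax(w, -1)  # this is the tensor the CUDA op consumes
        # ... sampling + aggregation happens in the CUDA kernel in the real module ...
        return self.output_proj(self.value_proj(query))

captured = {}

def grab_attn(module, inputs, output):
    # 'output' holds the pre-softmax logits; redo the reshape + softmax
    # exactly as forward() does to get the final attention weights.
    N, Lq, _ = output.shape
    w = F.softmax(output.view(N, Lq, N_HEADS, N_LEVELS * N_POINTS), -1)
    captured['attn'] = w.view(N, Lq, N_HEADS, N_LEVELS, N_POINTS)

m = ToyDeformAttn()
handle = m.attention_weights.register_forward_hook(grab_attn)
q = torch.randn(2, 10, 256)
_ = m(q)
handle.remove()
print(captured['attn'].shape)  # (batch, queries, heads, levels, points)
```

On the real model the hook would go on `model.transformer.decoder.layers[i].cross_attn.attention_weights`. Note that deformable attention has no dense query-to-key map: each query attends to a few sampled points per level, so to visualize a heatmap you would also need the sampling locations (hookable the same way via the `sampling_offsets` projection, if the module matches this pattern) and scatter the weights at those positions.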