Kevinz-code / CSRA

Official code of ICCV2021 paper "Residual Attention: A Simple but Effective Method for Multi-Label Recognition"
GNU Affero General Public License v3.0

Some questions about the vision transformer #16

Open sure7018 opened 2 years ago

sure7018 commented 2 years ago

Hello, after going through the code together with your paper, I have the following questions (about vit_csra):

In the code, the class token is not used as input to the final CSRA module, so why is the class token still set up in `VIT_CSRA`? Also, has the final MLP head that the vision transformer uses for classification simply been removed?

Kevinz-code commented 2 years ago

Hi, thanks for your interest in reproducing our paper.

"why set the class token in the code in "VIT_CSRA" ": setting class token at the beginning is the original structure of VIT, which is not in the range of our modification. What we do is to fit CSRA into the VIT models (e.g. use all the final patch embeddings instead of the final one single class token).

"Has the last MLP head used for classification in the vision transformer been deleted directly?", Yes. We use our CSRA module instead the original classification head.

Best, Authors.