Open · TCvivi opened this issue 2 years ago
Dear author, thank you for your code and paper. I noticed that in Table 1 of your paper ‘Multi-Head Self-Attention via Vision Transformer for Zero-Shot Learning’, the attribute dimensions for CUB and SUN appear to be swapped: CUB should be 312 (not 102) and SUN should be 102 (not 312).
Dear vivi, do you have any idea how to visualize the attention maps as shown in the paper?
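Not the paper's own visualization code, but a common way to produce such maps for a ViT is attention rollout (Abnar & Zuidema, 2020): average each layer's attention over heads, fold in the residual connection, multiply the per-layer maps through the network, and read off the CLS-token row as a heat map over image patches. A minimal NumPy sketch, assuming you have already extracted the per-layer attention matrices (e.g. via forward hooks):

```python
import numpy as np

def attention_rollout(attentions):
    """Fold per-layer attention maps into a single map from output
    tokens to input tokens (attention rollout).

    attentions: list of (tokens, tokens) row-stochastic arrays,
    one per transformer layer, already averaged over heads.
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for att in attentions:
        # Account for the residual connection, then renormalize rows.
        att = 0.5 * att + 0.5 * np.eye(n)
        att = att / att.sum(axis=-1, keepdims=True)
        rollout = att @ rollout
    return rollout

# Toy example (hypothetical shapes): 3 layers, 1 CLS token + 4 patch
# tokens, i.e. a 2x2 patch grid.
rng = np.random.default_rng(0)
layers = [rng.random((5, 5)) for _ in range(3)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]

rollout = attention_rollout(layers)
# Row 0 is the CLS token; columns 1..4 are the patches. Reshaped to
# the patch grid, this is the heat map you would upsample and overlay
# on the input image.
cls_to_patches = rollout[0, 1:].reshape(2, 2)
```

For a real model you would collect one `(tokens, tokens)` attention matrix per block (head-averaged softmax outputs), run the rollout, and resize `cls_to_patches` to the image resolution before overlaying it.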