hbdat / cvpr20_DAZLE

Fine-Grained Generalized Zero-Shot Learning via Dense Attribute-Based Attention accepted @ CVPR20
MIT License

att, original_att and w2v_att problems #19

Open zjrzjr666 opened 2 years ago

zjrzjr666 commented 2 years ago

Hi, it was a pleasure to read your paper. I have some questions about your code. In the first screenshot, I noticed that you extract three attribute matrices: att, original_att, and w2v_att. The third one, I understand, is the class attribute embedding obtained from word vectors — but your paper seems to use GloVe. Can you explain the difference and meaning of att and original_att? They both seem to have shape (312, 200). I also found class description files in the public CUB dataset — how do those differ from these? And why do you read three attribute-based matrices?
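For reference, here is a minimal sketch of how a GloVe-based attribute embedding like w2v_att is typically built: tokenize each attribute name and average the word vectors of its tokens. The attribute names and the tiny in-memory GloVe table below are illustrative placeholders, not the repo's actual data, and the averaging scheme is an assumption about this kind of pipeline, not a claim about DAZLE's exact code.

```python
import numpy as np

# Toy stand-in for a loaded GloVe table: word -> 3-d vector.
# In practice this would be parsed from e.g. glove.6B.300d.txt.
glove = {
    "has":    np.array([0.1, 0.2, 0.3]),
    "bill":   np.array([0.4, 0.1, 0.0]),
    "shape":  np.array([0.2, 0.5, 0.1]),
    "curved": np.array([0.0, 0.3, 0.4]),
}

def embed_attribute(name, table, dim=3):
    """Average the word vectors of an attribute phrase; zeros if all words are OOV."""
    vecs = [table[w] for w in name.split("_") if w in table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Hypothetical CUB-style attribute names (the real dataset has 312 of them).
attr_names = ["has_bill_shape_curved", "has_bill_shape_needle"]
w2v_att = np.stack([embed_attribute(n, glove) for n in attr_names])
print(w2v_att.shape)  # (2, 3); with 312 attributes and 300-d GloVe it would be (312, 300)
```

Note that "needle" is not in the toy table, so the second attribute is averaged over the three in-vocabulary words only.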

FrankLeeCode commented 2 years ago

Here's the explanation from the README file of xlsa17. Hope it helps.

att_splits.mat includes the following fields:
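If, as in the xlsa17 splits, att is simply a normalized copy of original_att, the relationship can be checked directly. The matrices below are random placeholders standing in for the fields loaded from att_splits.mat (e.g. via scipy.io.loadmat), and column-wise L2 normalization is an assumption to verify against the real file, not a statement of the authors' exact preprocessing.

```python
import numpy as np

# Placeholder for the (312 attributes x 200 classes) matrix that
# scipy.io.loadmat("att_splits.mat")["original_att"] would give for CUB.
rng = np.random.default_rng(0)
original_att = rng.uniform(0.0, 100.0, size=(312, 200))

# Assumed convention: scale each class column to unit L2 norm.
att = original_att / np.linalg.norm(original_att, axis=0, keepdims=True)

# Every column of att now has norm 1, while original_att keeps the raw
# continuous attribute strengths from the dataset annotations.
print(np.linalg.norm(att, axis=0)[:3])
```

Loading both fields from the actual .mat file and comparing them this way would confirm (or rule out) that att is just the normalized version of original_att.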

hbdat commented 2 years ago

Hi @FrankLeeCode ,

Thank you for the answer :)

Best, Dat