KaihuaTang / VQA2.0-Recent-Approachs-2018.pytorch

A PyTorch reimplementation of "Bilinear Attention Networks", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures", "Learning to Count Objects", and "Bottom-Up and Top-Down Attention" for Visual Question Answering 2.0
GNU General Public License v3.0

Details about baseline model #9

Closed PeterBishop0 closed 3 years ago

PeterBishop0 commented 3 years ago

In the baseline model, the class Attention uses x = v * q. However, in the BUTD paper, the feature vector v is concatenated with the question embedding q. Maybe I misunderstood it, but it would be nice if you could clear up my confusion. Thanks!
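To make the distinction concrete, here is a minimal sketch of the two fusion styles being contrasted: element-wise (Hadamard) product of projected features, as in the repo's baseline, versus concatenation of each region feature with the question embedding, as described in the BUTD paper. The class names, dimensions, and layer choices below are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn as nn

class ProductAttention(nn.Module):
    """Fusion by element-wise product (low-rank bilinear style): x = v_proj * q_proj."""
    def __init__(self, v_dim, q_dim, hid_dim):
        super().__init__()
        self.v_proj = nn.Linear(v_dim, hid_dim)
        self.q_proj = nn.Linear(q_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, v, q):
        # v: [batch, k, v_dim] region features; q: [batch, q_dim] question embedding
        vp = self.v_proj(v)                      # [batch, k, hid]
        qp = self.q_proj(q).unsqueeze(1)         # [batch, 1, hid], broadcast over k
        joint = vp * qp                          # Hadamard-product fusion
        logits = self.score(joint).squeeze(-1)   # [batch, k]
        return torch.softmax(logits, dim=1)      # attention weights over regions

class ConcatAttention(nn.Module):
    """Fusion by concatenation, as in the BUTD paper's attention description."""
    def __init__(self, v_dim, q_dim, hid_dim):
        super().__init__()
        self.fc = nn.Linear(v_dim + q_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, v, q):
        k = v.size(1)
        q_rep = q.unsqueeze(1).expand(-1, k, -1)              # [batch, k, q_dim]
        joint = torch.relu(self.fc(torch.cat([v, q_rep], 2))) # concat then nonlinearity
        logits = self.score(joint).squeeze(-1)                # [batch, k]
        return torch.softmax(logits, dim=1)
```

Both variants produce a softmax distribution over the k region features; only the fusion step differs.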

PeterBishop0 commented 3 years ago

Did you implement "Hadamard Product for Low-Rank Bilinear Pooling" instead?

PeterBishop0 commented 3 years ago

I see now: only the attention part implements the low-rank bilinear pooling in the attention mechanism.
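For reference, the low-rank bilinear attention from the "Hadamard Product for Low-Rank Bilinear Pooling" (MLB) paper scores each region by projecting both inputs, fusing with an element-wise product, and mapping the result to a scalar. This is a hedged sketch of that scoring step; the nonlinearity (tanh here) and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LowRankBilinearAttention(nn.Module):
    """Attention logits via low-rank bilinear pooling: p^T (tanh(U^T v) * tanh(V^T q))."""
    def __init__(self, v_dim, q_dim, rank):
        super().__init__()
        self.U = nn.Linear(v_dim, rank)  # low-rank projection of region features
        self.V = nn.Linear(q_dim, rank)  # low-rank projection of question embedding
        self.p = nn.Linear(rank, 1)      # maps the fused vector to a scalar logit

    def forward(self, v, q):
        # v: [batch, k, v_dim]; q: [batch, q_dim]
        fused = torch.tanh(self.U(v)) * torch.tanh(self.V(q)).unsqueeze(1)
        return torch.softmax(self.p(fused).squeeze(-1), dim=1)  # [batch, k]
```

The "low-rank" part is that the bilinear interaction between v and q is factored through a rank-r bottleneck instead of a full v_dim x q_dim weight tensor, which is what keeps the parameter count manageable in the attention mechanism.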