hengyuan-hu / bottom-up-attention-vqa

An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
GNU General Public License v3.0
754 stars · 181 forks

About Visual Genome #34

Open silverbulletmdc opened 6 years ago

silverbulletmdc commented 6 years ago

Hi, I noticed that you use the pre-trained features from the original repo, which were trained using Visual Genome. But you said your repo was trained without extra information from Visual Genome. What does that mean? How can you say you don't use VG while using features trained on VG?

hengyuan-hu commented 6 years ago

I can't remember whether the feature extractor was trained using Visual Genome, but we don't use questions and answers from Visual Genome.


silverbulletmdc commented 6 years ago

Thanks! The Faster R-CNN model is modified to also predict object attributes (the attribute annotations come from Visual Genome), and its features are then used as the visual features. That's how the original paper uses VG. I have another question: I noticed there are spatial features in the features dataset, which have 6 dimensions, but you don't use them. I don't know the exact meaning of each dimension, but I think ignoring the spatial information could make some questions hard to answer, like "What is on the desk?". Have you tried using these features to improve your results?
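For reference, one plausible layout of those 6 spatial dimensions (this is an assumption about the format, not something confirmed in this thread) is the normalized box corners plus normalized width and height:

```python
import numpy as np

def spatial_features(boxes, im_w, im_h):
    """Build hypothetical 6-d spatial features from (x0, y0, x1, y1) boxes.

    Assumed layout: [x0/W, y0/H, x1/W, y1/H, width/W, height/H],
    i.e. corners and box size, each normalized by the image dimensions.
    """
    boxes = np.asarray(boxes, dtype=np.float32)
    x0, y0, x1, y1 = boxes.T
    return np.stack([
        x0 / im_w, y0 / im_h,          # top-left corner, normalized
        x1 / im_w, y1 / im_h,          # bottom-right corner, normalized
        (x1 - x0) / im_w,              # box width, normalized
        (y1 - y0) / im_h,              # box height, normalized
    ], axis=1)
```

A 50×100 box at the origin of a 100×200 image would map to `[0, 0, 0.5, 0.5, 0.5, 0.5]` under this layout.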

hengyuan-hu commented 6 years ago

Yeah, we tried to use the spatial features (x0, y0, height, width); that's why they are in the dataset. But it didn't help at all no matter what we tried, which is why they are not used.


silverbulletmdc commented 6 years ago

Thanks for your reply! Your code is much simpler and easier to understand than the original repo, so I learned a lot. Great job!