peteanderson80 / bottom-up-attention

Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
http://panderson.me/up-down-attention/
MIT License

What do the pretrained features contain? #59

Open ghost opened 5 years ago

ghost commented 5 years ago

Hi, thanks for bottom-up attention!

I'm trying to download the pretrained features, but the download has failed many times. Before I keep trying, I'd like to know: what do the pretrained features actually contain?

Region positions? Feature maps? Or labels?

I hope I made myself clear.

LeeDoYup commented 5 years ago

The pretrained features are important because they contain, for each image, a feature vector for every detected object region along with the location of its bounding box.
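For reference, the released `.tsv` files follow the layout described in the repo README: one row per image with `image_id`, `image_w`, `image_h`, `num_boxes`, plus base64-encoded `boxes` and `features` arrays. Below is a minimal reading sketch, assuming that layout and the default 2048-d pooled region features; adjust the dimensions if you extracted features with a different config.

```python
import base64
import csv
import sys

import numpy as np

# The TSV rows can be very long, so raise the csv field size limit.
csv.field_size_limit(sys.maxsize)

FIELDNAMES = ['image_id', 'image_w', 'image_h', 'num_boxes', 'boxes', 'features']

def read_features(tsv_path):
    """Yield one dict per image with decoded boxes and region feature vectors."""
    with open(tsv_path, 'r') as f:
        reader = csv.DictReader(f, delimiter='\t', fieldnames=FIELDNAMES)
        for item in reader:
            num_boxes = int(item['num_boxes'])
            # Bounding boxes: (num_boxes, 4) float32 array of x1, y1, x2, y2.
            boxes = np.frombuffer(
                base64.b64decode(item['boxes']), dtype=np.float32
            ).reshape(num_boxes, 4)
            # Region features: one 2048-d pooled vector per detected box
            # (assumes the default feature dimensionality of the release).
            features = np.frombuffer(
                base64.b64decode(item['features']), dtype=np.float32
            ).reshape(num_boxes, 2048)
            yield {
                'image_id': int(item['image_id']),
                'image_w': int(item['image_w']),
                'image_h': int(item['image_h']),
                'boxes': boxes,
                'features': features,
            }
```

So there are no class or attribute labels stored in the TSVs themselves, only the box coordinates and the per-region feature vectors.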

Rushing-Life commented 2 years ago

> The pretrained features are important because they contain, for each image, a feature vector for every detected object region along with the location of its bounding box.

Hi, do you know where the attribute classifier code is?