jiasenlu / HieCoAttenVQA


The accuracy is weird #3

Closed chingyaoc closed 8 years ago

chingyaoc commented 8 years ago

Hi all, I ran this code with alternating attention and VGG features, and the output accuracy is weird. Here is what it looks like. It's supposed to be 60.5 according to the paper. Btw, in the step for downloading the image model, I didn't see an image_model folder.
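For reference, the 60.5 figure in the paper is computed with the standard VQA evaluation metric: a predicted answer gets credit `min(#annotators who gave that answer / 3, 1)`, averaged over questions. A minimal sketch of that scoring rule (simplified; the official evaluator also normalizes punctuation and averages over annotator subsets):

```python
from collections import Counter

def vqa_accuracy(pred, human_answers):
    """Score one prediction against the 10 human answers (simplified VQA metric).

    An answer matching at least 3 annotators gets full credit; fewer
    matches earn partial credit of matches/3.
    """
    counts = Counter(a.lower().strip() for a in human_answers)
    return min(counts[pred.lower().strip()] / 3.0, 1.0)

# 4 of 10 annotators said "yes" -> full credit
print(vqa_accuracy("yes", ["yes"] * 4 + ["no"] * 6))      # 1.0
# only 2 of 10 said "maybe" -> partial credit 2/3
print(vqa_accuracy("maybe", ["maybe"] * 2 + ["no"] * 8))  # ~0.667
```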

rohit789123 commented 8 years ago

There is no image_model folder

Thanks

chingyaoc commented 8 years ago

Thanks for the reply. Btw, is there code for attention visualization? The readme mentions it, but I couldn't find it.

idansc commented 8 years ago

It seems like there is a bug in the model, since it almost always produces "yes" as the answer. Not sure what the bug is yet.

Edit: remove the line that restricts the number of samples to 5000 in the VQA prepro script.
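To illustrate the fix above, here is a hypothetical sketch of the kind of debug cap being described (the actual line and variable names in the repo's prepro script will differ): if the preprocessing truncates the annotation list to 5000 samples, training and evaluation run on a tiny subset, which would explain the degenerate accuracy.

```python
def load_annotations(anns, limit=None):
    """Return the annotation list, optionally truncated (hypothetical helper).

    The buggy version effectively hard-coded limit=5000 as a leftover
    debug shortcut; leaving limit=None keeps the full dataset.
    """
    return anns if limit is None else anns[:limit]

full = list(range(200000))                        # stand-in for real annotations
print(len(load_annotations(full)))                # full data (fixed behavior)
print(len(load_annotations(full, limit=5000)))    # truncated (buggy behavior)
```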