**archanray** opened this issue 5 years ago · Status: Open
@LousyLory Thanks for your interest :)
Hi @Cadene, thanks for the response!
The following are the values I got on test-dev:

| | without-VG | with-VG |
|---|---|---|
| yes/no | 83.75 | 82.95 |
| number | 49.3 | 49.21 |
| others | 56.93 | 57.0 |
| overall | 67.11 | 66.8 |
I understand that these differences are within error margins, but the second model (with-VG) under-performs in nearly every segment of the VQA challenge on test-dev data (all but "others").
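For reference, a quick sanity check on the per-category deltas implied by the numbers above (nothing beyond the reported scores is assumed):

```python
# Per-category accuracies from the table above (VQA test-dev).
without_vg = {"yes/no": 83.75, "number": 49.3, "others": 56.93, "overall": 67.11}
with_vg = {"yes/no": 82.95, "number": 49.21, "others": 57.0, "overall": 66.8}

# Delta = with-VG minus without-VG; negative means VG pretraining hurt that category.
deltas = {k: round(with_vg[k] - without_vg[k], 2) for k in without_vg}
print(deltas)
# → {'yes/no': -0.8, 'number': -0.09, 'others': 0.07, 'overall': -0.31}
```

So the drop is concentrated in yes/no and overall, while "others" actually improves slightly.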
@LousyLory Thanks for this info.
I am sorry for the issue. We developed on PyTorch v0.5 and released a clean codebase on PyTorch v1.x. I will make sure that our results can be reproduced easily on this version as well. I will keep you updated.
While I focus on this issue, you could use the pretrained models from https://github.com/Cadene/block.bootstrap.pytorch as a baseline if you need to.
Cool thanks!!
Hi, I trained using Visual Genome, but the overall performance on VQA went down. I used the supplied yaml as the config; does anything need to be changed?