hengyuan-hu / bottom-up-attention-vqa

An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
GNU General Public License v3.0

Validation accuracy? #10

Closed hamidpalangi closed 6 years ago

hamidpalangi commented 6 years ago

Hi all,

Thanks for the great work. I'm trying to reproduce the 63.58 validation accuracy, but the accuracy in the last epoch (epoch 29) is 63.17. I observe higher accuracy in earlier epochs, e.g., 63.51 at epoch 14. Do you do early stopping, or am I doing something wrong that keeps me from getting 63.58 in the last epoch?

Thanks! Hamid
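
(For illustration, a minimal sketch of keeping the best-scoring checkpoint across epochs instead of reporting only the final one; this is not the repository's training script, and the callables passed in are hypothetical placeholders.)

```python
# Illustrative sketch only -- not this repo's train.py. It keeps the checkpoint
# with the best validation accuracy instead of the last epoch's weights.
import torch

def train(model, train_one_epoch, evaluate, num_epochs=30, ckpt='best_model.pth'):
    best_acc, best_epoch = 0.0, -1
    for epoch in range(num_epochs):
        train_one_epoch(model)        # run one training epoch (hypothetical callable)
        val_acc = evaluate(model)     # compute validation accuracy (hypothetical callable)
        if val_acc > best_acc:        # keep the best-scoring checkpoint so far
            best_acc, best_epoch = val_acc, epoch
            torch.save(model.state_dict(), ckpt)
    print('best val acc %.2f at epoch %d' % (best_acc, best_epoch))
    return best_acc, best_epoch
```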

hengyuan-hu commented 6 years ago

It overfits, and that is normal for this task and the network we used, as indicated in the original paper. Any number around 63.5 indicates a successful training run. Learning is not deterministic, and random factors within PyTorch may affect the result across runs or under different versions.
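
(For illustration, a minimal sketch of fixing random seeds to reduce run-to-run variance; this narrows, but does not fully eliminate, PyTorch's non-determinism, and it is not code from this repository.)

```python
# Minimal seeding sketch to make runs more comparable. Some CUDA/cuDNN kernels
# can remain non-deterministic depending on the PyTorch version.
import random
import numpy as np
import torch

def set_seed(seed=1111):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # prefer deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable autotuning, which can vary across runs
```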


hamidpalangi commented 6 years ago

Makes sense, thanks for the clarification.