hengyuan-hu / bottom-up-attention-vqa

An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
GNU General Public License v3.0

Reported accuracy of Implemented Model #38

Closed bcsaldias closed 5 years ago

bcsaldias commented 5 years ago

Hi!

The README says that running the code with the default parameters reproduces the results in the table. However, I ran the code on an AWS p2.16xlarge machine and got:


    train_loss: 3.83, score: 48.52
    eval score: 45.38 (92.66)
epoch 22, time: 325.48
    train_loss: 3.80, score: 48.76
    eval score: 45.37 (92.66)
epoch 23, time: 329.43
    train_loss: 3.78, score: 49.02
    eval score: 45.27 (92.66)
epoch 24, time: 329.51
    train_loss: 3.75, score: 49.34
    eval score: 45.09 (92.66)
epoch 25, time: 326.05
    train_loss: 3.73, score: 49.50
    eval score: 45.40 (92.66)
epoch 26, time: 327.16
    train_loss: 3.71, score: 49.75
    eval score: 45.43 (92.66)
epoch 27, time: 326.92
    train_loss: 3.69, score: 49.97
    eval score: 45.03 (92.66)
epoch 28, time: 328.04
    train_loss: 3.67, score: 50.22
    eval score: 45.35 (92.66)
epoch 29, time: 332.07
    train_loss: 3.66, score: 50.44
    eval score: 45.17 (92.66)
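
For reference, the best validation result in the run above (epoch 26, eval score 45.43) is well below what the README reports. A minimal sketch in Python, assuming a log in the format pasted above (the script and function names here are made up for illustration, not part of this repo), that pulls the best eval score and its epoch out of such a log:

```python
import re
import sys

# Scan a training log in the format shown above and report the best
# "eval score" together with the epoch it came from.
epoch_re = re.compile(r"epoch\s+(\d+),")
score_re = re.compile(r"eval score:\s*([\d.]+)")

def best_eval_score(lines):
    current_epoch, best = None, None
    for line in lines:
        m = epoch_re.search(line)
        if m:
            current_epoch = int(m.group(1))
        m = score_re.search(line)
        if m:
            score = float(m.group(1))
            if best is None or score > best[1]:
                best = (current_epoch, score)
    return best

if __name__ == "__main__":
    # Usage: python best_eval.py train.log
    with open(sys.argv[1]) as f:
        print(best_eval_score(f))  # for the log above this prints (26, 45.43)
```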

Any thoughts on why this is happening?

Thanks, Belen

Einstone-rose commented 4 years ago

Hello, I am running into the same problem. Why did you close this issue? Did you solve it?

vajjasaikiran commented 3 years ago

Hi @Einstone-rose @bcsaldias, I am also facing the same issue. Were you able to solve this?