fartashf / vsepp

PyTorch Code for the paper "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives"
Apache License 2.0

Finish validation without running model.train() #4

Closed: Wangzhpp closed this issue 6 years ago

Wangzhpp commented 6 years ago

The code at lines 176 and 177 of train.py:

if model.Eiters % opt.val_step == 0:  
    validate(opt, val_loader, model)

will run validation and call model.val_start(), which puts the batch_norm and dropout layers of img_encoder and txt_encoder into evaluation mode.

However, model.train_start() is not called again until the next epoch, so the dropout and batch_norm layers may only be active during the first val_step steps of each epoch...

Is this a bug that needs to be fixed?
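
A minimal sketch of the fix I have in mind (an assumption on my part, not the current code), re-entering training mode right after validation:

if model.Eiters % opt.val_step == 0:
    validate(opt, val_loader, model)
    # switch batch_norm and dropout back to training mode for the rest of the epoch
    model.train_start()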

fartashf commented 6 years ago

Good catch. Thank you. I'm checking whether it has any effect on the results, but I don't expect it to have much. The image models used are pre-trained and kept fixed unless fine-tuning is enabled, so batch norm should be mostly fine. But dropout could help by regularizing the model.
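
To illustrate why the mode matters, here is a self-contained PyTorch sketch (generic PyTorch behaviour, not this repo's code): dropout is only active in training mode and acts as the identity in eval mode:

import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)

drop.train()   # training mode: entries are zeroed at random, survivors scaled by 1/(1-p)
print(drop(x))

drop.eval()    # eval mode: dropout is a no-op and the output equals the input
print(drop(x))

Batch norm behaves analogously: in training mode it normalizes with batch statistics and updates its running averages; in eval mode it uses the stored running averages.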

xh-liu commented 6 years ago

Hi, I also found this problem. In my experiment, fixing this bug causes a performance drop when fine-tuning the CNN is enabled, which is strange...

fartashf commented 6 years ago

Thanks @xh-liu for reporting. I'm reopening this issue as I'm observing non-negligible changes in the results.

fartashf commented 6 years ago

Based on the reruns, the behaviour is noticeably affected by enabling training mode. I reverted the change but added a flag that enables training mode after validation. The flag is disabled by default so that the default behaviour reproduces the published results.
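
A sketch of what the flag-gated fix could look like (the flag name reset_train and its wiring are my assumptions; see train.py for the actual implementation):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--reset_train', action='store_true',
                    help='Switch the model back to training mode after each validation.')
opt = parser.parse_args()

# later, inside the training loop (model, validate, val_loader as in train.py):
if model.Eiters % opt.val_step == 0:
    validate(opt, val_loader, model)
    if opt.reset_train:
        # disabled by default so a default run reproduces the published results
        model.train_start()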

Regarding why the results are affected, we only have guesses. Results for VGG19, which only has dropout, are affected more than those for ResNet, which only has batch norm. In general, dropout is not necessarily needed for all models; in particular, when learning joint embeddings it might make the optimization more difficult. Maybe the amount of data matters. In any case, we will investigate further.

Thanks again for reporting.

fartashf commented 5 years ago

Revisiting this issue: the test loss appears to be lower when the training mode is reset correctly after each validation than when it is only reset at the beginning of each epoch. There is a difference in performance as measured by R@K, but neither setting is strictly better across all values of K. The master branch stays on the PyTorch 0.2 compatible code to keep the results reproducible. PyTorch >0.4 forces us to fix this issue, so there is a new branch for compatibility with PyTorch 0.4.
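
For readers unfamiliar with the metric: R@K (Recall@K) is the percentage of queries whose ground-truth match is ranked within the top K retrieved items. A minimal sketch (a hypothetical helper, not the repo's evaluation code):

import numpy as np

def recall_at_k(ranks, k):
    # ranks[i] is the 0-based rank of query i's ground-truth item
    ranks = np.asarray(ranks)
    return 100.0 * np.mean(ranks < k)

print(recall_at_k([0, 3, 12, 1], k=5))  # 75.0: three of four matches rank in the top 5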