airsplay / vokenization

PyTorch code for EMNLP 2020 Paper "Vokenization: Improving Language Understanding with Visual Supervision"
MIT License

About the finetune accuracy #9

Open yxgnahz opened 2 years ago

yxgnahz commented 2 years ago

Hi, thanks for your interesting work. I ran into a problem when finetuning the model. I loaded the released pretrained BERT_base model and finetuned it on GLUE with the given finetuning scripts, but I got only 69.08 on QQP and 31.82 on MNLI. So I have two questions: (1) Is the GLUE performance reported in the paper exactly the performance after three epochs of finetuning, or did you pick the highest score during finetuning? (2) For the pretrained model, did you use the checkpoint from the last iteration, or did you pick one during the pretraining process? Thanks in advance.

TobiasLee commented 2 years ago

Same issue; did you find a way out?

airsplay commented 2 years ago

I think Xinyun has found the right configuration to reproduce the results. To debug it, please try the following:

  1. Load the original BERT model and check whether its results are correct.
  2. Check whether all model weights are actually loaded into the model by reading the log (see the sketch below). HF sometimes changes the model API, so the weight names can differ.
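A minimal sketch for check 2, assuming an HF-style BERT class and a hypothetical checkpoint path: load the state dict with `strict=False` and print the missing/unexpected keys directly, rather than hunting for the warning in a long training log.

```python
# Sketch: verify that every pretrained weight lands in the model.
# The checkpoint path and key prefixes here are assumptions.
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
state_dict = torch.load("path/to/released_bert_base.pth", map_location="cpu")

result = model.load_state_dict(state_dict, strict=False)
print("missing keys:", result.missing_keys)        # expected by the model, absent from the checkpoint
print("unexpected keys:", result.unexpected_keys)  # present in the checkpoint, matched nothing
# If most encoder weights show up as missing, the checkpoint keys likely
# carry a different prefix (e.g. "bert.") than the current HF model class.
```
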
TobiasLee commented 2 years ago

@airsplay Thanks for your reply. I decreased the learning rate from 1e-4 to 5e-5, and the results on MNLI are now correct.

airsplay commented 2 years ago

@TobiasLee Thanks for checking! Just for clarification: do you mean increasing 1e-4 to 5e-4, or decreasing 1e-4 to 5e-5?

TobiasLee commented 2 years ago

Oops, I made a typo. The original LR used in the paper is 1e-4, and I actually decreased it to 5e-5 for stable results on MNLI.
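For concreteness, a minimal sketch of the fix (the optimizer setup here is the standard HF/BERT finetuning recipe, not necessarily the repo's exact config):

```python
# Sketch of the LR fix discussed above: 1e-4 (the paper's default) was
# unstable on MNLI in this thread; 5e-5 gave correct results.
# The model class and num_labels here are assumptions.
from torch.optim import AdamW
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
optimizer = AdamW(model.parameters(), lr=5e-5)  # was lr=1e-4
```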
