alontalmor / MultiQA

The hyper-param tuning used in your paper #20

Open danyaljj opened 4 years ago

danyaljj commented 4 years ago

I have tried your code for multiple datasets:

> python multiqa.py train --datasets SQuAD1-1  --cuda_device 0,1
> python multiqa.py train --datasets NewsQA  --cuda_device 0,1
> python multiqa.py train --datasets SearchQA  --cuda_device 0,1

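I did not pass any hyper-parameters on the command line, so I assume the defaults from the experiment config were used. For reference, this is roughly how I would expect to override them, given that MultiQA appears to build on AllenNLP; the config path and key names below are my guesses, not necessarily what this repo uses:

```python
# Hypothetical sketch, not the MultiQA CLI itself: I assume hyper-parameters
# such as learning rate, batch size, and number of epochs live in the
# AllenNLP experiment config and can be overridden at train time.
import json
from allennlp.commands.train import train_model_from_file

# JSON string that selectively overrides keys in the config
# (key names are assumptions and may differ in this repo).
overrides = json.dumps({
    "trainer": {"optimizer": {"lr": 3e-5}, "num_epochs": 2},
    "iterator": {"batch_size": 8},
})

train_model_from_file(
    parameter_filename="models/MultiQA_BERTBase.jsonnet",  # assumed config path
    serialization_dir="models/SQuAD1-1_tuned",
    overrides=overrides,
)
```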
I then ran the corresponding evaluation:

> python multiqa.py evaluate --model model --datasets SQuAD1-1 --cuda_device 0  --models_dir  'models/SQuAD1-1/'
> python multiqa.py evaluate --model model --datasets NewsQA --cuda_device 0  --models_dir  'models/NewsQA/'
> python multiqa.py evaluate --model model --datasets SearchQA --cuda_device 0  --models_dir  'models/SearchQA/'

I am getting relatively bad scores (EM/F1):

which suggests that I am not using the proper hyper-parameters. Do you think that explains it? If so, I would appreciate more clarity on this sentence from your paper: "We emphasize that in all our experiments we use exactly the same training procedure for all datasets, with minimal hyper-parameter tuning." In particular, what does "minimal hyper-parameter tuning" involve in practice?
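For reference, the EM/F1 numbers I mention above are computed the standard SQuAD way; a minimal sketch of that metric (my own paraphrase of the usual evaluation logic, not code from this repository):

```python
# Standard SQuAD-style EM/F1: normalize answers, then exact match and
# token-overlap F1, taking the max over the gold answers for each question.
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_over_golds(metric_fn, prediction: str, golds: list) -> float:
    return max(metric_fn(prediction, g) for g in golds)
```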