neverneverendup closed this issue 4 years ago
We used 1 P100 GPU with 16 GB of VRAM. Training is quite fast on the SciFact dataset; all 20 epochs finish in about an hour. You may want to try lowering the batch size on your GPU, or train on a cloud instance with more memory.
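If lowering the batch size alone hurts convergence, gradient accumulation is a common workaround: run several small micro-batches, accumulate their gradients, and step the optimizer once, which keeps the effective batch size while cutting per-step memory. This is a minimal illustrative sketch of the idea (plain Python, not the repo's actual training script), showing that averaging micro-batch gradients reproduces the full-batch gradient:

```python
# Gradient accumulation sketch (hypothetical, not from the scifact repo):
# splitting a batch into micro-batches and summing their weighted
# gradients reproduces the full-batch gradient, so an 8-example
# effective batch can be trained with 2-example micro-batches
# that need far less GPU memory per step.

def grad_mse(w, xs, ys):
    # Gradient d/dw of mean((w*x - y)^2) over the given batch.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.8, 12.1, 13.9, 16.2]
w = 0.5

# Full-batch gradient (what a batch size of 8 would compute at once).
full = grad_mse(w, xs, ys)

# Accumulate over micro-batches of 2, weighting each micro-batch
# gradient by its share of the effective batch before summing.
micro = 2
acc = 0.0
for i in range(0, len(xs), micro):
    acc += grad_mse(w, xs[i:i + micro], ys[i:i + micro]) * micro / len(xs)

assert abs(full - acc) < 1e-9  # the two gradients match
```

In a real PyTorch loop the same effect comes from calling `loss.backward()` on each micro-batch (gradients add up in `.grad`) and calling `optimizer.step()` only every few micro-batches.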
Table 1 in the paper reports metrics on the test set. We don't currently release the test set labels, so the pipeline runs on the dev set; that's why the numbers don't match exactly. You are seeing metrics on the dev set.
Thanks😊! I got it.
Dear author, thanks very much for your excellent work. I want to train a model with the default parameter settings in 'rationale_selection_scifact_train.py', but I hit an OOM error. I used a single GTX 1080 Ti for training, with the batch size set to 8. Could you please tell me your hardware settings and the time it takes to train a single model? By the way, when testing the pre-trained model with the full_example script, the results I obtained differ slightly from those in Table 1. Is that normal?
Thanks~