Closed · Brandonnogithub closed this issue 3 years ago
Hi, I tried ProphetNet on a 1080Ti with 11 GB of memory. If I set the parameter --fp16, OOM occurs.

However, if I remove --fp16, only 7.7 GB of memory is used with batch_size 1. Please try removing --fp16 and tell us whether it works.
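For reference, the workaround above amounts to dropping the --fp16 flag from the fairseq training command. A minimal sketch, assuming a fairseq-based setup like the one in the ProphetNet repo; the paths, --user-dir value, and hyperparameters here are placeholders, not the repo's exact values, so take the real ones from the ProphetNet README:

```shell
# Sketch of a training launch with --fp16 removed, so training runs in
# fp32 and fits in ~7.7 GB on an 11 GB card at batch size 1.
# DATA_DIR, SAVE_DIR, and --user-dir are placeholders (assumptions).
fairseq-train "$DATA_DIR" \
  --user-dir ./prophetnet \
  --max-sentences 1 \
  --save-dir "$SAVE_DIR"
  # note: no --fp16 flag here
```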
It works. Thanks. That's really amazing.
Hi @Brandonnogithub, I would like to run ProphetNet on Question Generation task. Could you please tell me how to proceed.
Hi, you just need to follow the README (https://github.com/microsoft/ProphetNet/tree/master/ProphetNet_Code) and change the dataset to the SQuAD dataset.
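Swapping in SQuAD would mean preparing parallel source/target text files and binarizing them for fairseq. A rough sketch; the train.src = passage-plus-answer, train.tgt = question pairing and the file paths are assumptions here, so follow the repo's own data-format instructions:

```shell
# Sketch: after converting SQuAD into parallel text files
# (train.src / train.tgt, valid.src / valid.tgt -- layout assumed),
# binarize them for fairseq with:
fairseq-preprocess \
  --source-lang src --target-lang tgt \
  --trainpref squad_qg/train --validpref squad_qg/valid \
  --destdir processed_squad \
  --workers 8
```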
Thank you for your reply. In the meantime, I found the answer to my question: https://huggingface.co/microsoft/prophetnet-large-uncased-squad-qg
I tried to run ProphetNet on a 2080Ti (11 GB memory) for the Question Generation task. However, even if I set max-sentences to 1, it still runs out of memory. So I wonder whether it is possible to run this model on an 11 GB GPU, given that it has a similar structure and size to other pretrained models like BERT and UniLM, which I can run on 11 GB GPUs.