amazon-science / fact-check-summarization

MIT License

OOM problem #4

Closed gaozhiguang closed 3 years ago

gaozhiguang commented 3 years ago

Hi, when I re-ran the experiment, I hit an OOM problem:

2021-07-04 14:54:41 | WARNING | fairseq.trainer | OOM: Ran out of memory with exception: CUDA out of memory. Tried to allocate 1.51 GiB (GPU 0; 10.92 GiB total capacity; 8.34 GiB already allocated; 319.00 MiB free; 10.08 GiB reserved in total by PyTorch)

My GPU has 11 GB of memory. Where can I change the batch size, or something else, to get it running?

01070 commented 3 years ago

Hi, I have a problem here too. Can you help me? I don't know how to run QAGen for step 1, "Generating question and answer pairs from summaries".

gaozhiguang commented 3 years ago

Sorry, I'm not clear on that either; maybe you should ask the authors.

fnan commented 3 years ago

max-tokens is used to control the batch size; it is set to 1024. You can try lowering it, but this may lead to input sequence truncation, since the maximum sequence length for BART is 1024.
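As a sketch, lowering `--max-tokens` and compensating with gradient accumulation via `--update-freq` (both are standard fairseq flags) keeps the effective batch size roughly the same on a smaller GPU. The script name and other arguments here are placeholders; use the exact training command from this repo's README:

```shell
# Hypothetical invocation -- substitute the repo's actual fairseq-train command.
# Halving --max-tokens roughly halves per-GPU memory use;
# --update-freq 2 accumulates gradients over 2 steps to keep
# the effective batch size comparable.
fairseq-train path/to/data-bin \
    --max-tokens 512 \
    --update-freq 2 \
    ... # remaining arguments as in the repo's training instructions
```

Note that with `--max-tokens 512`, any source or target sequence longer than 512 tokens will be truncated, which can hurt summarization quality on long documents.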