utterworks / fast-bert

Super easy library for BERT based NLP models

CUDA out of memory when running the code from example #286

Open aleh4d opened 3 years ago

aleh4d commented 3 years ago

I tried to run the example code from the fast-bert page, but got a GPU out-of-memory error:

Exception has occurred: RuntimeError CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 6.00 GiB total capacity; 4.35 GiB already allocated; 84.91 MiB free; 4.44 GiB reserved in total by PyTorch)

How can I make fast-bert use less GPU memory? Which parameters should I set?

TingNLP commented 3 years ago

Modify max_seq_length and batch_size_per_gpu. You can refer to https://github.com/google-research/bert for its notes on out-of-memory issues.

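For anyone landing here: in fast-bert those two knobs are arguments to BertDataBunch, so lowering them in the README example looks roughly like the sketch below. The paths, file names, and exact values are placeholders and a starting point, not official settings.

```python
from pathlib import Path
import logging
import torch

from fast_bert.data_cls import BertDataBunch
from fast_bert.learner_cls import BertLearner
from fast_bert.metrics import accuracy

logger = logging.getLogger()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder paths -- adjust to your own layout.
DATA_PATH = Path("./data/")     # contains train.csv / val.csv
LABEL_PATH = Path("./labels/")  # contains labels.csv
OUTPUT_DIR = Path("./output/")

databunch = BertDataBunch(
    DATA_PATH,
    LABEL_PATH,
    tokenizer="bert-base-uncased",
    train_file="train.csv",
    val_file="val.csv",
    label_file="labels.csv",
    text_col="text",
    label_col="label",
    batch_size_per_gpu=8,    # default is 16; lower this first
    max_seq_length=128,      # the example uses 512; memory grows quickly with sequence length
    multi_gpu=False,
    multi_label=False,
    model_type="bert",
)

learner = BertLearner.from_pretrained_model(
    databunch,
    pretrained_path="bert-base-uncased",
    metrics=[{"name": "accuracy", "function": accuracy}],
    device=device,
    logger=logger,
    output_dir=OUTPUT_DIR,
    is_fp16=True,            # mixed precision also cuts memory (older versions need apex for this)
    multi_gpu=False,
    multi_label=False,
)
```

If it still runs out of memory, keep halving batch_size_per_gpu before touching anything else.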

JayDew commented 3 years ago

Could you please share what values worked for you? I also cannot run the examples.
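It depends on the card, so rather than copying someone else's numbers it may help to check how much memory you actually have before picking values. The snippet below is plain PyTorch, nothing fast-bert-specific:

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # total capacity vs. what PyTorch currently holds
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB total")
    print(f"allocated: {torch.cuda.memory_allocated(0) / 1024**3:.2f} GiB")
    print(f"reserved:  {torch.cuda.memory_reserved(0) / 1024**3:.2f} GiB")
else:
    print("No CUDA device visible")
```

As a rough reference, the out-of-memory section of the google-research/bert README suggests BERT-Base with max_seq_length=128 fits a batch size of about 32 on a 12 GiB card, so the 6 GiB card in the original error will need something noticeably smaller.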

Elzic6 commented 3 years ago

The only thing that works for me each time I have this issue is to do a "Factory reset runtime" (in Colab).
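If you are in Colab or Jupyter, a lighter-weight thing to try before a full factory reset is to drop the objects from the failed run and let PyTorch release its cache. This is only a sketch; the learner and databunch names are assumptions based on the example above.

```python
import gc
import torch

# Drop references to the large objects from the failed run.
# (Names are whatever you used -- learner/databunch here are assumptions.)
del learner
del databunch

gc.collect()               # let Python actually free them
torch.cuda.empty_cache()   # let PyTorch return cached blocks to the driver

print(f"allocated after cleanup: {torch.cuda.memory_allocated(0) / 1024**3:.2f} GiB")
```

If memory still shows as allocated afterwards, something is still holding a reference (for example, an exception traceback kept alive by the notebook), and restarting the runtime remains the reliable fallback.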