JohnGiorgi closed this issue 4 years ago
I am facing the same issue. When I use `WarmupLinearSchedule`, I get a CUDA out-of-memory error at the 7th training epoch.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Running `import gc`, then `gc.collect()`, and emptying the GPU's cache should solve the issue temporarily. See #1742
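Inside a cross-validation loop, that suggestion amounts to something like the following sketch. The per-fold setup is elided here (the original snippet is not shown in this thread), and the `try`/`except` around the `torch` import is only there so the snippet runs even where PyTorch is not installed:

```python
import gc

for fold in range(10):
    model, optimizer, scheduler = ..., ..., ...  # per-fold setup (elided)
    # ... train on this fold ...

    # Workaround from #1742: drop this fold's references, force a
    # garbage-collection pass, then release PyTorch's cached GPU blocks.
    del model, optimizer, scheduler
    gc.collect()
    try:
        import torch
        torch.cuda.empty_cache()  # no-op if CUDA was never initialized
    except ImportError:
        pass  # PyTorch unavailable; nothing to release
```

`gc.collect()` matters because the scheduler, optimizer, and model can form reference cycles that plain `del` does not break immediately; `empty_cache()` then returns the freed blocks from PyTorch's caching allocator to the driver.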
❓ Questions & Help
I am facing a strange issue when using the schedulers available in this library within a cross-validation loop. Basically, in each fold I initialize a new model, optimizer, and scheduler. GPU memory accumulates across folds until I eventually hit a CUDA out-of-memory error.
The simplest example I could come up with to reproduce the error is:
This will run until it (very quickly) uses up all 12 GB on my Titan XP GPU. To make sure the scheduler's initialization was truly the culprit, I also tested:
And did not see the memory accumulation or OOM error.
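Since the original reproduction snippet is not shown above, here is a hypothetical sketch of such a cross-validation loop. The model name, fold count, learning rate, and the use of the newer `get_linear_schedule_with_warmup` API (the successor to `WarmupLinearSchedule`) are my assumptions, not the original code:

```python
def run_cross_validation(num_folds=10, steps_per_fold=100):
    # Hypothetical reconstruction -- not the original snippet.
    import torch
    from transformers import BertModel, get_linear_schedule_with_warmup

    for fold in range(num_folds):
        model = BertModel.from_pretrained("bert-base-uncased").cuda()
        optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
        scheduler = get_linear_schedule_with_warmup(
            optimizer, num_warmup_steps=0, num_training_steps=steps_per_fold
        )
        # ... training on this fold would go here ...
        # Without explicit cleanup (del + gc.collect() +
        # torch.cuda.empty_cache()), references that survive the fold
        # keep GPU tensors alive, and memory accumulates until OOM.
```

The key detail is that the scheduler holds a reference to the optimizer, which in turn holds references to the model's parameters, so any surviving reference to one of the three can keep the whole fold's GPU tensors alive.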
My question(s) is/are:
Thanks a lot.