Closed: Sean1572 closed this 1 year ago
I still noticed the same memory usage spikes during validation on my machine; we should probably test this first.
There might be an official way to free them without `del`:
https://discuss.pytorch.org/t/how-to-delete-a-tensor-in-gpu-to-free-up-memory/48879/9
We could also call `torch.cuda.empty_cache()`, but `del` alone seems to be enough.
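A minimal sketch of the difference, assuming CUDA is available; `inputs` and `outputs` are hypothetical stand-ins for whatever training tensors are still alive when validation starts:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-ins for training tensors still in scope.
inputs = torch.randn(64, 3, 224, 224, device=device)
outputs = inputs * 2  # stand-in for a forward pass

# `del` drops the Python references, so PyTorch's caching allocator
# can immediately reuse those blocks for validation batches.
del inputs, outputs

# Optional: hand cached blocks back to the driver. Not required to
# avoid OOM within the same process (PyTorch reuses its own cache),
# but it makes the freed memory visible to tools like nvidia-smi.
torch.cuda.empty_cache()
```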
Observed that during the inner validation loop, GPU memory usage worsened, resulting in CUDA out-of-memory errors. This was because tensors from training were still in scope during validation. This change should fix the error by freeing those tensors from memory before the validation run.
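A minimal sketch of the pattern described above, assuming a conventional train-then-validate epoch; `model`, `train_loader`, `val_loader`, `criterion`, and `optimizer` are hypothetical names, not this repo's actual identifiers:

```python
import torch

def run_epoch(model, train_loader, val_loader, criterion, optimizer, device):
    model.train()
    for batch, labels in train_loader:
        batch, labels = batch.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(batch)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

    # Free the last training batch's tensors before validation so they
    # don't hold GPU memory while validation batches are allocated.
    del batch, labels, outputs, loss

    model.eval()
    with torch.no_grad():  # also avoids building autograd graphs during validation
        for batch, labels in val_loader:
            batch, labels = batch.to(device), labels.to(device)
            val_outputs = model(batch)
            val_loss = criterion(val_outputs, labels)
```

Without the `del`, the final training batch's activations and loss stay referenced for the entire validation loop, which is exactly the scope issue the change targets.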