Closed PaulHax closed 1 month ago
This is probably not a leak but an OOM after loading a big dataset; we have to adjust the batching parameter for larger datasets depending on one's hardware.
The "Tried to allocate" size in the out-of-memory error keeps going down (after a successful selection of a smaller set):
Steps
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 983.00 MiB. GPU
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 314.00 MiB. GPU
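One way to avoid hand-tuning the batching parameter per machine is to back off automatically when an allocation fails. This is a minimal sketch of that idea, not the app's actual code: `process_batch` is a hypothetical stand-in for the real selection step, and a plain `MemoryError` substitutes for `torch.cuda.OutOfMemoryError` so the example runs without a GPU. In the real code you would catch `torch.cuda.OutOfMemoryError` and call `torch.cuda.empty_cache()` before retrying.

```python
# Hypothetical sketch: halve the batch size on OOM until the work fits.
# `process_batch` stands in for the real selection/loading step; the
# `memory_limit` threshold fakes the GPU running out of memory.

def process_batch(items, batch_size, memory_limit=8):
    """Toy stand-in: pretend batches larger than `memory_limit` items OOM."""
    if batch_size > memory_limit:
        raise MemoryError(f"Tried to allocate a batch of {batch_size}")
    return [item * 2 for item in items[:batch_size]]

def run_with_backoff(items, batch_size):
    """Retry with half the batch size after each OOM until it fits."""
    while batch_size >= 1:
        try:
            return process_batch(items, batch_size), batch_size
        except MemoryError:
            batch_size //= 2  # back off and retry with a smaller batch
    raise MemoryError("Even a single-item batch does not fit")

result, final_batch = run_with_backoff(list(range(100)), 64)
```

Here the loop shrinks 64 → 32 → 16 → 8 before the batch fits, mirroring how the allocator's usable headroom shrinks between selections in the logs above.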