olegred opened this issue 1 year ago
Same with me:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 5.78 GiB of which 8.12 MiB is free. Including non-PyTorch memory, this process has 5.48 GiB memory in use. Of the allocated memory 5.10 GiB is allocated by PyTorch, and 297.78 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
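In case it helps, here is a minimal sketch of the workaround the error message itself suggests (setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF`). The value `128` is just an example; the variable has to be set before PyTorch initializes the CUDA allocator, so set it in the shell or at the very top of the script:

```python
# Sketch: apply the allocator hint from the error message before CUDA is initialized.
# 128 MiB is an arbitrary example value; tune it for your workload.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import (and first CUDA use) must come after the variable is set

# Optional: compare allocated vs. reserved memory, as reported in the error.
if torch.cuda.is_available():
    print(torch.cuda.memory_allocated() // 2**20, "MiB allocated by PyTorch")
    print(torch.cuda.memory_reserved() // 2**20, "MiB reserved by PyTorch")
```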
Fantastic project. However, after I run it, it keeps my CUDA memory occupied even after I terminate the program. Is there any way to fix this?
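Not the author, but for what it's worth: if the process has truly exited, the driver reclaims its GPU memory on its own, so memory that stays "locked" usually belongs to a process that is still alive (check `nvidia-smi` for lingering PIDs). Inside a live session (e.g. a notebook kernel), something like this sketch releases PyTorch's cached memory back to the driver:

```python
# Sketch: freeing GPU memory from a live Python session without killing it.
import gc
import torch

x = torch.empty(1024, 1024, device="cuda")   # stand-in for whatever object holds CUDA tensors
print(torch.cuda.memory_allocated() // 2**20, "MiB allocated")

del x                      # drop the Python references to the CUDA tensors...
gc.collect()               # ...let Python collect them...
torch.cuda.empty_cache()   # ...and return the cached blocks to the driver
print(torch.cuda.memory_allocated() // 2**20, "MiB allocated after cleanup")
```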