Hi, this work is amazing, but I have run into a strange problem. On my desktop with a 2080 (8 GB) GPU it runs fine. However, when I run it on a server with a 3090 (24 GB) GPU, it crashes with an OOM error. I suspected a CUDA/PyTorch version mismatch, so I made the server environment match the desktop one, but it still fails the same way. Do you have any suggestions?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.77 GiB (GPU 0; 23.70 GiB total capacity; 19.12 GiB already allocated; 3.93 GiB free; 19.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
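The traceback suggests setting `max_split_size_mb` since reserved memory is much larger than allocated memory. Is something like the following the right way to apply that? (A minimal sketch of the allocator hint from the error message; the 128 MB value is just a guess on my part, not something from this repo's docs.)

```python
# Sketch: pass the fragmentation workaround suggested by the traceback to
# PyTorch's caching allocator. The 128 MB cap is an assumed example value.
import os

# Must be set before the CUDA allocator is initialized, i.e. before any
# CUDA work is done, so set it before importing torch to be safe.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
```

The same thing can be done from the shell without touching the code, e.g. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python main.py` (script name is just a placeholder).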